Observations show that the expansion of the universe is accelerating, such that the velocity at which a distant galaxy recedes from the observer is continuously increasing with time. The accelerated expansion of the universe was discovered in 1998 by two independent projects, the Supernova Cosmology Project and the High-Z Supernova Search Team, which used distant Type Ia supernovae to measure the acceleration. The idea was that, since Type Ia supernovae have almost the same intrinsic brightness (a standard candle), and since objects that are further away appear dimmer, the observed brightness of these supernovae can be used to measure the distance to them. The distance can then be compared to the supernovae's cosmological redshift, which measures how much the universe has expanded since the supernova occurred; the Hubble law established that the further away an object is, the faster it is receding. The unexpected result was that objects in the universe are moving away from one another at an accelerating rate. Cosmologists at the time expected that the recession velocity would always be decelerating, due to the gravitational attraction of the matter in the universe. Three members of these two groups have subsequently been awarded Nobel Prizes for their discovery. Confirmatory evidence has been found in baryon acoustic oscillations, and in analyses of the clustering of galaxies. The accelerated expansion of the universe is thought to have begun when the universe entered its dark-energy-dominated era, roughly 5 billion years ago. Within the framework of general relativity, an accelerated expansion can be accounted for by a positive value of the cosmological constant Λ, equivalent to the presence of a positive vacuum energy, dubbed "dark energy". While there are alternative possible explanations, the description assuming dark energy (positive Λ) is used in the standard model of cosmology, which also includes cold dark matter (CDM) and is known as the Lambda-CDM model. == Background == In the decades since the detection of the cosmic microwave background (CMB) in 1965, the Big Bang model has become the most accepted model explaining the evolution of our universe. The Friedmann equation defines how the energy in the universe drives its expansion: $H^{2}=\left(\frac{\dot{a}}{a}\right)^{2}=\frac{8\pi G}{3}\rho -\frac{\kappa c^{2}}{a^{2}},$ where κ represents the curvature of the universe, a(t) is the scale factor, ρ is the total energy density of the universe, and H is the Hubble parameter. The critical density is defined as $\rho_{c}=\frac{3H^{2}}{8\pi G}$ and the density parameter as $\Omega =\frac{\rho}{\rho_{c}}.$ The Hubble parameter can then be rewritten as $H(a)=H_{0}\sqrt{\Omega_{k}a^{-2}+\Omega_{m}a^{-3}+\Omega_{r}a^{-4}+\Omega_{\mathrm{DE}}a^{-3(1+w)}},$ where the four currently hypothesized contributors to the energy density of the universe are curvature, matter, radiation and dark energy. Each of the components decreases with the expansion of the universe (increasing scale factor), except perhaps the dark energy term. It is the values of these cosmological parameters which physicists use to determine the acceleration of the universe.
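To make the role of these parameters concrete, the sketch below evaluates $H(a)$ and the deceleration parameter for a spatially flat Lambda-CDM universe. The parameter values ($H_0 \approx 70$ km/s/Mpc, $\Omega_m \approx 0.3$, $\Omega_\Lambda \approx 0.7$) are illustrative round numbers rather than fitted values from any particular survey, and the function names are invented for this example.

```python
import math

# Illustrative flat Lambda-CDM parameters (round numbers, not survey fits).
H0 = 70.0          # Hubble constant, km/s/Mpc
OMEGA_M = 0.3      # matter density parameter
OMEGA_DE = 0.7     # dark energy density parameter (cosmological constant, w = -1)

def hubble(a):
    """H(a) from the Friedmann equation for a flat universe
    (curvature and radiation terms neglected)."""
    return H0 * math.sqrt(OMEGA_M * a**-3 + OMEGA_DE)

def deceleration(a):
    """Deceleration parameter q(a) = Omega_m(a)/2 - Omega_DE(a) for w = -1."""
    e2 = OMEGA_M * a**-3 + OMEGA_DE            # (H(a)/H0)^2
    return 0.5 * OMEGA_M * a**-3 / e2 - OMEGA_DE / e2

print(hubble(1.0))        # 70.0  (today, a = 1)
print(hubble(0.5))        # ~123  (H was larger when the universe was half its size)
print(deceleration(1.0))  # -0.55: accelerating today
print(deceleration(0.25)) # ~+0.45: decelerating in the matter-dominated era
```

Note that $H(a)$ decreases toward the present even while the expansion accelerates, which is exactly the point made in the technical definition below.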
The acceleration equation describes the evolution of the scale factor with time: $\frac{\ddot{a}}{a}=-\frac{4\pi G}{3}\left(\rho +\frac{3P}{c^{2}}\right),$ where the pressure P is defined by the cosmological model chosen (see the explanatory models below). Physicists at one time were so assured of the deceleration of the universe's expansion that they introduced a so-called deceleration parameter q0. Recent observations indicate this deceleration parameter is negative. === Relation to inflation === According to the theory of cosmic inflation, the very early universe underwent a period of very rapid, quasi-exponential expansion. While the time-scale for this period of expansion was far shorter than that of the present expansion, it was a period of accelerated expansion with some similarities to the current epoch. === Technical definition === The definition of "accelerating expansion" is that the second time derivative of the cosmic scale factor, $\ddot{a}$, is positive, which is equivalent to the deceleration parameter, $q$, being negative. However, note this does not imply that the Hubble parameter is increasing with time. Since the Hubble parameter is defined as $H(t)\equiv \dot{a}(t)/a(t)$, it follows from the definitions that the derivative of the Hubble parameter is given by $\frac{dH}{dt}=-H^{2}(1+q),$ so the Hubble parameter is decreasing with time unless $q<-1$. Observations prefer $q\approx -0.55$, which implies that $\ddot{a}$ is positive but $dH/dt$ is negative. Essentially, this implies that the cosmic recession velocity of any one particular galaxy is increasing with time, but its velocity/distance ratio is still decreasing; thus different galaxies expanding across a sphere of fixed radius cross the sphere more slowly at later times. It is seen from the above that the case of "zero acceleration/deceleration" corresponds to $a(t)$ being a linear function of $t$, with $q=0$, $\dot{a}=\mathrm{const}$, and $H(t)=1/t$. == Evidence for acceleration == The rate of expansion of the universe can be analyzed using the magnitude–redshift relationship of astronomical objects using standard candles, or their distance–redshift relationship using standard rulers. Another line of evidence is the growth of large-scale structure: the observed values of the cosmological parameters are best described by models which include an accelerating expansion. === Supernova observation === In 1998, the first evidence for acceleration came from the observation of Type Ia supernovae, which are exploding white dwarf stars that have exceeded their stability limit. Because they all have similar masses, their intrinsic luminosity can be standardized. Repeated imaging of selected areas of the sky is used to discover the supernovae, and follow-up observations give their peak brightness, which is converted into a quantity known as luminosity distance (see distance measures in cosmology for details). Spectral lines of their light can be used to determine their redshift.
For supernovae at redshift less than around 0.1, or light travel time less than 10 percent of the age of the universe, this gives a nearly linear distance–redshift relation due to Hubble's law. At larger distances, since the expansion rate of the universe has changed over time, the distance–redshift relation deviates from linearity, and this deviation depends on how the expansion rate has changed over time. The full calculation requires computer integration of the Friedmann equation, but a simple derivation can be given as follows: the redshift z directly gives the cosmic scale factor at the time the supernova exploded: $a(t)=\frac{1}{1+z}.$ So a supernova with a measured redshift z = 0.5 implies the universe was 1/(1 + 0.5) = 2/3 of its present size when the supernova exploded. In the case of accelerated expansion, $\ddot{a}$ is positive; therefore, $\dot{a}$ was smaller in the past than today. Thus, an accelerating universe took a longer time to expand from 2/3 to 1 times its present size, compared to a non-accelerating universe with constant $\dot{a}$ and the same present-day value of the Hubble constant. This results in a larger light-travel time, larger distance and fainter supernovae, which corresponds to the actual observations. Adam Riess et al. found that "the distances of the high-redshift SNe Ia were, on average, 10% to 15% further than expected in a low mass density ΩM = 0.2 universe without a cosmological constant". This means that the measured high-redshift distances were too large, compared to nearby ones, for a decelerating universe. Several researchers have questioned the majority opinion on the acceleration or the assumption of the "cosmological principle" (that the universe is homogeneous and isotropic). For example, a 2019 paper analyzed the Joint Light-curve Analysis catalogue of Type Ia supernovae, containing ten times as many supernovae as were used in the 1998 analyses, and concluded that there was little evidence for a "monopole", that is, for an isotropic acceleration in all directions. See also the section on alternative theories below. === Baryon acoustic oscillations === In the early universe, before recombination and decoupling took place, photons and matter existed in a primordial plasma. Points of higher density in the photon–baryon plasma would contract, being compressed by gravity until the pressure became too large and they expanded again. This contraction and expansion created vibrations in the plasma analogous to sound waves. Since dark matter interacts only gravitationally, it stayed at the centre of the sound wave, the origin of the original overdensity. When decoupling occurred, approximately 380,000 years after the Big Bang, photons separated from matter and were able to stream freely through the universe, creating the cosmic microwave background as we know it. This left shells of baryonic matter at a fixed radius from the overdensities of dark matter, a distance known as the sound horizon. As time passed and the universe expanded, it was at these inhomogeneities of matter density that galaxies started to form. So by looking at the distances at which galaxies at different redshifts tend to cluster, it is possible to determine a standard angular diameter distance and use that to compare to the distances predicted by different cosmological models.
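To make that model comparison concrete, the sketch below integrates the Friedmann equation for the luminosity distance, $d_L(z) = (1+z)\,c\int_0^z dz'/H(z')$ (valid for a flat universe), and compares an accelerating model with a decelerating matter-only one. The parameter values are the same illustrative round numbers as above, not a fit to any survey.

```python
import math

C_KM_S = 299792.458   # speed of light, km/s
H0 = 70.0             # km/s/Mpc, illustrative

def luminosity_distance(z, omega_m, omega_de, steps=10000):
    """d_L = (1+z) * c * integral_0^z dz'/H(z'), flat universe (Mpc),
    by midpoint integration; radiation and curvature neglected."""
    dz = z / steps
    comoving = 0.0
    for i in range(steps):
        zp = (i + 0.5) * dz
        hz = H0 * math.sqrt(omega_m * (1 + zp)**3 + omega_de)
        comoving += C_KM_S / hz * dz
    return (1 + z) * comoving

accel = luminosity_distance(0.5, 0.3, 0.7)   # accelerating Lambda-CDM model
decel = luminosity_distance(0.5, 1.0, 0.0)   # matter-only, decelerating
print(accel, decel, accel / decel - 1)
# ~2830 Mpc vs ~2360 Mpc: at z = 0.5 the accelerating model puts the
# supernova ~20% farther away, i.e. noticeably fainter.
```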
Peaks have been found in the correlation function (the probability that two galaxies will be a certain distance apart) at 100 h−1 Mpc (where h is the dimensionless Hubble constant), indicating that this is the size of the sound horizon today; by comparing this to the sound horizon at the time of decoupling (using the CMB), we can confirm the accelerated expansion of the universe. === Clusters of galaxies === Measuring the mass functions of galaxy clusters, which describe the number density of the clusters above a threshold mass, also provides evidence for dark energy. By comparing these mass functions at high and low redshifts to those predicted by different cosmological models, values for w and Ωm are obtained which confirm a low matter density and a non-zero amount of dark energy. === Age of the universe === Given a cosmological model with certain values of the cosmological density parameters, it is possible to integrate the Friedmann equations and derive the age of the universe: $t_{0}=\int_{0}^{1}\frac{da}{\dot{a}}.$ By comparing this to actual measured values of the cosmological parameters, we can confirm the validity of a model which is accelerating now and had a slower expansion in the past. === Gravitational waves as standard sirens === Recent discoveries of gravitational waves by LIGO and Virgo not only confirmed Einstein's predictions but also opened a new window into the universe. These gravitational waves can serve as a sort of standard siren to measure the expansion rate of the universe. Abbott et al. (2017) measured the Hubble constant to be approximately 70 kilometres per second per megaparsec. The amplitude of the strain h depends on the masses of the objects causing the waves, their distance from the observation point, and the detection frequency of the gravitational waves. The associated distance measures depend on cosmological parameters such as the Hubble constant for nearby objects, and on further cosmological parameters, such as the dark energy density and matter density, for distant sources. == Explanatory models == === Dark energy === The most important property of dark energy is that it has negative pressure (repulsive action) and is distributed relatively homogeneously in space: $P=wc^{2}\rho,$ where c is the speed of light and ρ is the density. Different theories of dark energy suggest different values of w, with w < −1/3 required for cosmic acceleration (this leads to a positive value of ä in the acceleration equation above). The simplest explanation for dark energy is that it is a cosmological constant or vacuum energy; in this case w = −1. This leads to the Lambda-CDM model, which has generally been known as the Standard Model of Cosmology from 2003 through the present, since it is the simplest model in good agreement with a variety of recent observations. Riess et al. found that their results from supernova observations favoured expanding models with a positive cosmological constant (ΩΛ > 0) and an accelerated expansion (q0 < 0). === Phantom energy === These observations allow the possibility of a cosmological model containing a dark energy component with equation of state w < −1. This phantom energy density would become infinite in finite time, causing such a huge gravitational repulsion that the universe would lose all structure and end in a Big Rip.
For example, for w = −3/2 and H0 = 70 km·s−1·Mpc−1, the time remaining before the universe ends in this Big Rip is 22 billion years. === Alternative theories === There are many alternative explanations for the accelerating universe. One example is quintessence, a proposed form of dark energy with a non-constant equation of state, whose density decreases with time. A negative-mass cosmology does not assume that the mass density of the universe is positive (as is done in supernova observations), and instead finds a negative cosmological constant; Occam's razor also suggests that this is the "more parsimonious hypothesis". Dark fluid is an alternative explanation for accelerating expansion which attempts to unite dark matter and dark energy into a single framework. Alternatively, some authors have argued that the accelerated expansion of the universe could be due to a repulsive gravitational interaction of antimatter, or to a deviation of the gravitational laws from general relativity, such as massive gravity, meaning that gravitons themselves have mass. The measurement of the speed of gravity with the gravitational wave event GW170817 ruled out many modified gravity theories as alternative explanations to dark energy. Another type of model, the backreaction conjecture, was proposed by cosmologist Syksy Räsänen: the rate of expansion is not homogeneous, but Earth is in a region where expansion is faster than the background. Inhomogeneities in the early universe cause the formation of walls and bubbles, where the inside of a bubble has less matter than on average. According to general relativity, space inside a bubble is less curved than on the walls, and thus the bubble appears to have more volume and a higher expansion rate. In the denser regions, the expansion is slowed by a higher gravitational attraction. Therefore, the inward collapse of the denser regions looks the same as an accelerating expansion of the bubbles, leading us to conclude that the universe is undergoing an accelerated expansion. The benefit is that it does not require any new physics such as dark energy. Räsänen does not consider the model likely, but without any falsification, it must remain a possibility. It would require rather large density fluctuations (20%) to work. Shockwave cosmology, proposed by Joel Smoller and Blake Temple in 2003, has the “big bang” as an explosion inside a black hole, producing the expanding volume of space and matter that includes the observable universe. A related theory by Smoller, Temple, and Vogler proposes that this shockwave may have resulted in our part of the universe having a lower density than that surrounding it, causing the accelerated expansion normally attributed to dark energy. They also propose that this related theory could be tested: a universe with dark energy should give a figure for the cubic correction to redshift versus luminosity of C = −0.180 at a = a, whereas for Smoller, Temple, and Vogler's alternative C should be positive rather than negative. They give a more precise calculation for their shockwave model alternative: the cubic correction to redshift versus luminosity at a = a is C = 0.359. Although shockwave cosmology produces a universe that "looks essentially identical to the aftermath of the big bang", cosmologists consider that it needs further development before it could be considered a more advantageous model than the big bang theory (or standard model) in explaining the universe.
In particular, and especially for the proposed alternative to dark energy, it would need to explain big bang nucleosynthesis, the quantitative details of the microwave background anisotropies, the Lyman-alpha forest, and galaxy surveys. A final possibility is that dark energy is an illusion caused by some bias in measurements. For example, if we are located in an emptier-than-average region of space, the observed cosmic expansion rate could be mistaken for a variation in time, or acceleration. A different approach uses a cosmological extension of the equivalence principle to show how space might appear to be expanding more rapidly in the voids surrounding our local cluster. While weak, such effects considered cumulatively over billions of years could become significant, creating the illusion of cosmic acceleration and making it appear as if we live in a Hubble bubble. Yet other possibilities are that the accelerated expansion of the universe is an illusion caused by our motion relative to the rest of the universe, or that the supernova sample size used was not large enough. == Consequences for the universe == As the universe expands, the density of radiation and of ordinary and dark matter declines more quickly than the density of dark energy (see equation of state) and, eventually, dark energy dominates. Specifically, when the scale of the universe doubles, the density of matter is reduced by a factor of 8, but the density of dark energy is nearly unchanged (it is exactly constant if the dark energy is the cosmological constant). In models where dark energy is the cosmological constant, the universe will expand exponentially with time in the far future, coming closer and closer to a de Sitter universe. This will eventually lead to all evidence for the Big Bang disappearing, as the cosmic microwave background is redshifted to lower intensities and longer wavelengths. Eventually, its frequency will be low enough that it will be absorbed by the interstellar medium, and so be screened from any observer within the galaxy. This will occur when the universe is less than 50 times its current age, leading to the end of any life as the distant universe turns dark. A constantly expanding universe with a non-zero cosmological constant has mass density decreasing over time. In such a scenario, it is understood that all matter will ionize and disintegrate into isolated stable particles such as electrons and neutrinos, with all complex structures dissipating. This is called the "heat death of the universe" (or the Big Freeze). Alternatives for the ultimate fate of the universe include the Big Rip mentioned above, a Big Bounce, or a Big Crunch. == See also == == Notes == == References ==
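Before leaving this article: the age integral and the Big Rip timescale quoted above are both easy to check numerically. Below is a minimal sketch using the same illustrative flat Lambda-CDM parameters as before; the Big Rip estimate uses the approximation given by Caldwell, Kamionkowski and Weinberg (2003), and none of the values here are precision fits.

```python
import math

H0 = 70.0                         # km/s/Mpc, illustrative
OMEGA_M, OMEGA_DE = 0.3, 0.7
# 1/H0 in years: 1 Mpc = 3.0857e19 km, 1 year ~ 3.1557e7 s
HUBBLE_TIME_YR = (3.0857e19 / H0) / 3.1557e7   # ~14 billion years

def age_of_universe(steps=100000):
    """t0 = integral_0^1 da / (a * H(a)), midpoint rule."""
    total, da = 0.0, 1.0 / steps
    for i in range(steps):
        a = (i + 0.5) * da
        e = math.sqrt(OMEGA_M * a**-3 + OMEGA_DE)   # H(a)/H0
        total += da / (a * e)
    return total * HUBBLE_TIME_YR

def big_rip_time(w=-1.5):
    """Time to the Big Rip for phantom energy (w < -1), using
    t_rip - t0 ~ (2/3)|1+w|^-1 H0^-1 (1 - Omega_m)^-1/2."""
    return (2.0 / 3.0) / abs(1.0 + w) / math.sqrt(1.0 - OMEGA_M) * HUBBLE_TIME_YR

print(age_of_universe() / 1e9)   # ~13.5 billion years for these parameters
print(big_rip_time() / 1e9)      # ~22 billion years, matching the figure above
```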
Wikipedia/Accelerated_expansion
Terrestrial Time (TT) is a modern astronomical time standard defined by the International Astronomical Union, primarily for time-measurements of astronomical observations made from the surface of Earth. For example, the Astronomical Almanac uses TT for its tables of positions (ephemerides) of the Sun, Moon and planets as seen from Earth. In this role, TT continues Terrestrial Dynamical Time (TDT or TD), which succeeded ephemeris time (ET). TT shares the original purpose for which ET was designed: to be free of the irregularities in the rotation of Earth. The unit of TT is the SI second, the definition of which is currently based on the caesium atomic clock, but TT is not itself defined by atomic clocks. It is a theoretical ideal, and real clocks can only approximate it. TT is distinct from the time scale often used as a basis for civil purposes, Coordinated Universal Time (UTC). TT is indirectly the basis of UTC, via International Atomic Time (TAI). Because of the historical difference between TAI and ET when TT was introduced, TT is 32.184 s ahead of TAI. == History == A definition of a terrestrial time standard was adopted by the International Astronomical Union (IAU) in 1976 at its XVI General Assembly and later named Terrestrial Dynamical Time (TDT). It was the counterpart to Barycentric Dynamical Time (TDB), which was a time standard for Solar System ephemerides, to be based on a dynamical time scale. Both of these time standards turned out to be imperfectly defined. Doubts were also expressed about the meaning of 'dynamical' in the name TDT. In 1991, in Recommendation IV of the XXI General Assembly, the IAU redefined TDT, also renaming it "Terrestrial Time". TT was formally defined in terms of Geocentric Coordinate Time (TCG), defined by the IAU on the same occasion. TT was defined to be a linear scaling of TCG, such that the unit of TT is the "SI second on the geoid", i.e. its rate approximately matched the rate of proper time on the Earth's surface at mean sea level. Thus the exact ratio between TT time and TCG time was $1-L_{\mathrm{G}}$, where $L_{\mathrm{G}}=U_{\mathrm{G}}/c^{2}$ was a constant and $U_{\mathrm{G}}$ was the gravitational potential at the geoid surface, a value measured by physical geodesy. In 1991 the best available estimate of $L_{\mathrm{G}}$ was 6.969291×10−10. In 2000, the IAU very slightly altered the definition of TT by adopting an exact value, LG = 6.969290134×10−10. == Current definition == TT differs from Geocentric Coordinate Time (TCG) by a constant rate. Formally it is defined by the equation $\mathrm{TT}=(1-L_{\mathrm{G}})\times \mathrm{TCG}+E,$ where TT and TCG are linear counts of SI seconds in Terrestrial Time and Geocentric Coordinate Time respectively, $L_{\mathrm{G}}$ is the constant difference in the rates of the two time scales, and $E$ is a constant to resolve the epochs (see below). $L_{\mathrm{G}}$ is defined as exactly 6.969290134×10−10. Due to the factor $1-L_{\mathrm{G}}$, the rate of TT is very slightly slower than that of TCG.
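A quick numeric check of what this rate difference means in practice, using only the defining constant (a sketch, nothing more):

```python
# How fast do TT and TCG drift apart? Back-of-the-envelope check using
# only the defining constant L_G (exact by the IAU 2000 definition).
L_G = 6.969290134e-10

seconds_per_year = 86400 * 365.25
drift_per_year = L_G * seconds_per_year   # TCG gains this much on TT per year
print(drift_per_year * 1e3, "ms/year")    # ~22 ms per year
```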
The equation linking TT and TCG more commonly has the form given by the IAU, $\mathrm{TT}=\mathrm{TCG}-L_{\mathrm{G}}\times (\mathrm{JD_{TCG}}-2443144.5003725)\times 86400,$ where $\mathrm{JD_{TCG}}$ is the TCG time expressed as a Julian date (JD). The Julian Date is a linear transformation of the raw count of seconds represented by the variable TCG, so this form of the equation is not simplified. The use of a Julian Date specifies the epoch fully. The above equation is often given with the Julian Date 2443144.5 for the epoch, but that is inexact (though inappreciably so, because of the small size of the multiplier $L_{\mathrm{G}}$). The value 2443144.5003725 is exactly in accord with the definition. Time coordinates on the TT and TCG scales are specified conventionally using traditional means of specifying days, inherited from non-uniform time standards based on the rotation of Earth. Specifically, both Julian Dates and the Gregorian calendar are used. For continuity with their predecessor Ephemeris Time (ET), TT and TCG were set to match ET at around Julian Date 2443144.5 (1977-01-01T00Z). More precisely, it was defined that TT instant 1977-01-01T00:00:32.184 and TCG instant 1977-01-01T00:00:32.184 exactly correspond to the International Atomic Time (TAI) instant 1977-01-01T00:00:00.000. This is also the instant at which TAI introduced corrections for gravitational time dilation. TT and TCG expressed as Julian Dates can be related precisely and most simply by the equation $\mathrm{JD_{TT}}=E_{\mathrm{JD}}+(\mathrm{JD_{TCG}}-E_{\mathrm{JD}})\times (1-L_{\mathrm{G}}),$ where $E_{\mathrm{JD}}$ is 2443144.5003725 exactly. == Realizations == TT is a theoretical ideal, not dependent on a particular realization. For practical use, physical clocks must be measured and their readings processed to estimate TT. A simple offset calculation is sufficient for most applications, but in demanding applications, detailed modeling of relativistic physics and measurement uncertainties may be needed. === TAI === The main realization of TT is supplied by TAI. The BIPM TAI service, performed since 1958, estimates TT using measurements from an ensemble of atomic clocks spread over the surface and low orbital space of Earth. TAI is canonically defined retrospectively, in monthly bulletins, in relation to the readings shown by that particular group of atomic clocks at the time. Estimates of TAI are also provided in real time by the institutions that operate the participating clocks. Because of the historical difference between TAI and ET when TT was introduced, the TAI realization of TT is defined thus: $\mathrm{TT(TAI)}=\mathrm{TAI}+32.184~\mathrm{s}.$ The offset 32.184 s arises from history. The atomic time scale A1 (a predecessor of TAI) was set equal to UT2 at its conventional starting date of 1 January 1958, when ΔT (ET − UT) was about 32 seconds. The offset 32.184 seconds was the 1976 estimate of the difference between Ephemeris Time (ET) and TAI, "to provide continuity with the current values and practice in the use of Ephemeris Time". TAI is never revised once published, and TT(TAI) has small errors relative to TT(BIPM), on the order of 10-50 microseconds.
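The Julian-date relation and the TAI offset above are simple enough to check in a few lines. The sketch below is an illustration of the published formulas, not a substitute for a proper time-scale library, and the function names are invented for the example:

```python
L_G = 6.969290134e-10      # exact, by the IAU 2000 definition
E_JD = 2443144.5003725     # epoch as a Julian date, exact

def jd_tt_from_jd_tcg(jd_tcg):
    """JD_TT = E_JD + (JD_TCG - E_JD) * (1 - L_G)."""
    return E_JD + (jd_tcg - E_JD) * (1.0 - L_G)

def tt_from_tai(tai_seconds):
    """TT(TAI) = TAI + 32.184 s, applied to a count of SI seconds."""
    return tai_seconds + 32.184

# One Julian century after the epoch, TT lags TCG by roughly 22 ms/yr x 100 yr:
jd_tcg = E_JD + 36525.0
print((jd_tcg - jd_tt_from_jd_tcg(jd_tcg)) * 86400, "s")  # ~2.2 s
```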
The GPS time scale has a nominal difference from atomic time (TAI − GPS time = +19 seconds), so that TT ≈ GPS time + 51.184 seconds. This realization introduces up to a microsecond of additional error, as the GPS signal is not precisely synchronized with TAI, but GPS receiving devices are widely available. === TT(BIPM) === Approximately annually since 1992, the International Bureau of Weights and Measures (BIPM) has produced better realizations of TT based on reanalysis of historical TAI data. BIPM's realizations of TT are named in the form "TT(BIPM08)", with the digits indicating the year of publication. They are published in the form of a table of differences from TT(TAI), along with an extrapolation equation that may be used for dates later than the table. The latest as of July 2024 is TT(BIPM23). === Pulsars === Researchers from the International Pulsar Timing Array collaboration have created a realization TT(IPTA16) of TT based on observations of an ensemble of pulsars up to 2012. This new pulsar time scale is an independent means of computing TT. The researchers observed that their scale was within 0.5 microseconds of TT(BIPM17), with significantly lower errors since 2003. The data used was insufficient to analyze long-term stability, and contained several anomalies, but as more data is collected and analyzed, this realization may eventually be useful to identify defects in TAI and TT(BIPM). === Other standards === TT is in effect a continuation of (but is more precisely uniform than) the former Ephemeris Time (ET). It was designed for continuity with ET, and it runs at the rate of the SI second, which was itself derived from a calibration using the second of ET (see, under Ephemeris time, Redefinition of the second and Implementations). The JPL ephemeris time argument Teph is within a few milliseconds of TT. TT is slightly ahead of UT1 (a refined measure of mean solar time at Greenwich) by an amount known as ΔT = TT − UT1. ΔT was measured at +67.6439 seconds (TT ahead of UT1) at 0 h UTC on 1 January 2015; and by retrospective calculation, ΔT was close to zero about the year 1900. ΔT is expected to continue to increase, with UT1 becoming steadily (but irregularly) further behind TT in the future. In fine detail, ΔT is somewhat unpredictable, with 10-year extrapolations diverging by 2-3 seconds from the actual value. == Relativistic relationships == Observers in different locations, that are in relative motion or at different altitudes, can disagree about the rates of each other's clocks, owing to effects described by the theory of relativity. As a result, TT (even as a theoretical ideal) does not match the proper time of all observers. In relativistic terms, TT is described as the proper time of a clock located on the geoid (essentially mean sea level). However, TT is now actually defined as a coordinate time scale. The redefinition did not quantitatively change TT, but rather made the existing definition more precise. In effect it defined the geoid (mean sea level) in terms of a particular level of gravitational time dilation relative to a notional observer located at infinitely high altitude. The present definition of TT is a linear scaling of Geocentric Coordinate Time (TCG), which is the proper time of a notional observer who is infinitely far away (so not affected by gravitational time dilation) and at rest relative to Earth. TCG is used to date mainly for theoretical purposes in astronomy. 
From the point of view of an observer on Earth's surface the second of TCG passes in slightly less than the observer's SI second. The comparison of the observer's clock against TT depends on the observer's altitude: they will match on the geoid, and clocks at higher altitude tick slightly faster. == See also == Barycentric Coordinate Time Geocentric Coordinate Time == References == == External links == BIPM technical services: Time Metrology Time and Frequency from A to Z
Wikipedia/Terrestrial_Time
The Einstein–Infeld–Hoffmann equations of motion, jointly derived by Albert Einstein, Leopold Infeld and Banesh Hoffmann, are the differential equations describing the approximate dynamics of a system of point-like masses due to their mutual gravitational interactions, including general relativistic effects. They use a first-order post-Newtonian expansion and thus are valid in the limit where the velocities of the bodies are small compared to the speed of light and where the gravitational fields affecting them are correspondingly weak. Given a system of N bodies, labelled by indices A = 1, ..., N, the barycentric acceleration vector of body A is given by: $\begin{aligned}\vec{a}_{A}&=\sum_{B\neq A}\frac{Gm_{B}\vec{n}_{BA}}{r_{AB}^{2}}\\&\quad +\frac{1}{c^{2}}\sum_{B\neq A}\frac{Gm_{B}\vec{n}_{BA}}{r_{AB}^{2}}\left[v_{A}^{2}+2v_{B}^{2}-4(\vec{v}_{A}\cdot \vec{v}_{B})-\frac{3}{2}(\vec{n}_{AB}\cdot \vec{v}_{B})^{2}-4\sum_{C\neq A}\frac{Gm_{C}}{r_{AC}}-\sum_{C\neq B}\frac{Gm_{C}}{r_{BC}}+\frac{1}{2}\bigl((\vec{x}_{B}-\vec{x}_{A})\cdot \vec{a}_{B}\bigr)\right]\\&\quad +\frac{1}{c^{2}}\sum_{B\neq A}\frac{Gm_{B}}{r_{AB}^{2}}\bigl[\vec{n}_{AB}\cdot (4\vec{v}_{A}-3\vec{v}_{B})\bigr](\vec{v}_{A}-\vec{v}_{B})+\frac{7}{2c^{2}}\sum_{B\neq A}\frac{Gm_{B}\vec{a}_{B}}{r_{AB}}+O(c^{-4})\end{aligned}$ where: $\vec{x}_{A}$ is the barycentric position vector of body A; $\vec{v}_{A}=d\vec{x}_{A}/dt$ is the barycentric velocity vector of body A; $\vec{a}_{A}=d^{2}\vec{x}_{A}/dt^{2}$ is the barycentric acceleration vector of body A; $r_{AB}=|\vec{x}_{A}-\vec{x}_{B}|$ is the coordinate distance between bodies A and B; $\vec{n}_{AB}=(\vec{x}_{A}-\vec{x}_{B})/r_{AB}$ is the unit vector pointing from body B to body A; $m_{A}$ is the mass of body A; $c$ is the speed of light; $G$ is the gravitational constant; and the big O notation is used to indicate that terms of order c−4 or beyond have been omitted. The coordinates used here are harmonic. The first term on the right hand side is the Newtonian gravitational acceleration at A; in the limit as c → ∞, one recovers Newton's law of motion. The acceleration of a particular body depends on the accelerations of all the other bodies. Since the quantity on the left hand side also appears on the right hand side, this system of equations must be solved iteratively. In practice, using the Newtonian acceleration instead of the true acceleration provides sufficient accuracy. == References == == Further reading == Einstein, A.; Infeld, L.; Hoffmann, B. (1938). "The Gravitational Equations and the Problem of Motion". Annals of Mathematics. Second series. 39 (1): 65–100. Bibcode:1938AnMat..39...65E. doi:10.2307/1968714. JSTOR 1968714. Kovalevsky, Jean; Seidelmann, P. Kenneth (2004). Fundamentals of Astrometry.
New York: Cambridge University Press. p. 173. ISBN 0521642167. Landau, Lev; Lifshitz, Evgeny (1971). The classical theory of fields. Oxford: Pergamon Press. p. 337.
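To illustrate how the equation above is evaluated in practice, here is a minimal numerical sketch. It implements the acceleration formula term by term, substituting Newtonian accelerations for the $\vec{a}_B$ terms on the right-hand side, which is the iteration shortcut the text describes as sufficient in practice. The constants and the Sun/Mercury-like test values are illustrative, and the function names are invented for this example; this is not code from any published ephemeris.

```python
import math

G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
C2 = 299792458.0**2    # speed of light squared, m^2/s^2

def dot(u, v):
    return u[0]*v[0] + u[1]*v[1] + u[2]*v[2]

def dist(p, q):
    return math.sqrt(sum((a - b)**2 for a, b in zip(p, q)))

def newtonian_accels(m, x):
    """Newtonian accelerations of all bodies; these stand in for the
    a_B terms on the right-hand side."""
    n = len(m)
    acc = [[0.0, 0.0, 0.0] for _ in range(n)]
    for A in range(n):
        for B in range(n):
            if B == A:
                continue
            r = dist(x[A], x[B])
            for k in range(3):
                acc[A][k] += G * m[B] * (x[B][k] - x[A][k]) / r**3
    return acc

def eih_accel(A, m, x, v):
    """First-order post-Newtonian (EIH) barycentric acceleration of body A,
    following the equation above term by term."""
    n = len(m)
    aN = newtonian_accels(m, x)
    # Newtonian potential sums, Sum_{C != D} G m_C / r_DC, for each body D
    pot = [sum(G * m[Cc] / dist(x[D], x[Cc]) for Cc in range(n) if Cc != D)
           for D in range(n)]
    out = [0.0, 0.0, 0.0]
    for B in range(n):
        if B == A:
            continue
        r = dist(x[A], x[B])
        n_AB = [(x[A][k] - x[B][k]) / r for k in range(3)]  # unit vector, B to A
        # square bracket of the second (1PN) term
        br = (dot(v[A], v[A]) + 2.0 * dot(v[B], v[B]) - 4.0 * dot(v[A], v[B])
              - 1.5 * dot(n_AB, v[B])**2
              - 4.0 * pot[A] - pot[B]
              + 0.5 * sum((x[B][k] - x[A][k]) * aN[B][k] for k in range(3)))
        s = dot(n_AB, [4.0 * v[A][k] - 3.0 * v[B][k] for k in range(3)])
        for k in range(3):
            newt = -G * m[B] * n_AB[k] / r**2            # G m_B n_BA / r^2
            out[k] += newt * (1.0 + br / C2)             # terms 1 and 2
            out[k] += G * m[B] / (r**2 * C2) * s * (v[A][k] - v[B][k])  # term 3
            out[k] += 3.5 * G * m[B] * aN[B][k] / (r * C2)              # term 4
    return out

# Toy Sun + Mercury-like test (illustrative round values, not ephemeris data):
m = [1.989e30, 3.301e23]
x = [[0.0, 0.0, 0.0], [5.79e10, 0.0, 0.0]]
v = [[0.0, 0.0, 0.0], [0.0, 4.787e4, 0.0]]
print(eih_accel(1, m, x, v))   # differs slightly from the pure Newtonian value
```

The 1PN correction is tiny here (of order v²/c² relative to the Newtonian term), but accumulated over many orbits it produces effects such as Mercury's perihelion precession.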
Wikipedia/Einstein–Infeld–Hoffmann_equations
In general relativity, the Oppenheimer–Snyder model is a solution to the Einstein field equations based on the Schwarzschild metric describing the collapse of an object of extreme mass into a black hole. It is named after physicists J. Robert Oppenheimer and Hartland Snyder, who published it in 1939. During the collapse of a star to a black hole, the geometry on the outside of the sphere is the Schwarzschild geometry. However, the geometry inside is, curiously enough, the same Robertson–Walker geometry as in the rest of the observable universe. == History == Albert Einstein, who had developed his theory of general relativity in 1915, initially denied the possibility of black holes, even though they were a genuine implication of the Schwarzschild metric, obtained by Karl Schwarzschild in 1916, the first known non-trivial exact solution to Einstein's field equations. In 1939, Einstein published "On a Stationary System with Spherical Symmetry Consisting of Many Gravitating Masses" in the Annals of Mathematics, claiming to provide "a clear understanding as to why these 'Schwarzschild singularities' do not exist in physical reality." Months after the issuing of Einstein's article, J. Robert Oppenheimer and his student Hartland Snyder studied this topic with their paper "On Continued Gravitational Contraction", making the opposite argument to Einstein's. They showed that when a sufficiently massive star runs out of thermonuclear fuel, it will undergo continued gravitational contraction and become separated from the rest of the universe by a boundary called the event horizon, which not even light can escape. This paper predicted the existence of what are today known as black holes. The term "black hole" was coined decades later, in the fall of 1967, by John Archibald Wheeler at a conference held by the Goddard Institute for Space Studies in New York City; it appeared for the first time in print the following year. Oppenheimer and Snyder used Einstein's own theory of gravity to show, for the first time in contemporary physics, how black holes could develop, but without referencing the aforementioned article by Einstein. Oppenheimer and Snyder did, however, refer to an earlier article by Oppenheimer and Volkoff on neutron stars, improving upon the work of Lev Davidovich Landau. Previously, and in the same year, Oppenheimer and three colleagues, Richard Tolman, Robert Serber, and George Volkoff, had investigated the stability of neutron stars, obtaining the Tolman–Oppenheimer–Volkoff limit. Oppenheimer would not revisit the topic in future publications. == Model == The Oppenheimer–Snyder model of continued gravitational collapse is described by the line element $ds^{2}=-d\tau ^{2}+A^{2}(\eta )\left(\frac{dR^{2}}{1-\frac{2MR_{-}^{2}}{R_{b}^{2}R_{+}}}+R^{2}d\Omega ^{2}\right).$ The quantities appearing in this expression are as follows: The coordinates are $(\tau ,R,\theta ,\phi )$, where $\theta ,\phi$ are coordinates for the 2-sphere. $R_{b}$ is a positive quantity, the "boundary radius", representing the boundary of the matter region. $M$ is a positive quantity, the mass. $R_{-}=\min(R,R_{b})$ and $R_{+}=\max(R,R_{b})$.
$\eta$ is defined implicitly by the equation $\tau (\eta ,R)=\frac{1}{2}\sqrt{\frac{R_{+}^{3}}{2M}}\,(\eta +\sin \eta ),$ and $A(\eta )=\frac{1+\cos \eta }{2}.$ This expression is valid both in the matter region $R<R_{b}$ and in the vacuum region $R>R_{b}$, and continuously transitions between the two. == Reception and legacy == Kip Thorne recalled that physicists were initially skeptical of the model, viewing it as "truly strange" at the time. He explained further, "It was hard for people of that era to understand the paper because the things that were being smoked out of the mathematics were so different from any mental picture of how things should behave in the universe." Oppenheimer himself thought little of this discovery. However, some considered the model's discovery to be more significant than Oppenheimer did, and the model would later be described as forward-thinking. Freeman Dyson thought it was Oppenheimer's greatest contribution to science. Lev Davidovich Landau added the Oppenheimer–Snyder paper to his "golden list" of classic papers. John Archibald Wheeler was initially an opponent of the model until the late 1950s, when he was asked to teach a course on general relativity at Princeton University. Wheeler claimed at a conference in 1958 that the Oppenheimer–Snyder model had neglected many features of a realistic star. However, he later changed his mind completely after being informed by Edward Teller that a computer simulation run by Stirling Colgate and his team at the Lawrence Livermore National Laboratory had shown that a sufficiently heavy star would undergo continued gravitational contraction in a manner similar to the idealized scenario described by Oppenheimer and Snyder. Wheeler subsequently played a key role in reviving interest in general relativity in the United States, and popularized the term "black hole" in the late 1960s. Various theoretical physicists pursued this topic, and by the late 1960s and early 1970s, advances in observational astronomy, such as radio telescopes, changed the attitude of the scientific community. Pulsars had already been discovered and black holes were no longer considered mere textbook curiosities. Cygnus X-1, the first solid black-hole candidate, was discovered by the Uhuru X-ray space telescope in 1971. Jeremy Bernstein described it as "one of the great papers in twentieth-century physics." After winning the Nobel Prize in Physics in 2020, Roger Penrose credited the Oppenheimer–Snyder model as one of his inspirations for research. The Hindu wrote in 2023: The world of physics does indeed remember the paper. While Oppenheimer is remembered in history as the “father of the atomic bomb”, his greatest contribution as a physicist was on the physics of black holes. The work of Oppenheimer and Hartland Snyder helped transform black holes from figments of mathematics to real, physical possibilities – something to be found in the cosmos out there. == In popular culture == In the 2023 film Oppenheimer, an interaction between Oppenheimer and his student Snyder occurs as their paper was published on the same day as the Invasion of Poland. == See also == Tolman–Oppenheimer–Volkoff equation Timeline of gravitational physics and relativity == References ==
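Because $\eta$ is only defined implicitly, extracting it for a given $(\tau, R)$ requires a numerical root-find. The sketch below does this by bisection, in geometric units (G = c = 1) with invented illustrative values of M and R_b; it is a toy illustration of the implicit relation above, not code from the original paper.

```python
import math

def eta_from_tau(tau, R, M, R_b):
    """Solve tau = 0.5 * sqrt(R_plus^3 / (2M)) * (eta + sin(eta)) for eta
    by bisection. Since d/d_eta (eta + sin eta) = 1 + cos(eta) >= 0, the
    left side is monotonic and bisection on [0, pi] is safe."""
    R_plus = max(R, R_b)
    k = 0.5 * math.sqrt(R_plus**3 / (2.0 * M))
    lo, hi = 0.0, math.pi
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if k * (mid + math.sin(mid)) < tau:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def scale_factor(eta):
    """A(eta) = (1 + cos eta) / 2: 1 at eta = 0, 0 at eta = pi (full collapse)."""
    return 0.5 * (1.0 + math.cos(eta))

# Illustrative numbers: a dust ball of mass M = 1 with boundary R_b = 10.
eta = eta_from_tau(tau=20.0, R=5.0, M=1.0, R_b=10.0)
print(eta, scale_factor(eta))   # ~0.97, A ~ 0.78: partway through collapse
```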
Wikipedia/Oppenheimer–Snyder_model
Relativity: The Special and the General Theory (German: Über die spezielle und die allgemeine Relativitätstheorie) is a popular science book by Albert Einstein. It began as a short paper and was eventually expanded into a book written with the aim of explaining the special and general theories of relativity. It was published in German in 1916 and translated into English in 1920. It is divided into three parts, the first dealing with special relativity, the second dealing with general relativity, and the third dealing with cosmology. == Contents == "The present book is intended, as far as possible, to give an exact insight into the theory of relativity to those readers who, from a general scientific and philosophical point of view, are interested in the theory, but who are not conversant with the mathematical apparatus of theoretical physics ... I adhered scrupulously to the precept of the brilliant theoretical physicist L. Boltzmann, according to whom the matters of elegance ought to be left to the tailor and the cobbler. I make no pretence of having withheld from the reader difficulties which are inherent in the subject. On the other hand, I have purposely treated the empirical physical foundations of the theory in a 'step-motherly' fashion, so that readers unfamiliar with physics may not feel like the wanderer who was unable to see the forest for trees. May the book bring some one a few happy hours of suggestive thought!" reads the preface. === Part I: The Special Theory of Relativity === Einstein gives a brief overview of Galilean invariance: that the laws of physics are the same in all frames of reference. He provides a thought experiment of two co-ordinate systems, K and K′, moving uniformly relative to one another: "If K is a Galilean co-ordinate system, then every other co-ordinate system K′ is a Galilean one, when, in relation to K, it is in a condition of uniform motion of translation. Relative to K′ the mechanical laws of Galilei-Newton hold good exactly as they do with respect to K. We advance a step farther in our generalisation when we express the tenet thus: If, relative to K, K′ is a uniformly moving co-ordinate system devoid of rotation, then natural phenomena run their course with respect to K′ according to exactly the same general laws as with respect to K." He introduces the fact of the constancy of the speed of light in vacuo, which is apparently in contradiction with Galilean invariance. "There is hardly a simpler law in physics than that according to which light is propagated in empty space. Every child at school knows, or believes he knows, that this propagation takes place in straight lines with a velocity c = 300,000 km./sec. At all events we know with great exactness that this velocity is the same for all colours, because if this were not the case, the minimum emission would not be observed simultaneously for different colours during the eclipse of a fixed star by its dark neighbour. By means of similar considerations based on observations of double stars, the Dutch astronomer De Sitter was able to show that the velocity of propagation of light cannot depend on the velocity of motion of the body emitting the light. The assumption that this velocity of propagation of light is dependent on the direction 'in space' is in itself improbable." He gives a thought experiment about a ray of light observed from a moving railway carriage and embankment, and argues that observers should measure the same speed of light.
He argues that the constancy of the speed of light must be observed in all reference frames: "like every other general law of nature, the law of the transmission of light in vacuo must, according to the principle of relativity, be the same for the railway carriage as reference-body as when the rails are the body of reference ... The epoch-making theoretical investigations of H. A. Lorentz on the electrodynamical and optical phenomena connected with moving bodies show that experience in this domain leads conclusively to a theory of electromagnetic phenomena, of which the law of the constancy of the velocity of light in vacuo is a necessary consequence." The relativity of simultaneity is introduced. We return to the embankment with another thought experiment: lightning flashes at points A and B while the train moves from A to B. An observer on the embankment at M, equidistant from A and B, sees the flashes as simultaneous. An observer on the train at M′, moving from A to B, sees the flash at B before the flash at A: "Events which are simultaneous with reference to the embankment are not simultaneous with respect to the train, and vice versa (relativity of simultaneity). Every reference-body (co-ordinate system) has its own particular time; unless we are told the reference-body to which the statement of time refers, there is no meaning in the time of an event. Now before the advent of the theory of relativity it had always tacitly been assumed in physics that the statement of time had an absolute significance, i.e. that it is independent of the state of motion of the body of reference. But we have just seen that this assumption is incompatible with the most natural definition of simultaneity; if we discard this assumption, then the conflict between the law of propagation of light in vacuo and the principle of relativity (developed in Section 7) disappears." He points to the eponymous experiment of Hippolyte Fizeau as demonstration that the speed of light is constant. He presents the Lorentz transformation and its nonintuitive consequences, such as time dilation. Mass–energy equivalence is discussed: "Before the advent of relativity, physics recognized two conservation laws of fundamental importance, namely, the law of the conservation of energy and the law of the conservation of mass; these two fundamental laws appeared to be quite independent of each other. By means of the theory of relativity they have been united into one law." The limiting velocity of light is discussed. He introduces the concept of spacetime, formulated by his teacher Hermann Minkowski. === Part II: The General Theory of Relativity === Einstein begins with a brief review of special relativity, which posits that the laws of physics are the same for all frames of reference moving uniformly relative to one another. He argues that this should be expanded to include acceleration and gravitation. A clue comes from the equivalence principle: "Bodies which are moving under the sole influence of a gravitational field receive an acceleration, which does not in the least depend on the material or on the physical state of the body. For instance, a piece of lead and a piece of wood fall in exactly the same manner in a gravitational field (in vacuo), when they start off from rest or with the same initial velocity ...
If now, as we find from experience, the acceleration is to be independent of the nature and the condition of the body and always the same for a given gravitational field, then the ratio of the gravitational to the inertial mass must likewise be the same for all bodies. By a suitable choice of units we can thus make this ratio equal to unity. We then have the following law: The gravitational mass of a body is equal to its inertial mass." Einstein presents a thought experiment, asking us to imagine a chest accelerating freely through empty space. An occupant of the chest might conclude that he is at rest in a gravitational field: "If he releases a body which he previously had in his hand, the acceleration of the chest will no longer be transmitted to this body, and for this reason the body will approach the floor of the chest with an accelerated relative motion. The observer will further convince himself that the acceleration of the body towards the floor of the chest is always of the same magnitude, whatever kind of body he may happen to use for the experiment." He asks: "Ought we to smile at the man and say that he errs in his conclusion? I do not believe we ought to if we wish to remain consistent; we must rather admit that his mode of grasping the situation violates neither reason nor known mechanical laws." The equivalence of gravitational and inertial masses, unaccounted for by Newton, is explained by Einstein. He offers the thought experiment of an observer on a rotating disk to argue that non-Euclidean geometry is needed to describe gravity. Einstein outlines the theory's explanatory and predictive power. Newton's theory of gravity could not account for the precession of the perihelion of Mercury; Urbain Le Verrier, who predicted the existence of Neptune based on deviations from Newton's predictions, tried and failed to explain it: "The value obtained for this rotary movement of the orbital ellipse was 43 seconds of arc per century ... This effect can be explained by means of classical mechanics only on the assumption of hypotheses which have little probability, but which were devised solely for this purpose." General relativity predicts the correct value: "in the case of Mercury it must amount to 43 seconds of arc per century, a result which is strictly in agreement with observation." === Part III: Considerations on the Universe as a Whole === Einstein applies General Relativity to cosmology. The theory raises the possibility that the Universe is finite but unbounded. === Appendices === In Appendix One, Einstein offers a "Simple Derivation of the Lorentz Transformation". In Appendix Two, he details "Minkowski's Four-Dimensional Space ('World')". In Appendix Three, he describes "The Experimental Confirmation of the General Theory of Relativity": the theory predicts the correct value (43 arc-seconds per century) for the precession of the perihelion of Mercury; that light should curve towards a massive body, a prediction confirmed by Arthur Eddington in 1919; and that light from a massive star should be shifted to the red, a prediction confirmed by Walter Sydney Adams in 1925. In Appendix Four, he returns to "The Structure of Space According to the General Theory of Relativity". In 1922, Alexander Friedmann showed that "the theory demands an expansion of space. A few years later Hubble showed, by a special investigation of the extra-galactic nebulae ('milky ways'), that the spectral lines emitted showed a red shift which increased regularly with the distance of the nebulae.
This can be interpreted in regard to our present knowledge only in the sense of the Doppler's principle, as an expansive motion of the stars in the large—as required, according to Friedman [sic], by the field equations of gravitation. Hubble's discovery can, therefore, be considered to some extent a confirmation of the theory." In Appendix Five, he returns to "Relativity and the Problem of Space". == Publication history == Abraham Pais suggests that Hendrik Lorentz may have influenced Einstein to write a popularization. He writes that the "beautiful, fifty-page account was completed in March 1916. It was well-received ... In December 1916 he completed Über die spezielle und die allgemeine Relativitätstheorie, gemeinverständlich, his most widely known work. Demand for it became especially high after the results of the eclipse expedition caused such a stir." There have been many versions published since the original in 1916, the latest being in 2015. The 2015 publication by Princeton University Press is billed as the 100th Anniversary Edition, and was issued as an e-book in 2019. == Reception == A review in Nature wrote: "Here is an excellent translation of Einstein's own book; we hasten to it to know the whole truth and nothing but the truth. The reviewer on this occasion should be the man in the street, the man who, with thousands, has been asking, 'What is Relativity?' 'What is the matter with Euclid and with Newton?' 'What is this message from the stars?' Whether it is possible for the prophet to make his message clear to the multitude, only history can prove." Walter Rathenau wrote a letter to Einstein about the book: "I have been immersed in your ideas for weeks ... I would not have thought it possible to force such a radical rearrangement of ideas through, the way you do, with such simple means and using classical architectonics." This letter is reprinted in the Princeton University Press edition, along with a page of the text in Einstein's handwriting. People such as Robert W. Lawson have called the work unique in that it gives readers an insight into the thought processes of one of the greatest minds of the 20th century. Martin Rees, Astronomer Royal, says: “This book is not only an important historical document, but displays the style and clarity of Einstein’s thought in a manner accessible to a wide readership.” == See also == The Evolution of Physics (1938), a popular history of physics by Einstein and Leopold Infeld A Brief History of Time (1988), introduction to physics and cosmology by Stephen Hawking Three Roads to Quantum Gravity (2001), overview of potential unified field theories by Lee Smolin The Fabric of the Cosmos (2004), a popular overview of space and time in modern physics by Brian Greene The Road to Reality (2004), overview of physics by Roger Penrose == Notes == == External links == Albert Einstein. Relativity: the Special and the General Theory, 10th edition (there are a total of 17 editions). ISBN 0-517-029618 at Project Gutenberg Relativity: The Special and General Theory public domain audiobook at LibriVox Albert Einstein, Relativity: The Special and General Theory (1920/2000) ISBN 1-58734-092-5 at Bartleby.com
Wikipedia/Relativity:_The_Special_and_the_General_Theory
In mechanics, acceleration is the rate of change of the velocity of an object with respect to time. Acceleration is one of several components of kinematics, the study of motion. Accelerations are vector quantities (in that they have magnitude and direction). The orientation of an object's acceleration is given by the orientation of the net force acting on that object. The magnitude of an object's acceleration, as described by Newton's second law, is the combined effect of two causes: the net balance of all external forces acting onto that object — magnitude is directly proportional to this net resulting force; that object's mass, depending on the materials out of which it is made — magnitude is inversely proportional to the object's mass. The SI unit for acceleration is metre per second squared (m⋅s−2, m s 2 {\displaystyle \mathrm {\tfrac {m}{s^{2}}} } ). For example, when a vehicle starts from a standstill (zero velocity, in an inertial frame of reference) and travels in a straight line at increasing speeds, it is accelerating in the direction of travel. If the vehicle turns, an acceleration occurs toward the new direction and changes its motion vector. The acceleration of the vehicle in its current direction of motion is called a linear (or tangential during circular motions) acceleration, the reaction to which the passengers on board experience as a force pushing them back into their seats. When changing direction, the effecting acceleration is called radial (or centripetal during circular motions) acceleration, the reaction to which the passengers experience as a centrifugal force. If the speed of the vehicle decreases, this is an acceleration in the opposite direction of the velocity vector (mathematically a negative, if the movement is unidimensional and the velocity is positive), sometimes called deceleration or retardation, and passengers experience the reaction to deceleration as an inertial force pushing them forward. Such negative accelerations are often achieved by retrorocket burning in spacecraft. Both acceleration and deceleration are treated the same, as they are both changes in velocity. Each of these accelerations (tangential, radial, deceleration) is felt by passengers until their relative (differential) velocity are neutralised in reference to the acceleration due to change in speed. == Definition and properties == === Average acceleration === An object's average acceleration over a period of time is its change in velocity, Δ v {\displaystyle \Delta \mathbf {v} } , divided by the duration of the period, Δ t {\displaystyle \Delta t} . Mathematically, a ¯ = Δ v Δ t . {\displaystyle {\bar {\mathbf {a} }}={\frac {\Delta \mathbf {v} }{\Delta t}}.} === Instantaneous acceleration === Instantaneous acceleration, meanwhile, is the limit of the average acceleration over an infinitesimal interval of time. In the terms of calculus, instantaneous acceleration is the derivative of the velocity vector with respect to time: a = lim Δ t → 0 Δ v Δ t = d v d t . {\displaystyle \mathbf {a} =\lim _{{\Delta t}\to 0}{\frac {\Delta \mathbf {v} }{\Delta t}}={\frac {d\mathbf {v} }{dt}}.} As acceleration is defined as the derivative of velocity, v, with respect to time t and velocity is defined as the derivative of position, x, with respect to time, acceleration can be thought of as the second derivative of x with respect to t: a = d v d t = d 2 x d t 2 . 
{\displaystyle \mathbf {a} ={\frac {d\mathbf {v} }{dt}}={\frac {d^{2}\mathbf {x} }{dt^{2}}}.} (Here and elsewhere, if motion is in a straight line, vector quantities can be substituted by scalars in the equations.) By the fundamental theorem of calculus, it can be seen that the integral of the acceleration function a(t) is the velocity function v(t); that is, the area under the curve of an acceleration vs. time (a vs. t) graph corresponds to the change of velocity. Δ v = ∫ a d t . {\displaystyle \mathbf {\Delta v} =\int \mathbf {a} \,dt.} Likewise, the integral of the jerk function j(t), the derivative of the acceleration function, can be used to find the change of acceleration at a certain time: Δ a = ∫ j d t . {\displaystyle \mathbf {\Delta a} =\int \mathbf {j} \,dt.} === Units === Acceleration has the dimensions of velocity (L/T) divided by time, i.e. L T−2. The SI unit of acceleration is the metre per second squared (m s−2); or "metre per second per second", as the velocity in metres per second changes by the acceleration value, every second. === Other forms === An object moving in a circular motion—such as a satellite orbiting the Earth—is accelerating due to the change of direction of motion, although its speed may be constant. In this case it is said to be undergoing centripetal (directed towards the center) acceleration. Proper acceleration, the acceleration of a body relative to a free-fall condition, is measured by an instrument called an accelerometer. In classical mechanics, for a body with constant mass, the (vector) acceleration of the body's center of mass is proportional to the net force vector (i.e. sum of all forces) acting on it (Newton's second law): F = m a ⟹ a = F m , {\displaystyle \mathbf {F} =m\mathbf {a} \quad \implies \quad \mathbf {a} ={\frac {\mathbf {F} }{m}},} where F is the net force acting on the body, m is the mass of the body, and a is the center-of-mass acceleration. As speeds approach the speed of light, relativistic effects become increasingly large. == Tangential and centripetal acceleration == The velocity of a particle moving on a curved path as a function of time can be written as: v ( t ) = v ( t ) v ( t ) v ( t ) = v ( t ) u t ( t ) , {\displaystyle \mathbf {v} (t)=v(t){\frac {\mathbf {v} (t)}{v(t)}}=v(t)\mathbf {u} _{\mathrm {t} }(t),} with v(t) equal to the speed of travel along the path, and u t = v ( t ) v ( t ) , {\displaystyle \mathbf {u} _{\mathrm {t} }={\frac {\mathbf {v} (t)}{v(t)}}\,,} a unit vector tangent to the path pointing in the direction of motion at the chosen moment in time. Taking into account both the changing speed v(t) and the changing direction of ut, the acceleration of a particle moving on a curved path can be written using the chain rule of differentiation for the product of two functions of time as: a = d v d t = d v d t u t + v ( t ) d u t d t = d v d t u t + v 2 r u n , {\displaystyle {\begin{alignedat}{3}\mathbf {a} &={\frac {d\mathbf {v} }{dt}}\\&={\frac {dv}{dt}}\mathbf {u} _{\mathrm {t} }+v(t){\frac {d\mathbf {u} _{\mathrm {t} }}{dt}}\\&={\frac {dv}{dt}}\mathbf {u} _{\mathrm {t} }+{\frac {v^{2}}{r}}\mathbf {u} _{\mathrm {n} }\ ,\end{alignedat}}} where un is the unit (inward) normal vector to the particle's trajectory (also called the principal normal), and r is its instantaneous radius of curvature based upon the osculating circle at time t. 
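This decomposition is easy to verify numerically. The following is a minimal sketch (the spiral trajectory, evaluation time, and step size are arbitrary illustrative choices, not taken from the article): it splits a finite-difference acceleration into a part along the unit tangent and a remainder, then checks that the remainder is orthogonal to the tangent with magnitude v2/r.

```python
import numpy as np

# Arbitrary smooth planar trajectory for the check: a spiral.
def pos(t):
    return np.array([t * np.cos(t), t * np.sin(t)])

t, h = 1.7, 1e-5                                   # evaluation time and step
v = (pos(t + h) - pos(t - h)) / (2 * h)            # finite-difference velocity
a = (pos(t + h) - 2 * pos(t) + pos(t - h)) / h**2  # finite-difference acceleration

speed = np.linalg.norm(v)
u_t = v / speed                        # unit tangent along the motion

a_tan = np.dot(a, u_t) * u_t           # (dv/dt) u_t, since dv/dt = a . u_t
a_norm = a - a_tan                     # what remains must be the normal part

# Radius of curvature from the planar cross product: kappa = |v x a| / |v|^3
r = speed**3 / abs(v[0] * a[1] - v[1] * a[0])

print(np.dot(a_norm, u_t))                   # ~0: remainder orthogonal to u_t
print(np.linalg.norm(a_norm), speed**2 / r)  # both equal the v^2/r term
```

The same bookkeeping works for any smooth trajectory, which is what the general formulas above express.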
The components a t = d v d t u t and a c = v 2 r u n {\displaystyle \mathbf {a} _{\mathrm {t} }={\frac {dv}{dt}}\mathbf {u} _{\mathrm {t} }\quad {\text{and}}\quad \mathbf {a} _{\mathrm {c} }={\frac {v^{2}}{r}}\mathbf {u} _{\mathrm {n} }} are called the tangential acceleration and the normal or radial acceleration (or centripetal acceleration in circular motion, see also circular motion and centripetal force), respectively. Geometrical analysis of three-dimensional space curves, which explains tangent, (principal) normal and binormal, is described by the Frenet–Serret formulas. == Special cases == === Uniform acceleration === Uniform or constant acceleration is a type of motion in which the velocity of an object changes by an equal amount in every equal time period. A frequently cited example of uniform acceleration is that of an object in free fall in a uniform gravitational field. The acceleration of a falling body in the absence of resistances to motion is dependent only on the gravitational field strength g (also called acceleration due to gravity). By Newton's second law the force F g {\displaystyle \mathbf {F_{g}} } acting on a body is given by: F g = m g . {\displaystyle \mathbf {F_{g}} =m\mathbf {g} .} Because of the simple analytic properties of the case of constant acceleration, there are simple formulas relating the displacement, initial and time-dependent velocities, and acceleration to the time elapsed: s ( t ) = s 0 + v 0 t + 1 2 a t 2 = s 0 + 1 2 ( v 0 + v ( t ) ) t v ( t ) = v 0 + a t v 2 ( t ) = v 0 2 + 2 a ⋅ [ s ( t ) − s 0 ] , {\displaystyle {\begin{aligned}\mathbf {s} (t)&=\mathbf {s} _{0}+\mathbf {v} _{0}t+{\tfrac {1}{2}}\mathbf {a} t^{2}=\mathbf {s} _{0}+{\tfrac {1}{2}}\left(\mathbf {v} _{0}+\mathbf {v} (t)\right)t\\\mathbf {v} (t)&=\mathbf {v} _{0}+\mathbf {a} t\\{v^{2}}(t)&={v_{0}}^{2}+2\mathbf {a\cdot } [\mathbf {s} (t)-\mathbf {s} _{0}],\end{aligned}}} where t {\displaystyle t} is the elapsed time, s 0 {\displaystyle \mathbf {s} _{0}} is the initial displacement from the origin, s ( t ) {\displaystyle \mathbf {s} (t)} is the displacement from the origin at time t {\displaystyle t} , v 0 {\displaystyle \mathbf {v} _{0}} is the initial velocity, v ( t ) {\displaystyle \mathbf {v} (t)} is the velocity at time t {\displaystyle t} , and a {\displaystyle \mathbf {a} } is the uniform rate of acceleration. In particular, the motion can be resolved into two orthogonal parts, one of constant velocity and the other according to the above equations. As Galileo showed, the net result is parabolic motion, which describes, e.g., the trajectory of a projectile in vacuum near the surface of Earth. === Circular motion === In uniform circular motion, that is, moving with constant speed along a circular path, a particle experiences an acceleration resulting from the change of the direction of the velocity vector, while its magnitude remains constant. The derivative of the location of a point on a curve with respect to time, i.e. its velocity, is always exactly tangential to the curve and, for circular motion, orthogonal to the radius at that point. Since in uniform motion the velocity in the tangential direction does not change, the acceleration must be in the radial direction, pointing to the center of the circle. This acceleration constantly changes the direction of the velocity so that it is tangent at the neighbouring point, thereby rotating the velocity vector along the circle.
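This rotating-velocity picture can be checked by finite differences before the magnitude is derived next; a minimal sketch (the radius, angular speed, and evaluation time are arbitrary illustrative values):

```python
import numpy as np

R, omega = 2.0, 3.0                    # arbitrary radius and angular speed
t, h = 0.4, 1e-5                       # evaluation time and step

def pos(t):                            # uniform circular motion
    return np.array([R * np.cos(omega * t), R * np.sin(omega * t)])

a = (pos(t + h) - 2 * pos(t) + pos(t - h)) / h**2  # numerical acceleration

print(a)                               # anti-parallel to pos(t): toward the centre
print(-omega**2 * pos(t))              # matches the closed form a = -omega^2 r
print(np.linalg.norm(a), (omega * R)**2 / R)       # magnitude equals v^2 / R
```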
For a given speed v {\displaystyle v} , the magnitude of this geometrically caused acceleration (centripetal acceleration) is inversely proportional to the radius r {\displaystyle r} of the circle, and increases as the square of this speed: a c = v 2 r . {\displaystyle a_{c}={\frac {v^{2}}{r}}\,.} For a given angular velocity ω {\displaystyle \omega } , the centripetal acceleration is directly proportional to radius r {\displaystyle r} . This is due to the dependence of velocity v {\displaystyle v} on the radius r {\displaystyle r} . v = ω r . {\displaystyle v=\omega r.} Expressing the centripetal acceleration vector in polar components, where r {\displaystyle \mathbf {r} } is a vector from the centre of the circle to the particle with magnitude equal to this distance, and considering the orientation of the acceleration towards the center, yields a c = − v 2 | r | ⋅ r | r | . {\displaystyle \mathbf {a_{c}} =-{\frac {v^{2}}{|\mathbf {r} |}}\cdot {\frac {\mathbf {r} }{|\mathbf {r} |}}\,.} As usual in rotations, the speed v {\displaystyle v} of a particle may be expressed as an angular speed with respect to a point at the distance r {\displaystyle r} as ω = v r . {\displaystyle \omega ={\frac {v}{r}}.} Thus a c = − ω 2 r . {\displaystyle \mathbf {a_{c}} =-\omega ^{2}\mathbf {r} \,.} This acceleration and the mass of the particle determine the necessary centripetal force, directed toward the centre of the circle, as the net force acting on this particle to keep it in this uniform circular motion. The so-called 'centrifugal force', appearing to act outward on the body, is a pseudo force experienced in the frame of reference of the body in circular motion, due to the body's linear momentum, a vector tangent to the circle of motion. In nonuniform circular motion, i.e., when the speed along the curved path is changing, the acceleration has a non-zero component tangential to the curve, and is not confined to the principal normal, which points toward the center of the osculating circle and determines the radius r {\displaystyle r} for the centripetal acceleration. The tangential component is given by the angular acceleration α {\displaystyle \alpha } , i.e., the rate of change α = ω ˙ {\displaystyle \alpha ={\dot {\omega }}} of the angular speed ω {\displaystyle \omega } times the radius r {\displaystyle r} . That is, a t = r α . {\displaystyle a_{t}=r\alpha .} The sign of the tangential component of the acceleration is determined by the sign of the angular acceleration ( α {\displaystyle \alpha } ), and the tangent is always directed at right angles to the radius vector. == Coordinate systems == In multi-dimensional Cartesian coordinate systems, acceleration is broken up into components that correspond with each dimensional axis of the coordinate system. In a two-dimensional system, where there is an x-axis and a y-axis, corresponding acceleration components are defined as a x = d v x / d t = d 2 x / d t 2 , {\displaystyle a_{x}=dv_{x}/dt=d^{2}x/dt^{2},} a y = d v y / d t = d 2 y / d t 2 . {\displaystyle a_{y}=dv_{y}/dt=d^{2}y/dt^{2}.} The two-dimensional acceleration vector is then defined as a =< a x , a y > {\displaystyle {\textbf {a}}=<a_{x},a_{y}>} . The magnitude of this vector is found by the distance formula as | a | = a x 2 + a y 2 . {\displaystyle |a|={\sqrt {a_{x}^{2}+a_{y}^{2}}}.} In three-dimensional systems where there is an additional z-axis, the corresponding acceleration component is defined as a z = d v z / d t = d 2 z / d t 2 .
{\displaystyle a_{z}=dv_{z}/dt=d^{2}z/dt^{2}.} The three-dimensional acceleration vector is defined as a =< a x , a y , a z > {\displaystyle {\textbf {a}}=<a_{x},a_{y},a_{z}>} with its magnitude being determined by | a | = a x 2 + a y 2 + a z 2 . {\displaystyle |a|={\sqrt {a_{x}^{2}+a_{y}^{2}+a_{z}^{2}}}.} == Relation to relativity == === Special relativity === The special theory of relativity describes the behaviour of objects travelling relative to other objects at speeds approaching that of light in vacuum. Newtonian mechanics is then revealed to be an approximation to reality, valid to great accuracy at lower speeds. As the relevant speeds increase toward the speed of light, acceleration no longer follows classical equations. As speeds approach that of light, the acceleration produced by a given force decreases, becoming vanishingly small as light speed is approached; an object with mass can approach this speed asymptotically, but never reach it. === General relativity === Unless the state of motion of an object is known, it is impossible to distinguish whether an observed force is due to gravity or to acceleration—gravity and inertial acceleration have identical effects. Albert Einstein called this the equivalence principle, and said that only observers who feel no force at all—including the force of gravity—are justified in concluding that they are not accelerating. == Conversions == == See also == == References == == External links == Acceleration Calculator Simple acceleration unit converter
Wikipedia/Accelerated_motion
The Standard-Model Extension (SME) is an effective field theory that contains the Standard Model, general relativity, and all possible operators that break Lorentz symmetry. Violations of this fundamental symmetry can be studied within this general framework. CPT violation implies the breaking of Lorentz symmetry, and the SME includes operators that both break and preserve CPT symmetry. == Development == In 1989, Alan Kostelecký and Stuart Samuel proved that interactions in string theories could lead to the spontaneous breaking of Lorentz symmetry. Later studies have indicated that loop-quantum gravity, non-commutative field theories, brane-world scenarios, and random dynamics models also involve the breakdown of Lorentz invariance. Interest in Lorentz violation has grown rapidly in recent decades because it can arise in these and other candidate theories for quantum gravity. In the early 1990s, it was shown in the context of bosonic superstrings that string interactions can also spontaneously break CPT symmetry. This work suggested that experiments with kaon interferometry would be promising for seeking possible signals of CPT violation due to their high sensitivity. The SME was conceived to facilitate experimental investigations of Lorentz and CPT symmetry, given the theoretical motivation for violation of these symmetries. An initial step, in 1995, was the introduction of effective interactions. Although Lorentz-breaking interactions are motivated by constructs such as string theory, the low-energy effective action appearing in the SME is independent of the underlying theory. Each term in the effective theory involves the expectation value of a tensor field in the underlying theory. These coefficients are small due to Planck-scale suppression, and in principle are measurable in experiments. The first case considered was the mixing of neutral mesons, because their interferometric nature makes them highly sensitive to suppressed effects. In 1997 and 1998, two papers by Don Colladay and Alan Kostelecký gave birth to the minimal SME in flat spacetime. This provided a framework for Lorentz violation across the spectrum of standard-model particles, and provided information about types of signals for potential new experimental searches. In 2004, the leading Lorentz-breaking terms in curved spacetimes were published, thereby completing the picture for the minimal SME. In 1999, Sidney Coleman and Sheldon Glashow presented a special isotropic limit of the SME. Higher-order Lorentz violating terms have been studied in various contexts, including electrodynamics. == Lorentz transformations: observer vs. particle == The distinction between particle and observer transformations is essential to understanding Lorentz violation in physics because Lorentz violation implies a measurable difference between two systems differing only by a particle Lorentz transformation. In special relativity, observer Lorentz transformations relate measurements made in reference frames with differing velocities and orientations. The coordinates in one system are related to those in the other by an observer Lorentz transformation—a rotation, a boost, or a combination of both. Each observer will agree on the laws of physics, since this transformation is simply a change of coordinates. On the other hand, identical experiments can be rotated or boosted relative to each other, while being studied by the same inertial observer.
These transformations are called particle transformations, because the matter and fields of the experiment are physically transformed into the new configuration. In a conventional vacuum, observer and particle transformations can be related to each other in a simple way—basically one is the inverse of the other. This apparent equivalence is often expressed using the terminology of active and passive transformations. The equivalence fails in Lorentz-violating theories, however, because fixed background fields are the source of the symmetry breaking. These background fields are tensor-like quantities, creating preferred directions and boost-dependent effects. The fields extend over all space and time, and are essentially frozen. When an experiment sensitive to one of the background fields is rotated or boosted, i.e. particle transformed, the background fields remain unchanged, and measurable effects are possible. Observer Lorentz symmetry is expected for all theories, including Lorentz violating ones, since a change in the coordinates cannot affect the physics. This invariance is implemented in field theories by writing a scalar Lagrangian, with properly contracted spacetime indices. Particle Lorentz breaking enters if the theory includes fixed SME background fields filling the universe. == Building the SME == The SME can be expressed as a Lagrangian with various terms. Each Lorentz-violating term is an observer scalar constructed by contracting standard field operators with controlling coefficients called coefficients for Lorentz violation. These are not parameters, but rather predictions of the theory, since they can in principle be measured by appropriate experiments. The coefficients are expected to be small because of the Planck-scale suppression, so perturbative methods are appropriate. In some cases, other suppression mechanisms could mask large Lorentz violations. For instance, large violations that may exist in gravity could have gone undetected so far because of couplings with weak gravitational fields. Stability and causality of the theory have been studied in detail. == Spontaneous Lorentz symmetry breaking == In field theory, there are two possible ways to implement the breaking of a symmetry: explicit and spontaneous. A key result in the formal theory of Lorentz violation, published by Kostelecký in 2004, is that explicit Lorentz violation leads to incompatibility of the Bianchi identities with the covariant conservation laws for the energy–momentum and spin-density tensors, whereas spontaneous Lorentz breaking evades this difficulty. This theorem requires that any breaking of Lorentz symmetry must be dynamical. Formal studies of the possible causes of the breakdown of Lorentz symmetry include investigations of the fate of the expected Nambu–Goldstone modes. Goldstone's theorem implies that the spontaneous breaking must be accompanied by massless bosons. These modes might be identified with the photon, the graviton, spin-dependent interactions, and spin-independent interactions. == Experimental searches == The possible signals of Lorentz violation in any experiment can be calculated from the SME. It has therefore proven to be a remarkable tool in the search for Lorentz violation across the landscape of experimental physics. To date, experimental results have taken the form of upper bounds on the SME coefficients.
Since the results will be numerically different for different inertial reference frames, the standard frame adopted for reporting results is the Sun-centered frame. This frame is a practical and appropriate choice, since it is accessible and inertial on the time scale of hundreds of years. Typical experiments seek couplings between the background fields and various particle properties such as spin, or propagation direction. One of the key signals of Lorentz violation arises because experiments on Earth are unavoidably rotating and revolving relative to the Sun-centered frame. These motions lead to both annual and sidereal variations of the measured coefficients for Lorentz violation. Since the translational motion of the Earth around the Sun is nonrelativistic, annual variations are typically suppressed by a factor of 10−4. This makes sidereal variations the leading time-dependent effect to look for in experimental data. Measurements of SME coefficients have been done with experiments involving: birefringence and dispersion from cosmological sources; clock-comparison measurements; CMB polarization; collider experiments; electromagnetic resonant cavities; the equivalence principle; gauge and Higgs particles; high-energy astrophysical observations; laboratory and gravimetric tests of gravity; matter interferometry; neutrino oscillations; oscillations and decays of K, B and D mesons; particle–antiparticle comparisons; post-Newtonian gravity in the Solar System and beyond; second- and third-generation particles; space-based missions; spectroscopy of hydrogen and antihydrogen; and spin-polarized matter. All experimental results for SME coefficients are tabulated in the Data Tables for Lorentz and CPT Violation. == See also == Antimatter tests of Lorentz violation Lorentz-violating electrodynamics Lorentz-violating neutrino oscillations Bumblebee Models Tests of special relativity Test theories of special relativity == References == == External links == Background information on Lorentz and CPT violation Data Tables for Lorentz and CPT Violation
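The sidereal modulation described above can be illustrated with a toy projection: a constant coefficient vector in the Sun-centered frame, read out along a laboratory axis that rotates with the Earth. The coefficient values, the laboratory colatitude, and the rotation-only transformation below are all simplifying assumptions for illustration; real analyses use the full set of frame transformations.

```python
import numpy as np

b_sun = np.array([1.0e-30, 0.5e-30, 0.2e-30])  # hypothetical coefficient (GeV)
chi = np.radians(50.0)                         # assumed laboratory colatitude
omega_sid = 2 * np.pi / 86164.1                # sidereal frequency (rad/s)

def b_lab(t):
    """Projection of b_sun onto a lab axis tilted by chi from the Earth's
    rotation axis and swept around it at the sidereal frequency."""
    n = np.array([np.sin(chi) * np.cos(omega_sid * t),
                  np.sin(chi) * np.sin(omega_sid * t),
                  np.cos(chi)])
    return np.dot(b_sun, n)

for t in np.linspace(0.0, 86164.1, 5):         # sample one sidereal day
    print(f"t = {t:8.0f} s   b_lab = {b_lab(t):+.3e} GeV")
```

The printed values oscillate once per sidereal day, which is the time-dependent signature such experiments search for.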
Wikipedia/Standard-Model_Extension
In particle physics, the Peccei–Quinn theory is a well-known, long-standing proposal for the resolution of the strong CP problem formulated by Roberto Peccei and Helen Quinn in 1977. The theory introduces a new anomalous symmetry to the Standard Model along with a new scalar field which spontaneously breaks the symmetry at low energies, giving rise to an axion that suppresses the problematic CP violation. This model has long since been ruled out by experiments and has instead been replaced by similar invisible axion models which utilize the same mechanism to solve the strong CP problem. == Overview == Quantum chromodynamics (QCD) has a complicated vacuum structure which gives rise to a CP violating θ-term in the Lagrangian. Such a term can have a number of non-perturbative effects, one of which is to give the neutron an electric dipole moment. The absence of this dipole moment in experiments requires the fine-tuning of the θ-term to be very small, something known as the strong CP problem. Motivated as a solution to this problem, Peccei–Quinn (PQ) theory introduces a new complex scalar field φ {\displaystyle \varphi } in addition to the standard Higgs doublet. This scalar field couples to d-type quarks through Yukawa terms, while the Higgs now only couples to the up-type quarks. Additionally, a new global chiral anomalous U(1) symmetry is introduced, the Peccei–Quinn symmetry, under which φ {\displaystyle \varphi } is charged, requiring some of the fermions also have a PQ charge. The scalar field also has a potential V ( φ ) = μ 2 ( | φ | 2 − f a 2 2 ) 2 , {\displaystyle V(\varphi )=\mu ^{2}{\bigg (}|\varphi |^{2}-{\frac {f_{a}^{2}}{2}}{\bigg )}^{2},} where μ {\displaystyle \mu } is a dimensionless parameter and f a {\displaystyle f_{a}} is known as the decay constant. The potential results in φ {\displaystyle \varphi } having the vacuum expectation value of ⟨ φ ⟩ = f a / 2 {\displaystyle \langle \varphi \rangle =f_{a}/{\sqrt {2}}} at the electroweak phase transition. Spontaneous symmetry breaking of the Peccei–Quinn symmetry below the electroweak scale gives rise to a pseudo-Goldstone boson known as the axion a {\displaystyle a} , with the resulting Lagrangian taking the form L tot = L SM,axions + θ g s 2 32 π 2 G ~ b μ ν G b μ ν + ξ a f a g s 2 32 π 2 G ~ b μ ν G b μ ν , {\displaystyle {\mathcal {L}}_{\text{tot}}={\mathcal {L}}_{\text{SM,axions}}+\theta {\frac {g_{s}^{2}}{32\pi ^{2}}}{\tilde {G}}_{b}^{\mu \nu }G_{b\mu \nu }+\xi {\frac {a}{f_{a}}}{\frac {g_{s}^{2}}{32\pi ^{2}}}{\tilde {G}}_{b}^{\mu \nu }G_{b\mu \nu },} where the first term is the Standard Model (SM) and axion Lagrangian which includes axion-fermion interactions arising from the Yukawa terms. The second term is the CP violating θ-term, with g s {\displaystyle g_{s}} the strong coupling constant, G b μ ν {\displaystyle G_{b\mu \nu }} the gluon field strength tensor, and G ~ b μ ν {\displaystyle {\tilde {G}}_{b\mu \nu }} the dual field strength tensor. The third term is known as the color anomaly, a consequence of the Peccei–Quinn symmetry being anomalous, with ξ {\displaystyle \xi } determined by the choice of PQ charges for the quarks. If the symmetry is also anomalous in the electromagnetic sector, there will additionally be an anomaly term coupling the axion to photons. 
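As a quick consistency check on the potential just given, one can minimize V(φ) along the radial direction numerically and confirm the stated vacuum expectation value |⟨φ⟩| = f_a/√2; the parameter values in this minimal sketch are arbitrary illustrative choices:

```python
import numpy as np
from scipy.optimize import minimize_scalar

mu, f_a = 1.3, 2.0e3              # arbitrary illustrative parameter values

def V(phi_abs):
    """PQ potential as a function of |phi|; the phase direction is flat
    (that flat direction is what becomes the axion)."""
    return mu**2 * (phi_abs**2 - f_a**2 / 2.0)**2

res = minimize_scalar(V, bounds=(0.0, 2.0 * f_a), method="bounded")
print(res.x, f_a / np.sqrt(2.0))  # both ~1414.21: <|phi|> = f_a / sqrt(2)
```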
Due to the presence of the color anomaly, the effective θ {\displaystyle \theta } angle is modified to θ + ξ a / f a {\displaystyle \theta +\xi a/f_{a}} , giving rise to an effective potential through instanton effects, which can be approximated in the dilute gas approximation as V eff ∼ cos ⁡ ( θ + ξ ⟨ a ⟩ f a ) . {\displaystyle V_{\text{eff}}\sim \cos {\bigg (}\theta +\xi {\frac {\langle a\rangle }{f_{a}}}{\bigg )}.} To minimize the ground state energy, the axion field acquires the vacuum expectation value ⟨ a ⟩ = − f a θ / ξ {\displaystyle \langle a\rangle =-f_{a}\theta /\xi } , with axions now being excitations around this vacuum. This prompts the field redefinition a → a + ⟨ a ⟩ {\displaystyle a\rightarrow a+\langle a\rangle } which leads to the cancellation of the θ {\displaystyle \theta } angle, dynamically solving the strong CP problem. It is important to point out that the axion is massive since the Peccei–Quinn symmetry is explicitly broken by the chiral anomaly, with the axion mass roughly given in terms of the pion mass and pion decay constant as m a ≈ f π m π / f a {\displaystyle m_{a}\approx f_{\pi }m_{\pi }/f_{a}} . == Invisible axion models == For the Peccei–Quinn model to work, the decay constant must be set at the electroweak scale, leading to a heavy axion. Such an axion has long been ruled out by experiments, for example through bounds on rare kaon decays K + → π + + a {\displaystyle K^{+}\rightarrow \pi ^{+}+a} . Instead, there are a variety of modified models called invisible axion models which introduce the new scalar field φ {\displaystyle \varphi } independently of the electroweak scale, enabling much larger vacuum expectation values, hence very light axions. The most popular such models are the Kim–Shifman–Vainshtein–Zakharov (KSVZ) and the Dine–Fischler–Srednicki–Zhitnitsky (DFSZ) models. The KSVZ model introduces a new heavy quark doublet with PQ charge, acquiring its mass through a Yukawa term involving φ {\displaystyle \varphi } . Since in this model the only fermions that carry a PQ charge are the heavy quarks, there are no tree-level couplings between the SM fermions and the axion. Meanwhile, the DFSZ model replaces the usual Higgs with two PQ charged Higgs doublets, H u {\displaystyle H_{u}} and H d {\displaystyle H_{d}} , that give mass to the SM fermions through the usual Yukawa terms, while the new scalar only interacts with the standard model through a quartic coupling φ 2 H u H d {\displaystyle \varphi ^{2}H_{u}H_{d}} . Since the two Higgs doublets carry PQ charge, the resulting axion couples to SM fermions at tree-level. == See also == Axion QCD vacuum Strong CP problem == References == == Further reading == Sarkar, U. (2008). "Peccei–Quinn Symmetry". Particle and Astroparticle Physics. Taylor & Francis. pp. 191–197. ISBN 978-1-58488-931-1.
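Plugging numbers into the rough mass relation m_a ≈ f_π m_π / f_a quoted above shows why pushing the decay constant far above the electroweak scale makes the axion "invisible". A back-of-the-envelope sketch (the two f_a values are representative choices, not measurements, and the relation itself is only an order-of-magnitude estimate):

```python
f_pi, m_pi = 0.092, 0.135      # GeV: approximate pion decay constant and mass

for f_a in (246.0, 1.0e12):    # electroweak-scale vs. invisible-axion f_a (GeV)
    m_a_eV = f_pi * m_pi / f_a * 1e9   # rough axion mass, converted to eV
    print(f"f_a = {f_a:.3g} GeV  ->  m_a ~ {m_a_eV:.2g} eV")
# ~5e4 eV (tens of keV) for the original PQ model, which is excluded;
# ~1e-5 eV for f_a = 10^12 GeV, the very light invisible axion.
```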
Wikipedia/Peccei–Quinn_theory
In theoretical physics, the hierarchy problem is the problem concerning the large discrepancy between aspects of the weak force and gravity. There is no scientific consensus on why, for example, the weak force is 10^24 times stronger than gravity. == Technical definition == A hierarchy problem occurs when the fundamental value of some physical parameter, such as a coupling constant or a mass, in some Lagrangian is vastly different from its effective value, which is the value that gets measured in an experiment. This happens because the effective value is related to the fundamental value by a prescription known as renormalization, which applies corrections to it. Typically the renormalized values of parameters are close to their fundamental values, but in some cases, it appears that there has been a delicate cancellation between the fundamental quantity and the quantum corrections. Hierarchy problems are related to fine-tuning problems and problems of naturalness. Throughout the 2010s, many scientists argued that the hierarchy problem is a specific application of Bayesian statistics. Studying renormalization in hierarchy problems is difficult, because such quantum corrections are usually power-law divergent, which means that the shortest-distance physics is most important. Because we do not know the precise details of quantum gravity, we cannot even address how this delicate cancellation between two large terms occurs. Therefore, researchers are led to postulate new physical phenomena that resolve hierarchy problems without fine-tuning. == Overview == Suppose a physics model requires four parameters to produce a very high-quality working model capable of generating predictions regarding some aspect of our physical universe. Suppose we find through experiments that the parameters have values: 1.2, 1.31, 0.9, and a value near 4×10^29. One might wonder how such figures arise. In particular, one might be especially curious about a theory where three values are close to one, and the fourth is so different; i.e., the huge disproportion we seem to find between the first three parameters and the fourth. If one force is so much weaker than the others that it needs a factor of 4×10^29 to allow it to be related to the others in terms of effects, we might also wonder how our universe came to be so exactly balanced when its forces emerged. In current particle physics, the differences between some actual parameters are much larger than this, so the question is noteworthy. One explanation given by philosophers is the anthropic principle. If the universe came to exist by chance and vast numbers of other universes exist or have existed, then lifeforms capable of performing physics experiments only arose in universes that, by chance, had very balanced forces. All of the universes where the forces were not balanced did not develop life capable of asking this question. So if lifeforms like human beings are aware and capable of asking such a question, humans must have arisen in a universe having balanced forces, however rare that might be. A second possible answer is that there is a deeper understanding of physics that we currently do not possess. There may be parameters from which we can derive physical constants that have fewer unbalanced values, or there may be a model with fewer parameters. == Examples in particle physics == === Higgs mass === In particle physics, the most important hierarchy problem is the question of why the weak force is 10^24 times as strong as gravity.
Both of these forces involve constants of nature, the Fermi constant for the weak force and the Newtonian constant of gravitation for gravity. Furthermore, if the Standard Model is used to calculate the quantum corrections to Fermi's constant, it appears that Fermi's constant is surprisingly large and is expected to be closer to Newton's constant unless there is a delicate cancellation between the bare value of Fermi's constant and the quantum corrections to it. More technically, the question is why the Higgs boson is so much lighter than the Planck mass (or the grand unification energy, or a heavy neutrino mass scale): one would expect that the large quantum contributions to the square of the Higgs boson mass would inevitably make the mass huge, comparable to the scale at which new physics appears, unless there is an incredible fine-tuning cancellation between the quadratic radiative corrections and the bare mass. The problem cannot even be formulated in the strict context of the Standard Model, for the Higgs mass cannot be calculated. In a sense, the problem amounts to the worry that a future theory of fundamental particles, in which the Higgs boson mass will be calculable, should not have excessive fine-tunings. === Theoretical solutions === Many solutions have been proposed. ==== Supersymmetry ==== Some physicists believe that one may solve the hierarchy problem via supersymmetry. Supersymmetry can explain how a tiny Higgs mass can be protected from quantum corrections. Supersymmetry removes the power-law divergences of the radiative corrections to the Higgs mass and solves the hierarchy problem as long as the supersymmetric particles are light enough to satisfy the Barbieri–Giudice criterion. This still leaves open the mu problem, however. The tenets of supersymmetry are being tested at the LHC, although no evidence for supersymmetry has been found so far. Each particle that couples to the Higgs field has an associated Yukawa coupling λ f {\textstyle \lambda _{f}} . The coupling with the Higgs field for fermions gives an interaction term L Y u k a w a = − λ f ψ ¯ H ψ {\textstyle {\mathcal {L}}_{\mathrm {Yukawa} }=-\lambda _{f}{\bar {\psi }}H\psi } , with ψ {\textstyle \psi } being the Dirac field and H {\textstyle H} the Higgs field. Also, the mass of a fermion is proportional to its Yukawa coupling, meaning that the Higgs boson will couple most strongly to the most massive particle. This means that the most significant corrections to the Higgs mass will originate from the heaviest particles, most prominently the top quark. By applying the Feynman rules, one gets the quantum corrections to the Higgs mass squared from a fermion to be: Δ m H 2 = − | λ f | 2 8 π 2 [ Λ U V 2 + … ] . {\displaystyle \Delta m_{\rm {H}}^{2}=-{\frac {\left|\lambda _{f}\right|^{2}}{8\pi ^{2}}}[\Lambda _{\mathrm {UV} }^{2}+\dots ].} The Λ U V {\textstyle \Lambda _{\mathrm {UV} }} is called the ultraviolet cutoff and is the scale up to which the Standard Model is valid. If we take this scale to be the Planck scale, then the correction diverges quadratically. However, suppose there existed two complex scalars (taken to be spin 0) such that: λ S = | λ f | 2 {\displaystyle \lambda _{S}=\left|\lambda _{f}\right|^{2}} (the couplings to the Higgs are exactly the same). Then by the Feynman rules, the correction (from both scalars) is: Δ m H 2 = 2 × λ S 16 π 2 [ Λ U V 2 + … ] .
{\displaystyle \Delta m_{\rm {H}}^{2}=2\times {\frac {\lambda _{S}}{16\pi ^{2}}}[\Lambda _{\mathrm {UV} }^{2}+\dots ].} (Note that the contribution here is positive. This is because of the spin-statistics theorem, which means that fermions will have a negative contribution and bosons a positive contribution. This fact is exploited.) This gives a total contribution to the Higgs mass of zero if we include both the fermionic and bosonic particles. Supersymmetry is an extension of this that creates 'superpartners' for all Standard Model particles. ==== Conformal ==== Without supersymmetry, a solution to the hierarchy problem has been proposed using just the Standard Model. The idea can be traced back to the fact that the term in the Higgs field that produces the uncontrolled quadratic correction upon renormalization is the quadratic one. If the Higgs field had no mass term, then no hierarchy problem would arise. But without a quadratic term in the Higgs field, one must find another way to recover the breaking of electroweak symmetry through a non-null vacuum expectation value. This can be obtained using the Weinberg–Coleman mechanism with terms in the Higgs potential arising from quantum corrections. The mass obtained in this way is far smaller than what is seen in accelerator facilities, and so a conformal Standard Model needs more than one Higgs particle. This proposal was put forward in 2006 by Krzysztof Antoni Meissner and Hermann Nicolai and is currently under scrutiny. But if no further excitation is observed beyond the one seen so far at the LHC, this model would have to be abandoned. ==== Extra dimensions ==== No experimental or observational evidence of extra dimensions has been officially reported. Analyses of results from the Large Hadron Collider severely constrain theories with large extra dimensions. However, extra dimensions could explain why gravity is so weak, and why the expansion of the universe is faster than expected. If we live in a 3+1 dimensional world, then we calculate the gravitational force via Gauss's law for gravity: g ( r ) = − G m e r r 2 ( 1 ) {\displaystyle \mathbf {g} (\mathbf {r} )=-Gm{\frac {\mathbf {e_{r}} }{r^{2}}}\qquad (1)} which is simply Newton's law of gravitation. Note that Newton's constant G can be rewritten in terms of the Planck mass. G = ℏ c M P l 2 {\displaystyle G={\frac {\hbar c}{M_{\mathrm {Pl} }^{2}}}} If we extend this idea to δ extra dimensions, then we get: g ( r ) = − m e r M P l 3 + 1 + δ 2 + δ r 2 + δ ( 2 ) {\displaystyle \mathbf {g} (\mathbf {r} )=-m{\frac {\mathbf {e_{r}} }{M_{\mathrm {Pl} _{3+1+\delta }}^{2+\delta }r^{2+\delta }}}\qquad (2)} where M P l 3 + 1 + δ {\textstyle M_{\mathrm {Pl} _{3+1+\delta }}} is the 3+1+ δ {\textstyle \delta } -dimensional Planck mass. So far, however, we have assumed that these extra dimensions are the same size as the normal 3+1 dimensions. Let us instead say that the extra dimensions are of size n, much smaller than the normal dimensions. If we let r ≪ n, then we get (2). If instead we let r ≫ n, then we recover the usual Newton's law, because the flux in the extra dimensions becomes a constant: there is no extra room for gravitational flux to flow through. Thus the flux will be proportional to nδ because this is the flux in the extra dimensions.
The formula is: g ( r ) = − m e r M P l 3 + 1 + δ 2 + δ r 2 n δ − m e r M P l 2 r 2 = − m e r M P l 3 + 1 + δ 2 + δ r 2 n δ {\displaystyle {\begin{aligned}\mathbf {g} (\mathbf {r} )&=-m{\frac {\mathbf {e_{r}} }{M_{\mathrm {Pl} _{3+1+\delta }}^{2+\delta }r^{2}n^{\delta }}}\\[2pt]-m{\frac {\mathbf {e_{r}} }{M_{\mathrm {Pl} }^{2}r^{2}}}&=-m{\frac {\mathbf {e_{r}} }{M_{\mathrm {Pl} _{3+1+\delta }}^{2+\delta }r^{2}n^{\delta }}}\end{aligned}}} which gives: 1 M P l 2 r 2 = 1 M P l 3 + 1 + δ 2 + δ r 2 n δ ⟹ M P l 2 = M P l 3 + 1 + δ 2 + δ n δ {\displaystyle {\begin{aligned}{\frac {1}{M_{\mathrm {Pl} }^{2}r^{2}}}&={\frac {1}{M_{\mathrm {Pl} _{3+1+\delta }}^{2+\delta }r^{2}n^{\delta }}}\\[2pt]\implies \quad M_{\mathrm {Pl} }^{2}&=M_{\mathrm {Pl} _{3+1+\delta }}^{2+\delta }n^{\delta }\end{aligned}}} Thus the fundamental Planck mass (the extra-dimensional one) could actually be small, meaning that gravity is actually strong, but this must be compensated by the number of the extra dimensions and their size. Physically, this means that gravity is weak because there is a loss of flux to the extra dimensions. This section is adapted from Quantum Field Theory in a Nutshell by A. Zee. ==== Braneworld models ==== In 1998 Nima Arkani-Hamed, Savas Dimopoulos, and Gia Dvali proposed the ADD model, also known as the model with large extra dimensions, an alternative scenario to explain the weakness of gravity relative to the other forces. This theory requires that the fields of the Standard Model are confined to a four-dimensional membrane, while gravity propagates in several additional spatial dimensions that are large compared to the Planck scale. In 1998–99 Merab Gogberashvili published on arXiv (and subsequently in peer-reviewed journals) a number of articles where he showed that if the Universe is considered as a thin shell (a mathematical synonym for "brane") expanding in 5-dimensional space, then it is possible to obtain one scale for particle theory corresponding to the 5-dimensional cosmological constant and Universe thickness, and thus to solve the hierarchy problem. It was also shown that the four-dimensionality of the Universe is the result of a stability requirement, since the extra component of the Einstein field equations giving the localized solution for matter fields coincides with one of the conditions of stability. Subsequently, the closely related Randall–Sundrum scenarios were proposed, offering their own solution to the hierarchy problem. ==== UV/IR mixing ==== In 2019, a pair of researchers proposed that IR/UV mixing resulting in the breakdown of the effective quantum field theory could resolve the hierarchy problem. In 2021, another group of researchers showed that UV/IR mixing could resolve the hierarchy problem in string theory. === Cosmological constant === In physical cosmology, current observations in favor of an accelerating universe imply the existence of a tiny but nonzero cosmological constant. This problem, called the cosmological constant problem, is a hierarchy problem very similar to the Higgs boson mass problem, since the cosmological constant is also very sensitive to quantum corrections, but its calculation is complicated by the necessary involvement of general relativity in the problem. Proposed solutions to the cosmological constant problem include modifying and/or extending gravity, adding matter with unvanishing pressure, and UV/IR mixing in the Standard Model and gravity.
Some physicists have resorted to anthropic reasoning to solve the cosmological constant problem, but it is disputed whether such anthropic reasoning is scientific. == See also == CP violation Quantum triviality Weak gravity conjecture == References ==
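The extra-dimensional relation M_Pl^2 = M_Pl(3+1+δ)^(2+δ) n^δ derived above can be inverted to estimate how large the extra dimensions would have to be if the fundamental scale were near a TeV. A sketch of the arithmetic (the 1 TeV fundamental scale is the usual illustrative assumption, not a measured value):

```python
M_pl = 1.22e19        # four-dimensional Planck mass (GeV)
M_star = 1.0e3        # assumed fundamental higher-dimensional scale (GeV)
hbar_c = 1.97e-16     # GeV*m, converts inverse GeV to metres

for delta in (1, 2, 3):
    # Invert M_pl^2 = M_star^(2 + delta) * n^delta for the size n:
    n = (M_pl**2 / M_star**(2 + delta)) ** (1.0 / delta)   # in GeV^-1
    print(f"delta = {delta}:  n ~ {n * hbar_c:.1e} m")
# delta = 1 gives ~1e13 m (solar-system size, clearly ruled out);
# delta = 2 gives ~1e-3 m, the classic sub-millimetre scenario.
```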
Wikipedia/Naturalness_(physics)
In particle physics, a generation or family is a division of the elementary particles. Between generations, particles differ by their flavour quantum number and mass, but their electric and strong interactions are identical. There are three generations according to the Standard Model of particle physics. Each generation contains two types of leptons and two types of quarks. The two leptons may be classified into one with electric charge −1 (electron-like) and one that is neutral (the neutrino); the two quarks may be classified into one with charge −1⁄3 (down-type) and one with charge +2⁄3 (up-type). The basic features of the quark–lepton generations or families, such as their masses and mixings, can be described by some of the proposed family symmetries. == Overview == Each member of a higher generation has greater mass than the corresponding particle of the previous generation, with the possible exception of the neutrinos (whose small but non-zero masses have not been accurately determined). For example, the first-generation electron has a mass of only 0.511 MeV/c2, the second-generation muon has a mass of 106 MeV/c2, and the third-generation tau has a mass of 1777 MeV/c2 (almost twice as heavy as a proton). This mass hierarchy causes particles of higher generations to decay to the first generation, which explains why everyday matter (atoms) is made of particles from the first generation only. Electrons surround a nucleus made of protons and neutrons, which contain up and down quarks. The second and third generations of charged particles do not occur in normal matter and are only seen in extremely high-energy environments such as cosmic rays or particle accelerators. The term generation was first introduced by Haim Harari at the Les Houches Summer School in 1976. Neutrinos of all generations stream throughout the universe but rarely interact with other matter. It is hoped that a comprehensive understanding of the relationship between the generations of the leptons may eventually explain the ratio of masses of the fundamental particles, and shed further light on the nature of mass generally, from a quantum perspective. == Fourth generation == Fourth and further generations are considered unlikely by many (but not all) theoretical physicists. Some arguments against the possibility of a fourth generation are based on the subtle modifications of precision electroweak observables that extra generations would induce; such modifications are strongly disfavored by measurements. Some extensions instead introduce a new quark that is an isosinglet, which generates flavour-changing neutral currents (FCNC) at tree level in the electroweak sector. Nonetheless, searches at high-energy colliders for particles from a fourth generation continue, but as yet no evidence has been observed. In such searches, fourth-generation particles are denoted by the same symbols as third-generation ones with an added prime (e.g. b′ and t′). A fourth generation with a 'light' neutrino (one with a mass less than about 45 GeV/c2) was ruled out by measurements of the decay widths of the Z boson at CERN's Large Electron–Positron Collider (LEP) as early as 1989. The lower bound for a fourth generation neutrino (ν'τ) mass as of 2010 was about 60 GeV (millions of times larger than the upper bound for the other 3 neutrino masses). As of 2024, no evidence of a fourth-generation neutrino has ever been observed in neutrino oscillation studies either.
Because even the third-generation (tau) neutrino ντ has an extremely small mass (making ντ the only third-generation particle that, outside the most energetic conditions, will not readily decay), a fourth-generation neutrino ν'τ that follows the general rules of the known 3 neutrino generations should both be easily within current particle accelerators' energy levels and take part in the regular and highly predictable switching of generations (oscillation) that neutrinos perform. If the Koide formula continues to hold, the mass of the fourth-generation charged lepton would be 44 GeV (ruled out), and b′ and t′ should be 3.6 TeV and 84 TeV respectively (the maximum possible energy for protons in the LHC is about 6 TeV). The lower bound for fourth-generation quark (b′, t′) masses as of 2019 was 1.4 TeV from experiments at the LHC. The lower bound for a fourth-generation charged lepton (τ') mass in 2012 was 100 GeV, with a proposed upper bound of 1.2 TeV from unitarity considerations. == Origin == The origin of multiple generations of fermions, and the particular count of 3, is an unsolved problem of physics. String theory provides a cause for multiple generations, but the particular number depends on the details of the compactification of the D-brane intersections. Additionally, E8 grand unified theories in 10 dimensions compactified on certain orbifolds down to 4D naturally contain 3 generations of matter. This includes many heterotic string theory models. In standard quantum field theory, under certain assumptions, a single fermion field can give rise to multiple fermion poles with mass ratios of around e^π ≈ 23 and e^(2π) ≈ 535, potentially explaining the large ratios of fermion masses between successive generations and their origin. The existence of precisely three generations with the correct structure was at least tentatively derived from first principles through a connection with gravity. The result implies a unification of gauge forces into SU(5). The question regarding the masses is unsolved, but this is a logically separate question, related to the Higgs sector of the theory. == See also == Grand Unified Theory Koide formula Neutrino mass hierarchy == References ==
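The Koide formula invoked above relates the three charged-lepton masses through Q = (m_e + m_μ + m_τ)/(√m_e + √m_μ + √m_τ)^2 = 2/3. A quick numeric check of that relation, using approximate measured masses:

```python
import math

m_e, m_mu, m_tau = 0.511, 105.66, 1776.9   # MeV/c^2, approximate values

Q = (m_e + m_mu + m_tau) / (math.sqrt(m_e) + math.sqrt(m_mu) + math.sqrt(m_tau))**2
print(Q, 2.0 / 3.0)   # ~0.66666 vs 0.66667: the relation holds remarkably well
```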
Wikipedia/Generation_(physics)
In physical cosmology, leptogenesis is the generic term for hypothetical physical processes that produced an asymmetry between leptons and antileptons in the very early universe, resulting in the present-day dominance of leptons over antileptons. In the currently accepted Standard Model, lepton number is nearly conserved at temperatures below the TeV scale, but tunneling processes can change this number; at higher temperatures it may change through interactions with sphalerons, particle-like entities. In both cases, the process involved is related to the weak nuclear force, and is an example of chiral anomaly. Such processes could have hypothetically created leptons in the early universe. In these processes baryon number is also non-conserved, and thus baryons should have been created along with leptons. Such non-conservation of baryon number is indeed assumed to have happened in the early universe, and is known as baryogenesis. However, in some theoretical models, it is suggested that leptogenesis also occurred prior to baryogenesis; thus the term leptogenesis is often used to imply the non-conservation of leptons without corresponding non-conservation of baryons. In the Standard Model, the difference between the lepton number and the baryon number is precisely conserved, so that leptogenesis without baryogenesis is impossible. Thus such leptogenesis implies extensions to the Standard Model. The lepton and baryon asymmetries affect the much better understood Big Bang nucleosynthesis at later times, during which light atomic nuclei began to form. Successful synthesis of the light elements requires an imbalance of one part in a billion in the numbers of baryons and antibaryons when the universe is a few minutes old. An asymmetry in the number of leptons and antileptons is not mandatory for Big Bang nucleosynthesis. However, charge conservation suggests that any asymmetry in the charged leptons and antileptons (electrons, muons and tau particles) should be of the same order of magnitude as the baryon asymmetry. Observations of the primordial helium-4 abundance place an upper limit, though not a very stringent one, on any lepton asymmetry residing in the neutrino sector. Leptogenesis theories employ sub-disciplines of physics such as quantum field theory and statistical physics to describe such possible mechanisms. Baryogenesis, the generation of a baryon–antibaryon asymmetry, and leptogenesis can be connected by processes that convert baryon number and lepton number into each other. The (non-perturbative) quantum Adler–Bell–Jackiw anomaly can result in sphalerons, which can convert leptons into baryons and vice versa. Thus, the Standard Model is in principle able to provide a mechanism to create baryons and leptons. A simple modification of the Standard Model that is instead able to realize the program of Sakharov is the one suggested by M. Fukugita and T. Yanagida. The Standard Model is extended by adding right-handed neutrinos, permitting implementation of the see-saw mechanism and providing the neutrinos with mass. At the same time, the extended model is able to spontaneously generate leptons from the decays of right-handed neutrinos. Finally, the sphalerons are able to convert the spontaneously generated lepton asymmetry into the observed baryonic asymmetry. Due to its popularity, this entire process is sometimes referred to simply as leptogenesis.
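The sphaleron conversion just described can be made quantitative. For the Standard Model with sphalerons in equilibrium, the commonly quoted relation is B = (28/79)(B − L); the sketch below simply applies it to an assumed primordial B − L asymmetry (the input value is illustrative, not a measurement):

```python
from fractions import Fraction

# Equilibrium sphaleron relation, B = c * (B - L), with the Standard Model
# value c = (8 Nf + 4 NH) / (22 Nf + 13 NH) for Nf families and NH Higgs doublets.
Nf, NH = 3, 1
c = Fraction(8 * Nf + 4 * NH, 22 * Nf + 13 * NH)   # = 28/79

B_minus_L = 1.0e-10        # assumed primordial B - L asymmetry (illustrative)
B = float(c) * B_minus_L   # baryon asymmetry left after sphaleron processing
L = B - B_minus_L          # accompanying lepton asymmetry (negative here)
print(c, B, L)             # 28/79  ~3.5e-11  ~-6.5e-11
```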
== See also == Baryogenesis – Hypothesized early universe process == References == == Further reading == Wilfried Buchmüller, "Leptogenesis", Scholarpedia, 9(3):11471. doi:10.4249/scholarpedia.11471 == External links == Planck satellite cosmic recipe
Wikipedia/Leptogenesis_(physics)
The Harari–Shupe preon model (also known as rishon model, RM) is the earliest effort to develop a preon model to explain the phenomena appearing in the Standard Model (SM) of particle physics. It was first developed independently by Haim Harari and by Michael A. Shupe and later expanded by Harari and his then-student Nathan Seiberg. == Model == The model has two kinds of fundamental particles called rishons (ראשון, rishon means "first" in Hebrew). They are T ("Third", since it has an electric charge of +1/3 e), also called Tohu, and V ("Vanishes", since it is electrically neutral), also called Vohu. The terms tohu and vohu are picked from the Biblical phrase Tohu va-Vohu, for which the King James Version translation is "without form, and void". All leptons and all flavours of quarks are three-rishon ordered triplets. These groups of three rishons have spin-1/2. They are as follows: TTT = positron (anti-electron); VVV = electron neutrino; TTV, TVT and VTT = three colours of up quarks; VVT, VTV and TVV = three colours of down antiquarks. Each rishon has a corresponding antiparticle, denoted by an overbar. Hence: T̄T̄T̄ = electron; V̄V̄V̄ = electron antineutrino; T̄T̄V̄, T̄V̄T̄, V̄T̄T̄ = three colours of up antiquarks; V̄V̄T̄, V̄T̄V̄, T̄V̄V̄ = three colours of down quarks. The W+ boson = TTTVVV; the W− boson = T̄T̄T̄V̄V̄V̄. Note that: Matter and antimatter are equally abundant in nature in the RM. This still leaves open the question of why T̄T̄T̄, T̄V̄V̄, and TTV etc. are common whereas TTT, TVV, and T̄T̄V̄ etc. are rare. Higher generation leptons and quarks are presumed to be excited states of first generation leptons and quarks, but those states are not specified. The simple RM does not provide an explanation of the mass-spectrum of the leptons and quarks. Baryon number (B) and lepton number (L) are not conserved, but the quantity B − L is conserved. Baryon-number-violating processes (such as proton decay) are possible in the model. In the expanded Harari–Seiberg version the rishons possess color and hypercolor, explaining why the only composites are the observed quarks and leptons. Under certain assumptions, it is possible to show that the model allows exactly for three generations of quarks and leptons. == Evidence == Currently, there is no scientific evidence for the existence of substructure within quarks and leptons, but there is no profound reason why such a substructure may not be revealed at shorter distances. In 2008, Piotr Zenczykowski (Żenczykowski) derived the RM by starting from a non-relativistic O(6) phase space. This model is based on fundamental principles and the structure of Clifford algebras, and fully recovers the RM, naturally explaining several obscure and otherwise artificial features of the original model. == In popular culture == Science fiction author Vonda McIntyre, in her novelizations of the scripts of the movies Star Trek II: The Wrath of Khan and Star Trek III: The Search for Spock, suggested that the Genesis effect was a result of a newly discovered rishon-like substructure to matter. Science fiction author James P. Hogan in his novel Voyage from Yesteryear explicitly postulated a rishon-like model in the development of antimatter weapons and energy sources. == References ==
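The charge bookkeeping above is easy to verify mechanically; a minimal sketch (lowercase letters stand in for the barred anti-rishons):

```python
# Electric charge per rishon: T = +1/3, V = 0; anti-rishons (written here
# as lowercase t, v) carry the opposite charge.
CHARGE = {'T': 1/3, 'V': 0.0, 't': -1/3, 'v': 0.0}

def charge(composite):
    return sum(CHARGE[r] for r in composite)

print(charge('TTT'))      # +1     : positron
print(charge('ttt'))      # -1     : electron
print(charge('TTV'))      # ~+2/3  : up quark
print(charge('vvt'))      # ~-1/3  : down quark
print(charge('TTTVVV'))   # +1     : W+ boson
```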
Wikipedia/Rishon_model
The asymptotic safety approach to quantum gravity provides a nonperturbative notion of renormalization in order to find a consistent and predictive quantum field theory of the gravitational interaction and spacetime geometry. It is based upon a nontrivial fixed point of the corresponding renormalization group (RG) flow such that the running coupling constants approach this fixed point in the ultraviolet (UV) limit. This suffices to avoid divergences in physical observables. Moreover, it has predictive power: generically, an arbitrary starting configuration of coupling constants given at some RG scale does not run into the fixed point for increasing scale, but a subset of configurations might have the desired UV properties. For this reason it is possible that — assuming a particular set of couplings has been measured in an experiment — the requirement of asymptotic safety fixes all remaining couplings in such a way that the UV fixed point is approached. Asymptotic safety, if realized in Nature, has far-reaching consequences in all areas where quantum effects of gravity are to be expected. Their exploration, however, is still in its infancy. By now there are phenomenological studies concerning the implications of asymptotic safety in particle physics, astrophysics and cosmology. == Standard Model == === Mass of the Higgs boson === The Standard Model in combination with asymptotic safety might be valid up to arbitrarily high energies. Based on the assumption that this is indeed correct, it is possible to make a statement about the Higgs boson mass. The first concrete results were obtained by Mikhail Shaposhnikov and Christof Wetterich in 2010. Depending on the sign of the gravity induced anomalous dimension A λ {\displaystyle A_{\lambda }} there are two possibilities: For A λ < 0 {\displaystyle A_{\lambda }<0} the Higgs mass m H {\displaystyle m_{\text{H}}} is restricted to the window 126 GeV < m H < 174 GeV {\displaystyle 126\,{\text{GeV}}<m_{\text{H}}<174\,{\text{GeV}}} . If, on the other hand, A λ > 0 {\displaystyle A_{\lambda }>0} which is the favored possibility, m H {\displaystyle m_{\text{H}}} must take the value m H = 126 GeV , {\displaystyle m_{\text{H}}=126\,{\text{GeV}},} with an uncertainty of a few GeV only. In this spirit one can consider m H {\displaystyle m_{\text{H}}} a prediction of asymptotic safety. The result is in surprisingly good agreement with the latest experimental data measured at CERN in 2013 by the ATLAS and CMS collaborations, where a value of m H = 125.10 ± 0.14 GeV {\displaystyle m_{\text{H}}=125.10\ \pm 0.14\,{\text{GeV}}} has been determined. === Fine structure constant === By taking into account the gravitational correction to the running of the fine structure constant α {\displaystyle \alpha } of quantum electrodynamics, Ulrich Harst and Martin Reuter were able to study the impacts of asymptotic safety on the infrared (renormalized) value of α {\displaystyle \alpha } . They found two fixed points suitable for the asymptotic safety construction, both of which imply a well-behaved UV limit, without running into a Landau pole type singularity. The first one is characterized by a vanishing α {\displaystyle \alpha } , and the infrared value α IR {\displaystyle \alpha _{\text{IR}}} is a free parameter. In the second case, however, the fixed point value of α {\displaystyle \alpha } is non-zero, and its infrared value is a computable prediction of the theory.
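The fixed-point mechanism described at the start of this article can be visualized with a toy one-coupling flow. This is purely illustrative (the beta function and fixed-point value below are invented, not the actual gravitational flow): couplings started at different values all run to the same UV fixed point as the scale increases.

```python
g_star = 0.4      # toy non-Gaussian fixed-point value (illustrative)

def beta(g):
    """Toy beta function: Gaussian fixed point at g = 0 (UV-repulsive)
    and a UV-attractive fixed point at g = g_star."""
    return 2.0 * g * (1.0 - g / g_star)

dt, steps = 1e-3, 20000          # Euler steps in t = ln(k / k0)
for g0 in (0.05, 0.2, 0.7):
    g = g0
    for _ in range(steps):
        g += beta(g) * dt        # flow toward the ultraviolet
    print(f"g(k0) = {g0:.2f}  ->  g(UV) = {g:.4f}")   # all approach 0.4
```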
In a more recent study, Nicolai Christiansen and Astrid Eichhorn showed that quantum fluctuations of gravity generically generate self-interactions for gauge theories, which have to be included in a discussion of a potential ultraviolet completion. Depending on the gravitational and gauge parameters, they conclude that the fine structure constant α {\displaystyle \alpha } might be asymptotically free and not run into a Landau pole, while the induced coupling for the gauge self-interaction is irrelevant and thus its value can be predicted. This is an explicit example where asymptotic safety solves a problem of the Standard Model (the triviality of the U(1) sector) without introducing new free parameters. == Astrophysics and cosmology == Phenomenological consequences of asymptotic safety can be expected also for astrophysics and cosmology. Alfio Bonanno and Reuter investigated the horizon structure of "renormalization group improved" black holes and computed quantum gravity corrections to the Hawking temperature and the corresponding thermodynamical entropy. By means of an RG improvement of the Einstein–Hilbert action, Reuter and Holger Weyer obtained a modified version of the Einstein equations which in turn results in a modification of the Newtonian limit, providing a possible explanation for the observed flat galaxy rotation curves without having to postulate the presence of dark matter. As for cosmology, Bonanno and Reuter argued that asymptotic safety modifies the very early Universe, possibly leading to a resolution of the horizon and flatness problems of standard cosmology. Furthermore, asymptotic safety provides the possibility of inflation without the need of an inflaton field (while driven by the cosmological constant). It was reasoned that the scale invariance related to the non-Gaussian fixed point underlying asymptotic safety is responsible for the near scale invariance of the primordial density perturbations. Using different methods, asymptotically safe inflation was analyzed further by Weinberg. == See also == Asymptotic safety in quantum gravity Quantum gravity UV fixed point == References ==
Wikipedia/Physics_applications_of_asymptotically_safe_gravity
Alternative models to the Standard Model Higgs are models considered by many particle physicists to address some of the outstanding problems associated with the Higgs boson. Two of the most actively researched of these problems are quantum triviality and the Higgs hierarchy problem. == Overview == In particle physics, elementary particles and forces give rise to the world around us. Physicists explain the behaviors of these particles and how they interact using the Standard Model—a widely accepted framework believed to explain most of the world we see around us. Initially, when these models were being developed and tested, it seemed that the mathematics behind those models, which were satisfactory in areas already tested, would also forbid elementary particles from having any mass, which showed clearly that these initial models were incomplete. In 1964 three groups of physicists almost simultaneously released papers describing how masses could be given to these particles, using approaches known as symmetry breaking. This approach allowed the particles to obtain a mass, without breaking other parts of particle physics theory that were already believed reasonably correct. This idea became known as the Higgs mechanism, and later experiments confirmed that such a mechanism does exist—but they could not show exactly how it happens. The simplest theory for how this effect takes place in nature, and the theory that became incorporated into the Standard Model, was that if one or more of a particular kind of "field" (known as a Higgs field) happened to permeate space, and if it could interact with elementary particles in a particular way, then this would give rise to a Higgs mechanism in nature. In the basic Standard Model there is one field and one related Higgs boson; in some extensions to the Standard Model there are multiple fields and multiple Higgs bosons. In the years since the Higgs field and boson were proposed as a way to explain the origins of symmetry breaking, several alternatives have been proposed that suggest how a symmetry breaking mechanism could occur without requiring a Higgs field to exist. Models which do not include a Higgs field or a Higgs boson are known as Higgsless models. In these models, strongly interacting dynamics rather than an additional (Higgs) field produce the non-zero vacuum expectation value that breaks electroweak symmetry. == List of alternative models == A partial list of proposed alternatives to a Higgs field as a source for symmetry breaking includes: Technicolor models break electroweak symmetry through new gauge interactions, which were originally modeled on quantum chromodynamics. Extra-dimensional Higgsless models use the fifth component of the gauge fields to play the role of the Higgs fields. It is possible to produce electroweak symmetry breaking by imposing certain boundary conditions on the extra dimensional fields, increasing the unitarity breakdown scale up to the energy scale of the extra dimension. Through the AdS/QCD correspondence this model can be related to technicolor models and to "UnHiggs" models in which the Higgs field is of unparticle nature. Models of composite W and Z vector bosons. Top quark condensate. "Unitary Weyl gauge". By adding a suitable gravitational term to the standard model action in curved spacetime, the theory develops a local conformal (Weyl) invariance. The conformal gauge is fixed by choosing a reference mass scale based on the gravitational coupling constant.
This approach generates the masses for the vector bosons and matter fields similarly to the Higgs mechanism, without traditional spontaneous symmetry breaking. Asymptotically safe weak interactions based on some nonlinear sigma models. Preon models and models inspired by preons, such as the Ribbon model of Standard Model particles by Sundance Bilson-Thompson, based on braid theory and compatible with loop quantum gravity and similar theories. This model not only explains mass but leads to an interpretation of electric charge as a topological quantity (twists carried on the individual ribbons) and colour charge as modes of twisting. Symmetry breaking driven by non-equilibrium dynamics of quantum fields above the electroweak scale. Unparticle physics and the UnHiggs. These are models that posit that the Higgs sector and Higgs boson are scaling invariant, also known as unparticle physics. In the theory of superfluid vacuum, masses of elementary particles can arise as a result of interaction with the physical vacuum, similarly to the gap generation mechanism in superconductors. UV-completion by classicalization, in which the unitarization of the WW scattering happens by creation of classical configurations. == See also == Composite Higgs models == References == == External links == Higgsless model on arxiv.org
Wikipedia/Higgsless_model
Near-surface geophysics is the use of geophysical methods to investigate small-scale features in the shallow (tens of meters) subsurface. It is closely related to applied geophysics or exploration geophysics. Methods used include seismic refraction and reflection, gravity, magnetic, electric, and electromagnetic methods. Many of these methods were developed for oil and mineral exploration but are now used for a great variety of applications, including archaeology, environmental science, forensic science, military intelligence, geotechnical investigation, treasure hunting, and hydrogeology. In addition to the practical applications, near-surface geophysics includes the study of biogeochemical cycles. == Overview == In studies of the solid Earth, the main feature that distinguishes geophysics from geology is that it involves remote sensing. Various physical phenomena are used to probe below the surface where scientists cannot directly access the rock. Applied geophysics projects typically have the following elements: data acquisition, data reduction, data processing, modeling, and geological interpretation. This all requires various types of geophysical surveys. These may include surveys of gravity, magnetism, seismicity, or magnetotellurics. === Data acquisition === A geophysical survey is a set of measurements made with a geophysical instrument. Often a set of measurements is made along a line, or traverse. Many surveys have a set of parallel traverses and another set perpendicular to it to get good spatial coverage. Technologies used for geophysical surveys include: Seismic methods, such as reflection seismology, seismic refraction, and seismic tomography. Seismoelectrical method. Geodesy and gravity techniques, including gravimetry and gravity gradiometry. Magnetic techniques, including aeromagnetic surveys and magnetometers. Electrical techniques, including electrical resistivity tomography, induced polarization and spontaneous potential. Electromagnetic methods, such as magnetotellurics, ground penetrating radar and transient/time-domain electromagnetics. Borehole geophysics, also called well logging. Remote sensing techniques, including hyperspectral imaging. === Data reduction === The raw data from a geophysical survey must often be converted to a more useful form. This may involve correcting the data for unwanted variations; for example, a gravity survey would be corrected for surface topography. Seismic travel times would be converted to depths. Often a target of the survey will be revealed as an anomaly, a region that has data values above or below the surrounding region. === Data processing === The reduced data may not provide a good enough image because of background noise. The signal-to-noise ratio may be improved by repeated measurements of the same quantity followed by some sort of averaging such as stacking or signal processing. === Modeling === Once a good profile is obtained of the physical property that is directly measured, it must be converted to a model of the property that is being investigated. For example, gravity measurements are used to obtain a model of the density profile under the surface. This is called an inverse problem. Given a model of the density, the gravity measurements at the surface can be predicted; but in an inverse problem the gravity measurements are known and the density must be inferred.
This problem has uncertainties due to the noise and limited coverage of the surface, but even with perfect coverage many possible models of the interior could fit the data. Thus, additional assumptions must be made to constrain the model. Depending on the data coverage, the model may only be a 2D model of a profile. Or a set of parallel transects may be interpreted using a 2½D model, which assumes that relevant features are elongated. For more complex features, a 3D model may be obtained using tomography. === Geological interpretation === The final step in a project is the geological interpretation. A positive gravity anomaly may be an igneous intrusion, a negative anomaly a salt dome or void. A region of higher electrical conductivity may have water or galena. For a good interpretation the geophysics model must be combined with geological knowledge of the area. == Seismology == Seismology makes use of the ability of vibrations to travel through rock as seismic waves. These waves come in two types: pressure waves (P-waves) and shear waves (S-waves). P-waves travel faster than S-waves, and both have trajectories that bend as the wave speeds change with depth. Refraction seismology makes use of these curved trajectories. In addition, if there are discontinuities between layers in the rock or sediment, seismic waves are reflected. Reflection seismology identifies these layer boundaries by the reflections. === Reflection seismology === Seismic reflection is used for imaging of nearly horizontal layers in the Earth. The method is much like echo sounding. It can be used to identify folding and faulting, and to search for oil and gas fields. On a regional scale, profiles can be combined to get sequence stratigraphy, making it possible to date sedimentary layers and identify eustatic sea level rise. === Refraction seismology === Seismic refraction can be used not only to identify layers in rocks by the trajectories of the seismic waves, but also to infer the wave speeds in each layer, thereby providing some information on the material in each layer. == Magnetic surveying == Magnetic surveying can be done on a planetary scale (for example, the survey of Mars by the Mars Global Surveyor) or on a scale of meters. In the near-surface, it is used to map geological boundaries and faults, find certain ores and buried igneous dykes, locate buried pipes and old mine workings, and detect some kinds of land mines. It is also used to look for human artifacts. Magnetometers are used to search for anomalies produced by targets with a lot of magnetically hard material such as ferrites. == Microgravity surveying == High-precision gravity measurements can be used to detect near-surface density anomalies, such as those associated with sinkholes and old mine workings, with repeat monitoring allowing near-surface changes over these to be quantified (a forward-model sketch of such an anomaly follows at the end of this section). == Ground-penetrating radar == Ground-penetrating radar is one of the most widely used near-surface geophysical methods in forensic archaeology, forensic geophysics, geotechnical investigation, treasure hunting, and hydrogeology. Typical penetration depths reach down to 10 m (33 ft) below ground level, depending upon local soil and rock conditions and upon the central frequency of the transmitter/receiver antennae utilised.
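As an illustration of the forward-modeling step described in the Overview, and of the kind of signal a microgravity survey targets, the following sketch evaluates the textbook gravity anomaly of a buried sphere along a traverse. The target parameters (an air-filled void of 2 m radius at 5 m depth) are invented for illustration.

import numpy as np

# Gravity anomaly over a buried sphere: g_z(x) = G * dM * z / (x^2 + z^2)^1.5,
# where dM = (4/3) * pi * R^3 * drho is the excess (or deficit) mass.
# Hypothetical target: an air-filled void of radius 2 m at 5 m depth.
G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
R, z = 2.0, 5.0        # sphere radius and depth to centre, m
drho = -2000.0         # density contrast of a void in soil, kg/m^3

dM = 4.0 / 3.0 * np.pi * R**3 * drho
x = np.linspace(-20.0, 20.0, 9)            # survey stations along a traverse, m
gz = G * dM * z / (x**2 + z**2) ** 1.5     # vertical anomaly, m/s^2

for xi, gi in zip(x, gz):
    print(f"x = {xi:6.1f} m   anomaly = {gi * 1e8:7.2f} microGal")  # 1 uGal = 1e-8 m/s^2

The peak deficit here is only a few tens of microgals, which is why high-precision instruments are needed; inferring the depth and size of the void from such readings is the corresponding inverse problem, non-unique in exactly the way described above.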
== Bulk ground conductivity == Bulk ground conductivity surveys typically use transmitter/receiver pairs to obtain primary/secondary EM signals from the surrounding environment (note the potential difficulty in urban areas with above-ground EM sources of interference), with collection areas depending upon the antennae spacing and equipment used. There are airborne, land- and water-based systems currently available. They are particularly useful for initial ground reconnaissance work in geotechnical, archaeology and forensic geophysics investigations. == Electrical resistivity == Measuring the reciprocal of conductivity, electrical resistivity surveys measure the resistance of material (usually soil) between electrical probes, with typical penetration depths of one to two times the electrode separations. There are various electrode configurations of equipment, the most typical using two current and two potential electrodes in a dipole-dipole array (a worked example of converting such a measurement into an apparent resistivity is sketched at the end of this section). They are used for geotechnical, archaeology and forensic geophysics investigations and have better resolution than most conductivity surveys. They do experience significant changes with soil moisture content, a difficulty in most site investigations with heterogeneous ground and differing vegetation distributions. == Applications == Milsom & Eriksen (2011) provide a useful field book for field geophysics. === Archaeology === Geophysical methods can be used to find or map an archaeological site remotely, avoiding unnecessary digging. They can also be used to date artifacts. In surveys of a potential archaeological site, features cut into the ground (such as ditches, pits and postholes) may be detected, even after being filled in, by electrical resistivity and magnetic methods. The infill may also be detectable using ground-penetrating radar. Foundations and walls may also have a magnetic or electrical signature. Furnaces, fireplaces and kilns may have a strong magnetic anomaly because a thermoremanent magnetization has been baked into magnetic minerals. Geophysical methods were extensively used in recent work on the submerged remains of ancient Alexandria as well as three nearby submerged cities (Herakleion, Canopus and Menouthis). Methods that included side-scan sonar, magnetic surveys and seismic profiles uncovered a story of bad site location and a failure to protect buildings against geohazards. In addition, they helped to locate structures that may be the lost Great Lighthouse and palace of Cleopatra, although these claims are contested. === Forensics === Forensic geophysics is increasingly being used to detect near-surface objects/materials related to either a criminal or civil investigation. The most high-profile targets in criminal investigations are clandestine burials of murder victims, but forensic geophysics can also include locating unmarked burials in graveyards and cemeteries, a weapon used in a crime, or buried drugs or money stashes. Civil investigations are more often trying to determine the location, amount and (trickier still) the timing of illegally dumped waste, which includes physical (e.g. fly-tipping) and liquid contaminants (e.g. hydrocarbons). There are many geophysical methods that could be employed, depending upon the target and background host materials. Most commonly ground-penetrating radar is used, but this may not always be an optimal search detection technique.
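Returning to the electrical resistivity method described at the start of this section: a measured voltage and injected current are converted into an apparent resistivity through a geometric factor that depends on the electrode array. The sketch below uses the standard dipole-dipole factor K = π n (n+1) (n+2) a; the readings themselves are invented for illustration.

import math

# Apparent resistivity for a dipole-dipole array:
#   rho_a = K * (dV / I),  with geometric factor K = pi * n * (n+1) * (n+2) * a,
# where a is the dipole length and n is the integer dipole separation factor.
def apparent_resistivity(dV, I, a, n):
    K = math.pi * n * (n + 1) * (n + 2) * a
    return K * dV / I

# Hypothetical readings at increasing n (deeper sensitivity as n grows):
a = 5.0  # electrode spacing in metres
for n, dV, I in [(1, 0.120, 0.5), (2, 0.031, 0.5), (3, 0.012, 0.5)]:
    rho = apparent_resistivity(dV, I, a, n)
    print(f"n = {n}: apparent resistivity = {rho:7.1f} ohm-m")

In a real survey such apparent resistivities from many electrode combinations would be inverted for a resistivity section, subject to the moisture-content caveats noted above.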
=== Geotechnical investigations === Geotechnical investigations use near-surface geophysics as a standard tool, both for initial site characterisation and to gauge where to subsequently undertake intrusive site investigation (SI), which involves boreholes and trial pits. In rural areas conventional SI methods may be employed, but in urban areas or on difficult sites, targeted geophysical techniques can rapidly characterise a site for follow-up, intensive surface or near-surface investigative methods. Common targets include buried utilities and still-active cables, cleared building foundations, soil type(s) and bedrock depth below ground level, solid/liquid waste contamination, the below-ground locations of mineshafts and relict mines, and even differing ground conditions. Indoor geophysical investigations have even been undertaken. Techniques vary depending upon the target and host materials as mentioned. == References == == Bibliography == == External links == Near Surface Geophysics Focus Group (AGU) The Near-Surface Geophysics Section of the Society of Exploration Geophysicists (SEG) Near Surface Geophysics: A Resource for all Things Geophysical Near Surface Geophysics Specialist Sub-Group of the Geological Society of London
Wikipedia/Near-surface_geophysics
Earth science or geoscience includes all fields of natural science related to the planet Earth. This is a branch of science dealing with the physical, chemical, and biological complex constitutions and synergistic linkages of Earth's four spheres: the biosphere, hydrosphere/cryosphere, atmosphere, and geosphere (or lithosphere). Earth science can be considered to be a branch of planetary science but with a much older history. == Geology == Geology is broadly the study of Earth's structure, substance, and processes. Geology is largely the study of the lithosphere, or Earth's surface, including the crust and rocks. It includes the physical characteristics and processes that occur in the lithosphere as well as how they are affected by geothermal energy. It incorporates aspects of chemistry, physics, and biology as elements of geology interact. Historical geology is the application of geology to interpret Earth history and how it has changed over time. Geochemistry studies the chemical components and processes of the Earth. Geophysics studies the physical properties of the Earth. Paleontology studies fossilized biological material in the lithosphere. Planetary geology studies geoscience as it pertains to extraterrestrial bodies. Geomorphology studies the origin of landscapes. Structural geology studies the deformation of rocks to produce mountains and lowlands. Resource geology studies how energy resources can be obtained from minerals. Environmental geology studies how pollution and contaminants affect soil and rock. Mineralogy is the study of minerals and includes the study of mineral formation, crystal structure, hazards associated with minerals, and the physical and chemical properties of minerals. Petrology is the study of rocks, including the formation and composition of rocks. Petrography is a branch of petrology that studies the typology and classification of rocks. == Earth's interior == Plate tectonics, mountain ranges, volcanoes, and earthquakes are geological phenomena that can be explained in terms of physical and chemical processes in the Earth's crust. Beneath the Earth's crust lies the mantle which is heated by the radioactive decay of heavy elements. The mantle is not quite solid and consists of magma which is in a state of semi-perpetual convection. This convection process causes the lithospheric plates to move, albeit slowly. The resulting process is known as plate tectonics. Areas of the crust where new crust is created are called divergent boundaries, those where it is brought back into the Earth are convergent boundaries and those where plates slide past each other, but no new lithospheric material is created or destroyed, are referred to as transform (or conservative) boundaries. Earthquakes result from the movement of the lithospheric plates, and they often occur near convergent boundaries where parts of the crust are forced into the earth as part of subduction. Plate tectonics might be thought of as the process by which the Earth is resurfaced. As the result of seafloor spreading, new crust and lithosphere is created by the flow of magma from the mantle to the near surface, through fissures, where it cools and solidifies. Through subduction, oceanic crust and lithosphere eventually returns to the convecting mantle. Volcanoes result primarily from the melting of subducted crust material. Crust material that is forced into the asthenosphere melts, and some portion of the melted material becomes light enough to rise to the surface—giving birth to volcanoes.
== Atmospheric science == Atmospheric science initially developed in the late 19th century as a means to forecast the weather through meteorology, the study of weather. Atmospheric chemistry was developed in the 20th century to measure air pollution and expanded in the 1970s in response to acid rain. Climatology studies the climate and climate change. The troposphere, stratosphere, mesosphere, thermosphere, and exosphere are the five layers which make up Earth's atmosphere. 75% of the mass in the atmosphere is located within the troposphere, the lowest layer. In all, the atmosphere is made up of about 78.0% nitrogen, 20.9% oxygen, and 0.92% argon, and small amounts of other gases including CO2 and water vapor. Water vapor and CO2 cause the Earth's atmosphere to catch and hold the Sun's energy through the greenhouse effect. This makes Earth's surface warm enough for liquid water and life. In addition to trapping heat, the atmosphere also protects living organisms by shielding the Earth's surface from cosmic rays. The magnetic field—created by the internal motions of the core—produces the magnetosphere which protects Earth's atmosphere from the solar wind. As the Earth is 4.5 billion years old, it would have lost its atmosphere by now if there were no protective magnetosphere. == Earth's magnetic field == == Hydrology == Hydrology is the study of the hydrosphere and the movement of water on Earth. It emphasizes the study of how humans use and interact with freshwater supplies. Study of water's movement is closely related to geomorphology and other branches of Earth science. Applied hydrology involves engineering to maintain aquatic environments and distribute water supplies. Subdisciplines of hydrology include oceanography, hydrogeology, ecohydrology, and glaciology. Oceanography is the study of oceans. Hydrogeology is the study of groundwater. It includes the mapping of groundwater supplies and the analysis of groundwater contaminants. Applied hydrogeology seeks to prevent contamination of groundwater and mineral springs and make it available as drinking water. The earliest exploitation of groundwater resources dates back to 3000 BC, and hydrogeology as a science was developed by hydrologists beginning in the 17th century. Ecohydrology is the study of ecological systems in the hydrosphere. It can be divided into the physical study of aquatic ecosystems and the biological study of aquatic organisms. Ecohydrology includes the effects that organisms and aquatic ecosystems have on one another as well as how these ecosystems are affected by humans. Glaciology is the study of the cryosphere, including glaciers and coverage of the Earth by ice and snow. Concerns of glaciology include access to glacial freshwater, mitigation of glacial hazards, obtaining resources that exist beneath frozen land, and addressing the effects of climate change on the cryosphere. == Ecology == Ecology is the study of the biosphere. This includes the study of nature and of how living things interact with the Earth and one another and the consequences of that. It considers how living things use resources such as oxygen, water, and nutrients from the Earth to sustain themselves. It also considers how humans and other living creatures cause changes to nature. == Physical geography == Physical geography is the study of Earth's systems and how they interact with one another as part of a single self-contained system.
It incorporates astronomy, mathematical geography, meteorology, climatology, geology, geomorphology, biology, biogeography, pedology, and soils geography. Physical geography is distinct from human geography, which studies the human populations on Earth, though it does include human effects on the environment. == Methodology == Methodologies vary depending on the nature of the subjects being studied. Studies typically fall into one of three categories: observational, experimental, or theoretical. Earth scientists often conduct sophisticated computer analysis or visit an interesting location to study earth phenomena (e.g. Antarctica or hot spot island chains). A foundational idea in Earth science is the notion of uniformitarianism, which states that "ancient geologic features are interpreted by understanding active processes that are readily observed." In other words, any geologic processes at work in the present have operated in the same ways throughout geologic time. This enables those who study Earth history to apply knowledge of how the Earth's processes operate in the present to gain insight into how the planet has evolved and changed throughout its long history. == Earth's spheres == In Earth science, it is common to conceptualize the Earth's surface as consisting of several distinct layers, often referred to as spheres: the lithosphere, the hydrosphere, the atmosphere, and the biosphere. These correspond to rocks, water, air, and life, and the concept of spheres is a useful tool for understanding the Earth's surface and its various processes. Also included by some are the cryosphere (corresponding to ice) as a distinct portion of the hydrosphere and the pedosphere (corresponding to soil) as an active and intermixed sphere. The following fields of science are generally categorized within the Earth sciences: Geology describes the rocky parts of the Earth's crust (or lithosphere) and its historic development. Major subdisciplines are mineralogy and petrology, geomorphology, paleontology, stratigraphy, structural geology, engineering geology, and sedimentology. Physical geography focuses on geography as an Earth science. Physical geography is the study of Earth's seasons, climate, atmosphere, soil, streams, landforms, and oceans. Physical geography can be divided into several branches or related fields, as follows: geomorphology, biogeography, environmental geography, palaeogeography, climatology, meteorology, coastal geography, hydrology, ecology, glaciology. Geophysics and geodesy investigate the shape of the Earth, its reaction to forces and its magnetic and gravity fields. Geophysicists explore the Earth's core and mantle as well as the tectonic and seismic activity of the lithosphere. Geophysics is commonly used to supplement the work of geologists in developing a comprehensive understanding of crustal geology, particularly in mineral and petroleum exploration. Seismologists use geophysics to understand plate tectonic movement, as well as predict seismic activity. Geochemistry studies the processes that control the abundance, composition, and distribution of chemical compounds and isotopes in geologic environments. Geochemists use the tools and principles of chemistry to study the Earth's composition, structure, processes, and other physical aspects. Major subdisciplines are aqueous geochemistry, cosmochemistry, isotope geochemistry and biogeochemistry. Soil science covers the outermost layer of the Earth's crust that is subject to soil formation processes (or pedosphere).
Major subdivisions in this field of study include edaphology and pedology. Ecology covers the interactions between organisms and their environment. This field of study differentiates the study of Earth from other planets in the Solar System, Earth being the only planet teeming with life. Hydrology, oceanography and limnology are studies which focus on the movement, distribution, and quality of the water and involve all the components of the hydrologic cycle on the Earth and its atmosphere (or hydrosphere). "Sub-disciplines of hydrology include hydrometeorology, surface water hydrology, hydrogeology, watershed science, forest hydrology, and water chemistry." Glaciology covers the icy parts of the Earth (or cryosphere). Atmospheric sciences cover the gaseous parts of the Earth (or atmosphere) between the surface and the exosphere (about 1000 km). Major subdisciplines include meteorology, climatology, atmospheric chemistry, and atmospheric physics. === Earth science breakup === == See also == == References == === Sources === == Further reading == == External links == Earth Science Picture of the Day, a service of Universities Space Research Association, sponsored by NASA Goddard Space Flight Center. Geoethics in Planetary and Space Exploration. Geology Buzz: Earth Science Archived 2021-11-04 at the Wayback Machine
Wikipedia/Earth_sciences
Seismic tomography or seismotomography is a technique for imaging the subsurface of the Earth using seismic waves. The properties of seismic waves are modified by the material through which they travel. By comparing the differences in seismic waves recorded at different locations, it is possible to create a model of the subsurface structure. Most commonly, these seismic waves are generated by earthquakes or man-made sources such as explosions. Different types of waves, including P, S, Rayleigh, and Love waves can be used for tomographic images, though each comes with its own benefits and downsides and is used depending on the geologic setting, seismometer coverage, distance from nearby earthquakes, and required resolution. The model created by tomographic imaging is almost always a seismic velocity model, and features within this model may be interpreted as structural, thermal, or compositional variations. Geoscientists apply seismic tomography to a wide variety of settings in which the subsurface structure is of interest, ranging in scale from whole-Earth structure to the upper few meters below the surface. == Theory == Tomography is solved as an inverse problem. Seismic data are compared to an initial Earth model and the model is modified until the best possible fit between the model predictions and observed data is found. Seismic waves would travel in straight lines if Earth was of uniform composition, but structural, chemical, and thermal variations affect the properties of seismic waves, most importantly their velocity, leading to the reflection and refraction of these waves. The location and magnitude of variations in the subsurface can be calculated by the inversion process, although solutions to tomographic inversions are non-unique. Most commonly, only the travel time of the seismic waves is considered in the inversion. However, advances in modeling techniques and computing power have allowed different parts, or the entirety, of the measured seismic waveform to be fit during the inversion. Seismic tomography is similar to medical x-ray computed tomography (CT scan) in that a computer processes receiver data to produce a 3D image, although CT scans use attenuation instead of travel-time difference. Seismic tomography has to deal with the analysis of curved ray paths which are reflected and refracted within the Earth, and potential uncertainty in the location of the earthquake hypocenter. CT scans use linear x-rays and a known source. == History == In the early 20th century, seismologists first used travel time variations in seismic waves from earthquakes to make discoveries such as the existence of the Moho and the depth to the outer core. While these findings shared some underlying principles with seismic tomography, modern tomography itself was not developed until the 1970s with the expansion of global seismic networks. Networks like the World-Wide Standardized Seismograph Network were initially motivated by underground nuclear tests, but quickly showed the benefits of their accessible, standardized datasets for geoscience. These developments occurred concurrently with advancements in modeling techniques and computing power that were required to solve large inverse problems and generate the theoretical seismograms needed to test the accuracy of a model. As early as 1972, researchers successfully used some of the underlying principles of modern seismic tomography to search for fast and slow areas in the subsurface.
The first widely cited publication that largely resembles modern seismic tomography was published in 1976 and used local earthquakes to determine the 3D velocity structure beneath Southern California. The following year, P wave delay times were used to create 2D velocity maps of the whole Earth at several depth ranges, representing an early 3D model. The first model using iterative techniques, which improve upon an initial model in small steps and are required when there are a large number of unknowns, appeared in 1984. The model was made possible by iterating upon the first radially anisotropic Earth model, created in 1981. A radially anisotropic Earth model describes changes in material properties, specifically seismic velocity, along a radial path through the Earth, and assumes this profile is valid for every path from the core to the surface. This 1984 study was also the first to apply the term "tomography" to seismology, as the term had originated in the medical field with X-ray tomography. Seismic tomography has continued to improve in the past several decades since its initial conception. The development of adjoint inversions, which are able to combine several different types of seismic data into a single inversion, helps negate some of the trade-offs associated with any individual data type. Historically, seismic waves have been modeled as 1D rays, a method referred to as "ray theory" that is relatively simple to model and can usually fit travel-time data well. However, recorded seismic waveforms contain much more information than just travel-time and are affected by a much wider path than is assumed by ray theory. Methods like the finite-frequency method attempt to account for this within the framework of ray theory. More recently, the development of "full waveform" or "waveform" tomography has abandoned ray theory entirely. This method models seismic wave propagation in its full complexity and can yield more accurate images of the subsurface. Originally these inversions were developed in exploration seismology in the 1980s and 1990s and were too computationally complex for global and regional scale studies, but the development of numerical modeling methods to simulate seismic waves has allowed waveform tomography to become more common. == Process == Seismic tomography uses seismic records to create 2D and 3D models of the subsurface through an inverse problem that minimizes the difference between the created model and the observed seismic data. Various methods are used to resolve anomalies in the crust, lithosphere, mantle, and core based on the availability of data and types of seismic waves that pass through the region. Longer wavelengths penetrate deeper into the Earth, but seismic waves are not sensitive to features significantly smaller than their wavelength and therefore provide a lower resolution. Different methods also make different assumptions, which can have a large effect on the image created. For example, commonly used tomographic methods work by iteratively improving an initial input model, and thus can produce unrealistic results if the initial model is unreasonable. P wave data are used in most local models and global models in areas with sufficient earthquake and seismograph density. S and surface wave data are used in global models when this coverage is not sufficient, such as in ocean basins and away from subduction zones.
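The travel-time inversion at the heart of this process can be sketched in a few lines. The toy problem below (straight rays through a two-cell medium, with invented geometry) sets up the linear system t = G·s relating travel times to cell slownesses and solves it by damped least squares; the same basic machinery, vastly scaled up and with curved ray paths, underlies the models described above.

import numpy as np

# Minimal straight-ray travel-time tomography: t = G s, where G[i, j] is the
# length of ray i inside cell j and s is the slowness (1/velocity) per cell.
# Solved by damped least squares, a common regularization for such inversions.
G = np.array([[10.0, 0.0],     # hypothetical path lengths in km
              [0.0, 10.0],
              [7.0, 7.0]])
s_true = np.array([1 / 5.0, 1 / 6.0])   # true slownesses, s/km (5 and 6 km/s)
t_obs = G @ s_true + np.random.default_rng(0).normal(0.0, 0.01, 3)  # noisy times

# Damped least squares: s = (G^T G + eps^2 I)^-1 G^T t
eps = 0.1
s_est = np.linalg.solve(G.T @ G + eps**2 * np.eye(2), G.T @ t_obs)
print("estimated velocities (km/s):", 1 / s_est)   # close to 5 and 6

The damping term is one of the "additional assumptions" needed because real tomographic systems are underdetermined; stronger damping stabilizes the inversion at the cost of smearing out anomalies.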
First-arrival times are the most widely used, but models utilizing reflected and refracted phases are used in more complex models, such as those imaging the core. Differential traveltimes between wave phases or types are also used. === Local tomography === Local tomographic models are often based on a temporary seismic array targeting specific areas, unless in a seismically active region with extensive permanent network coverage. These allow for the imaging of the crust and upper mantle. Diffraction and wave equation tomography use the full waveform, rather than just the first arrival times. The inversion of amplitude and phases of all arrivals provides more detailed density information than transmission traveltime alone. Despite the theoretical appeal, these methods are not widely employed because of the computing expense and difficult inversions. Reflection tomography originated with exploration geophysics. It uses an artificial source to resolve small-scale features at crustal depths. Wide-angle tomography is similar, but with a wide source to receiver offset. This allows for the detection of seismic waves refracted from sub-crustal depths and can determine continental architecture and details of plate margins. These two methods are often used together. Local earthquake tomography is used in seismically active regions with sufficient seismometer coverage. Given the proximity between source and receivers, a precise earthquake focus location must be known. This requires the simultaneous iteration of both structure and focus locations in model calculations. Teleseismic tomography uses waves from distant earthquakes that deflect upwards to a local seismic array. The models can reach depths similar to the array aperture, typically depths sufficient for imaging the crust and lithosphere (a few hundred kilometers). The waves travel near 30° from vertical, creating a vertical distortion to compact features. === Regional or global tomography === Regional to global scale tomographic models are generally based on long wavelengths. Various models have better agreement with each other than local models due to the large feature size they image, such as subducted slabs and superplumes. The trade off from whole mantle to whole Earth coverage is the coarse resolution (hundreds of kilometers) and difficulty imaging small features (e.g. narrow plumes). Although often used to image different parts of the subsurface, P and S wave derived models broadly agree where there is image overlap. These models use data from both permanent seismic stations and supplementary temporary arrays. First arrival traveltime P wave data are used to generate the highest resolution tomographic images of the mantle. These models are limited to regions with sufficient seismograph coverage and earthquake density, and therefore cannot be used for areas such as inactive plate interiors and ocean basins without seismic networks. Other phases of P waves are used to image the deeper mantle and core. In areas with limited seismograph or earthquake coverage, multiple phases of S waves can be used for tomographic models. These are of lower resolution than P wave models, due to the distances involved and fewer bounce-phase data available. S waves can also be used in conjunction with P waves for differential arrival time models. Surface waves can be used for tomography of the crust and upper mantle where no body wave (P and S) data are available. Both Rayleigh and Love waves can be used.
The low frequency waves lead to low resolution models, and therefore these models have difficulty resolving crustal structure. Free oscillations, or normal mode seismology, are the long wavelength, low frequency movements of the surface of the Earth which can be thought of as a type of surface wave. The frequencies of these oscillations can be obtained through Fourier transformation of seismic data. The models based on this method are of broad scale, but have the advantage of relatively uniform data coverage as compared to data sourced directly from earthquakes. Attenuation tomography attempts to extract the anelastic signal from the elastic-dominated waveform of seismic waves. Generally, it is assumed that seismic waves behave elastically, meaning individual rock particles that are displaced by the seismic wave eventually return to their original position. However, a comparatively small amount of permanent deformation does occur, which adds up to significant energy loss over large distances. This anelastic behavior is called attenuation, and in certain conditions can become just as important as the elastic response. It has been shown that the contribution of anelasticity to seismic velocity is highly sensitive to temperature, so attenuation tomography can help determine if a velocity feature is caused by a thermal or chemical variation, which can be ambiguous when assuming a purely elastic response. Ambient noise tomography uses random seismic waves generated by oceanic and atmospheric disturbances to recover the velocities of surface waves. Assuming ambient seismic noise is equal in amplitude and frequency content from all directions, cross-correlating the ambient noise recorded at two seismometers for the same time period should produce only seismic energy that travels from one station to the other. This allows one station to be treated as a "virtual source" of surface waves sent to the other station, the "virtual receiver". These surface waves are sensitive to the seismic velocity of the Earth at different depths depending on their period. A major advantage of this method is that it does not require an earthquake or man-made source. A disadvantage of the method is that an individual cross-correlation can be quite noisy due to the complexity of the real ambient noise field. Thus, many individual correlations over a shorter time period, typically one day, need to be created and averaged to improve the signal-to-noise ratio. While this has often required very large amounts of seismic data recorded over multiple years, more recent studies have successfully used much shorter time periods to create tomographic images with ambient noise. Waveforms are usually modeled as rays due to ray theory being significantly less complex to model than the full seismic wave equations. However, seismic waves are affected by the material properties of a wide area surrounding the ray path, not just the material through which the ray passes directly. The finite frequency effect is the influence the surrounding medium has on a seismic record. Finite frequency tomography accounts for this in determining both travel time and amplitude anomalies, increasing image resolution. This has the ability to resolve much larger variations (i.e. 10–30%) in material properties. == Applications == Seismic tomography can resolve anisotropy, anelasticity, density, and bulk sound velocity.
Variations in these parameters may be a result of thermal or chemical differences, which are attributed to processes such as mantle plumes, subducting slabs, and mineral phase changes. Larger scale features that can be imaged with tomography include the high velocities beneath continental shields and low velocities under ocean spreading centers. === Hotspots === The mantle plume hypothesis proposes that areas of volcanism not readily explained by plate tectonics, called hotspots, are a result of thermal upwelling within the mantle. Some researchers have proposed an upper mantle source above the 660 km discontinuity for these plumes, while others propose a much deeper source, possibly at the core-mantle boundary. While the source of mantle plumes has been highly debated since they were first proposed in the 1970s, most modern studies argue in favor of mantle plumes originating at or near the core-mantle boundary. This is in large part due to tomographic images that reveal both the plumes themselves as well as large low-velocity zones in the deep mantle that likely contribute to the formation of mantle plumes. These large low-shear-velocity provinces, as well as smaller ultra-low velocity zones, have been consistently observed across many tomographic models of the deep Earth. === Subduction Zones === Subducting plates are colder than the mantle into which they are moving. This creates a fast anomaly that is visible in tomographic images. Tomographic images have been made of most subduction zones around the world and have provided insight into the geometries of the crust and upper mantle in these areas. These images have revealed that subducting plates vary widely in how steeply they move into the mantle. Tomographic images have also revealed features such as deeper portions of the subducting plate tearing off from the upper portion. === Other applications === Tomography can be used to image faults to better understand their seismic hazard. This can be through imaging the fault itself by seeing differences in seismic velocity across the fault boundary or by determining near-surface velocity structure, which can have a large impact on the amplitude of ground-shaking during an earthquake due to site amplification effects. Near-surface velocity structure from tomographic images can also be useful for other hazards, such as monitoring of landslides for changes in near-surface moisture content, which has an effect on both seismic velocity and the potential for future landslides. Tomographic images of volcanoes have yielded new insights into properties of the underlying magmatic system. These images have most commonly been used to estimate the depth and volume of magma stored in the crust, but have also been used to constrain properties such as the geometry, temperature, or chemistry of the magma. It is important to note that both lab experiments and tomographic imaging studies have shown that recovering these properties from seismic velocity alone can be difficult due to the complexity of seismic wave propagation through focused zones of hot, potentially melted rocks. While primitive compared to tomography on Earth, seismic tomography has been proposed on other bodies in the Solar System and successfully used on the Moon. Data collected from four seismometers placed by the Apollo missions have been used many times to create 1-D velocity profiles for the Moon, and less commonly 3-D tomographic models.
Tomography relies on having multiple seismometers, but tomography-adjacent methods for constraining Earth structure have been used on other planets. While on Earth these methods are often used in combination with seismic tomography models to better constrain the locations of subsurface features, they can still provide useful information about the interiors of other planetary bodies when only a single seismometer is available. For example, data gathered by the SEIS (Seismic Experiment for Interior Structure) instrument on InSight on Mars has been able to detect the Martian core. == Limitations == Global seismic networks have expanded steadily since the 1960s, but are still concentrated on continents and in seismically active regions. Oceans, particularly in the southern hemisphere, are under-covered. Temporary seismic networks have helped improve tomographic models in regions of particular interest, but typically only collect data for months to a few years. The uneven distribution of earthquakes biases tomographic models towards seismically active regions. Methods that do not rely on earthquakes such as active source surveys or ambient noise tomography have helped image areas with little to no seismicity, though these both have their own limitations as compared to earthquake-based tomography. The type of seismic wave used in a model limits the resolution it can achieve. Longer wavelengths are able to penetrate deeper into the Earth, but can only be used to resolve large features. Finer resolution can be achieved with surface waves, with the trade off that they cannot be used in models deeper than the crust and upper mantle. The disparity between wavelength and feature scale causes anomalies to appear of reduced magnitude and size in images. P and S wave models respond differently to the types of anomalies. Models based solely on the wave that arrives first naturally prefer faster pathways, causing models based on these data to have lower resolution of slow (often hot) features. This can prove to be a significant issue in areas such as volcanoes where rocks are much hotter than their surroundings and oftentimes partially melted. Shallow models must also consider the significant lateral velocity variations in continental crust. Because seismometers have only been deployed in large numbers since the late-20th century, tomography is only capable of viewing changes in velocity structure over decades. For example, tectonic plates only move at millimeters per year, so the total amount of change in geologic structure due to plate tectonics since the development of seismic tomography is several orders of magnitude lower than the finest resolution possible with modern seismic networks. However, seismic tomography has still been used to view near-surface velocity structure changes at time scales of years to months. Tomographic solutions are non-unique. Although statistical methods can be used to analyze the validity of a model, unresolvable uncertainty remains. This contributes to difficulty comparing the validity of different model results. Computing power limits the amount of seismic data, number of unknowns, mesh size, and iterations in tomographic models. This is of particular importance in ocean basins, which due to limited network coverage and earthquake density require more complex processing of distant data. Shallow oceanic models also require smaller model mesh size due to the thinner crust. 
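One common way to probe the non-uniqueness and resolution limits discussed above is a checkerboard test: a synthetic model of alternating fast and slow anomalies is used to generate artificial data, which are then inverted with the same ray geometry as the real experiment; wherever the checkerboard is recovered, the data can resolve features of that size. The sketch below is a miniature version with an entirely invented path-length matrix standing in for real ray coverage.

import numpy as np

# Checkerboard resolution test, in miniature: invert synthetic travel times
# generated from a known alternating-anomaly slowness model and check how
# well the pattern is recovered with the available (hypothetical) rays.
rng = np.random.default_rng(1)
n_cells, n_rays = 16, 60
s_checker = 0.2 + 0.02 * np.array([(-1) ** i for i in range(n_cells)])  # slowness

G = rng.uniform(0, 5, size=(n_rays, n_cells))   # stand-in path-length matrix
t_syn = G @ s_checker                           # noise-free synthetic data

eps = 0.5                                       # damping
s_rec = np.linalg.solve(G.T @ G + eps**2 * np.eye(n_cells), G.T @ t_syn)

recovery = np.corrcoef(s_checker, s_rec)[0, 1]
print(f"correlation between input and recovered checkerboard: {recovery:.3f}")

With sparse or one-sided ray coverage, as in ocean basins, the recovered pattern degrades and the correlation drops, which is exactly how poorly resolved regions are flagged in practice.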
Tomographic images are typically presented with a color ramp representing the strength of the anomalies. This has the consequence of making equal changes appear of differing magnitude based on visual perceptions of color, such as the change from orange to red being more subtle than blue to yellow. The degree of color saturation can also visually skew interpretations. These factors should be considered when analyzing images. == See also == Banana doughnut theory EarthScope == References == == External links == SubMachine is a collection of web-based tools for the interactive visualisation, analysis, and quantitative comparison of global-scale, volumetric (3-D) data sets of the subsurface, with supporting tools for interacting with other, complementary models and data sets. EarthScope Education and Outreach: Seismic Tomography Background. Incorporated Research Institutions for Seismology (IRIS). Retrieved 17 January 2013. Tomography Animation. Incorporated Research Institutions for Seismology (IRIS). Retrieved 17 January 2013.
Wikipedia/Seismic_tomography
Reviews of Geophysics is a quarterly peer-reviewed scientific journal published by Wiley-Blackwell on behalf of the American Geophysical Union. The current editor-in-chief is Fabio Florindo (National Institute of Geophysics and Volcanology–Rome). == History == Reviews of Geophysics (ISSN 0096-1043) was established in 1963. Between February 1970 and November 1984 it was named Reviews of Geophysics and Space Physics (ISSN 0034-6853). Throughout the years its frequency varied. It was quarterly in 1970–74, five issues per year in 1975, quarterly again in 1976–78, eight issues in 1979, quarterly in 1980–82, and finally eight issues in 1983, before being renamed Reviews of Geophysics in 1984. == Aims and scope == As a review publication by invitation only, Reviews of Geophysics provides an overview of geophysics research. It integrates summations of previous scientific investigations with active research areas. Critical analysis of the progress in, and direction of, geophysics is provided. Topical coverage includes solid-earth geophysics, cryospheric science, oceans and atmospheric studies (meteorology and oceanography), along with the physics of the Solar System and its processes. For example, coverage includes processes that tend to concern the public such as rainfall rates, trends in marine fisheries, earthquake probabilities, and volcanic eruption potentials. == Notable articles == The three most highly cited articles published in Reviews of Geophysics are: Mellor, G. L.; Yamada, T. (1982). "Development of a turbulence closure model for geophysical fluid problems". Reviews of Geophysics. 20 (4): 851–875. Bibcode:1982RvGSP..20..851M. doi:10.1029/RG020i004p00851. Farr, Tom G.; Rosen, Paul A.; Caro, Edward; et al. (2007). "The Shuttle Radar Topography Mission". Reviews of Geophysics. doi:10.1029/2005RG000183. Taylor, Stuart Ross; McLennan, Scott M. (1995). "The geochemical evolution of the continental crust". Reviews of Geophysics. doi:10.1029/95RG00262. == Abstracting and indexing == The journal is abstracted and indexed in several bibliographic databases. According to the Journal Citation Reports, the journal has a 2020 impact factor of 22.000. == See also == List of scientific journals in earth and atmospheric sciences List of scientific journals in physics == References == == External links == Official website
Wikipedia/Reviews_of_Geophysics
The Adams–Williamson equation, named after Leason H. Adams and E. D. Williamson, is an equation used to determine density as a function of radius, more commonly used to determine the relation between the velocities of seismic waves and the density of the Earth's interior. Given the average density of rocks at the Earth's surface and profiles of the P-wave and S-wave speeds as a function of depth, it can predict how density increases with depth. It assumes that the compression is adiabatic and that the Earth is spherically symmetric, homogeneous, and in hydrostatic equilibrium. It can also be applied to spherical shells with that property. It is an important part of models of the Earth's interior such as the Preliminary reference Earth model (PREM). == History == Williamson and Adams first developed the theory in 1923. They concluded that "It is therefore impossible to explain the high density of the Earth on the basis of compression alone. The dense interior cannot consist of ordinary rocks compressed to a small volume; we must therefore fall back on the only reasonable alternative, namely, the presence of a heavier material, presumably some metal, which, to judge from its abundance in the Earth's crust, in meteorites and in the Sun, is probably iron." == Theory == The two types of seismic body waves are compressional waves (P-waves) and shear waves (S-waves). Both have speeds that are determined by the elastic properties of the medium they travel through, in particular the bulk modulus K, the shear modulus μ, and the density ρ. In terms of these parameters, the P-wave speed vp and the S-wave speed vs are v p = K + ( 4 / 3 ) μ ρ v s = μ ρ . {\displaystyle {\begin{aligned}v_{p}&={\sqrt {\frac {K+(4/3)\mu }{\rho }}}\\v_{s}&={\sqrt {\frac {\mu }{\rho }}}.\end{aligned}}} These two speeds can be combined in a seismic parameter Φ = v p 2 − 4 3 v s 2 = K ρ . {\displaystyle \Phi =v_{p}^{2}-{\frac {4}{3}}v_{s}^{2}={\frac {K}{\rho }}.} (1) The definition of the bulk modulus, K = − V d P d V , {\displaystyle K=-V{\frac {dP}{dV}},} is equivalent to K = ρ d P d ρ , {\displaystyle K=\rho {\frac {dP}{d\rho }},} so that Φ = d P d ρ . {\displaystyle \Phi ={\frac {dP}{d\rho }}.} (2) Suppose a region at a distance r from the Earth's center can be considered a fluid in hydrostatic equilibrium; it is acted on by gravitational attraction from the part of the Earth that is below it and pressure from the part above it. Also suppose that the compression is adiabatic (so thermal expansion does not contribute to density variations). The pressure P(r) varies with r as d P d r = − ρ ( r ) g ( r ) , {\displaystyle {\frac {dP}{dr}}=-\rho (r)g(r),} (3) where g(r) is the gravitational acceleration at radius r. Combining equations (1), (2) and (3) gives the Adams–Williamson equation: d ρ d r = − ρ ( r ) g ( r ) Φ ( r ) . {\displaystyle {\frac {d\rho }{dr}}=-{\frac {\rho (r)g(r)}{\Phi (r)}}.} This equation can be integrated to obtain ln ⁡ ( ρ ρ 0 ) = − ∫ r 0 r g ( r ) Φ ( r ) d r , {\displaystyle \ln \left({\frac {\rho }{\rho _{0}}}\right)=-\int _{r_{0}}^{r}{\frac {g(r)}{\Phi (r)}}dr,} where r0 is the radius at the Earth's surface and ρ0 is the density at the surface. Given ρ0 and profiles of the P- and S-wave speeds, the radial dependence of the density can be determined by numerical integration. == References ==
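A minimal numerical sketch of that integration follows. The seismic parameter profile Φ(r) used here is a smooth invented stand-in rather than real Earth data, and gravity is approximated as constant, which is roughly true through the mantle; both choices are assumptions made for illustration.

# Integrate the Adams-Williamson equation, d(rho)/dr = -rho(r) g(r) / Phi(r),
# inward from the surface by simple Euler steps.
R = 6.371e6    # Earth radius, m
g = 9.8        # gravity, m/s^2 (roughly constant through the mantle)

def phi(r):
    """Invented seismic parameter Phi = vp^2 - (4/3) vs^2, in m^2/s^2."""
    return 4.0e7 + 6.0e7 * (1.0 - r / R)   # grows smoothly with depth

rho = 3300.0   # density of uppermost-mantle rock at the surface, kg/m^3
dr = 1.0e3     # integration step, m
r = R
while r > 0.5 * R:                  # integrate down to half the radius
    rho += dr * rho * g / phi(r)    # r decreases, so rho grows by this amount
    r -= dr

print(f"density at r = 0.5 R: {rho:.0f} kg/m^3")   # ~6000 with this toy profile

Note what the equation cannot do on its own: because it assumes homogeneity and adiabatic compression, the computed increase is compression only, which is exactly why Williamson and Adams concluded that a dense iron core is required.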
Wikipedia/Adams–Williamson_equation
In physics and chemistry, an equation of state is a thermodynamic equation relating state variables, which describe the state of matter under a given set of physical conditions, such as pressure, volume, temperature, or internal energy. Most modern equations of state are formulated in the Helmholtz free energy. Equations of state are useful in describing the properties of pure substances and mixtures in liquid, gas, and solid states, as well as the state of matter in the interior of stars. Though there are many equations of state, none accurately predicts the properties of substances under all conditions. The quest for a universal equation of state has spanned three centuries. == Overview == At present, there is no single equation of state that accurately predicts the properties of all substances under all conditions. One well-known example, the ideal gas law, correlates the densities of gases and liquids to temperatures and pressures; it is roughly accurate for weakly polar gases at low pressures and moderate temperatures. This equation becomes increasingly inaccurate at higher pressures and lower temperatures, and fails to predict condensation from a gas to a liquid. The general form of an equation of state may be written as f ( p , V , T ) = 0 {\displaystyle f(p,V,T)=0} where p {\displaystyle p} is the pressure, V {\displaystyle V} the volume, and T {\displaystyle T} the temperature of the system. Other state variables may also be used in that form. The number of independent variables is directly related to the Gibbs phase rule; that is, it depends on the number of substances and phases in the system. An equation used to model this relationship is called an equation of state. In most cases this model will comprise some empirical parameters that are usually adjusted to measurement data. Equations of state can also describe solids, including the transition of solids from one crystalline state to another. Equations of state are also used for the modeling of the state of matter in the interior of stars, including neutron stars, dense matter (quark–gluon plasmas) and radiation fields. A related concept is the perfect fluid equation of state used in cosmology. Equations of state are applied in many fields, such as process engineering and the petroleum and pharmaceutical industries. Any consistent set of units may be used, although SI units are preferred. Absolute temperature refers to the use of the kelvin (K), with zero being absolute zero. n {\displaystyle n} , number of moles of a substance V m {\displaystyle V_{m}} , V n {\displaystyle {\frac {V}{n}}} , molar volume, the volume of 1 mole of gas or liquid R {\displaystyle R} , ideal gas constant ≈ 8.3144621 J/mol·K p c {\displaystyle p_{c}} , pressure at the critical point V c {\displaystyle V_{c}} , molar volume at the critical point T c {\displaystyle T_{c}} , absolute temperature at the critical point == Historical background == The history of equations of state essentially begins three centuries ago with the ideal gas law: p V = n R T {\displaystyle pV=nRT} Boyle's law was one of the earliest formulations of an equation of state. In 1662, the Irish physicist and chemist Robert Boyle performed a series of experiments employing a J-shaped glass tube, which was sealed on one end. Mercury was added to the tube, trapping a fixed quantity of air in the short, sealed end of the tube. Then the volume of gas was measured as additional mercury was added to the tube.
The pressure of the gas could be determined by the difference between the mercury level in the short end of the tube and that in the long, open end. Through these experiments, Boyle noted that the gas volume varied inversely with the pressure. In mathematical form, this can be stated as: p V = c o n s t a n t . {\displaystyle pV=\mathrm {constant} .} The above relationship has also been attributed to Edme Mariotte and is sometimes referred to as Mariotte's law. However, Mariotte's work was not published until 1676. In 1787 the French physicist Jacques Charles found that oxygen, nitrogen, hydrogen, carbon dioxide, and air expand to roughly the same extent over the same 80-kelvin interval. This is known today as Charles's law. Later, in 1802, Joseph Louis Gay-Lussac published results of similar experiments, indicating a linear relationship between volume and temperature: V 1 T 1 = V 2 T 2 . {\displaystyle {\frac {V_{1}}{T_{1}}}={\frac {V_{2}}{T_{2}}}.} Dalton's law (1801) of partial pressure states that the pressure of a mixture of gases is equal to the sum of the pressures of all of the constituent gases alone. Mathematically, this can be represented for n {\displaystyle n} species as: p total = p 1 + p 2 + ⋯ + p n = ∑ i = 1 n p i . {\displaystyle p_{\text{total}}=p_{1}+p_{2}+\cdots +p_{n}=\sum _{i=1}^{n}p_{i}.} In 1834, Émile Clapeyron combined Boyle's law and Charles's law into the first statement of the ideal gas law. Initially, the law was formulated as pVm = R(TC + 267) (with temperature expressed in degrees Celsius), where R is the gas constant. However, later work revealed that the number should actually be closer to 273.2, and then the Celsius scale was defined with 0 ∘ C = 273.15 K {\displaystyle 0~^{\circ }\mathrm {C} =273.15~\mathrm {K} } , giving: p V m = R ( T C + 273.15 ∘ C ) . {\displaystyle pV_{m}=R\left(T_{C}+273.15\ {}^{\circ }{\text{C}}\right).} In 1873, J. D. van der Waals introduced the first equation of state derived by the assumption of a finite volume occupied by the constituent molecules. His new formula revolutionized the study of equations of state, and was the starting point of cubic equations of state, which most famously continued via the Redlich–Kwong equation of state and the Soave modification of Redlich–Kwong. The van der Waals equation of state can be written as ( P + a 1 V m 2 ) ( V m − b ) = R T {\displaystyle \left(P+a{\frac {1}{V_{m}^{2}}}\right)(V_{m}-b)=RT} where a {\displaystyle a} is a parameter describing the attractive energy between particles and b {\displaystyle b} is a parameter describing the volume of the particles. == Ideal gas law == === Classical ideal gas law === The classical ideal gas law may be written p V = n R T . {\displaystyle pV=nRT.} In the form shown above, the equation of state is thus f ( p , V , T ) = p V − n R T = 0. {\displaystyle f(p,V,T)=pV-nRT=0.} If the calorically perfect gas approximation is used, then the ideal gas law may also be expressed as follows p = ρ ( γ − 1 ) e {\displaystyle p=\rho (\gamma -1)e} where ρ {\displaystyle \rho } is the density of the gas, γ = C p / C v {\displaystyle \gamma =C_{p}/C_{v}} is the (constant) adiabatic index (ratio of specific heats), e = C v T {\displaystyle e=C_{v}T} is the internal energy per unit mass (the "specific internal energy"), C v {\displaystyle C_{v}} is the specific heat capacity at constant volume, and C p {\displaystyle C_{p}} is the specific heat capacity at constant pressure.
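As a minimal worked example of the difference these formulas make, the sketch below compares the pressure of one mole of gas predicted by the ideal gas law and by the van der Waals equation at the same temperature and molar volume. The a and b constants are approximate literature values for CO2 and are quoted here only for illustration.

```python
# Compare the pressure of one mole of CO2 predicted by the ideal gas law
# and by the van der Waals equation. The constants a and b are approximate
# literature values for CO2, used here for illustration only.
R = 8.314          # gas constant, J/(mol K)
a = 0.364          # Pa m^6 / mol^2 (approximate, CO2)
b = 4.27e-5        # m^3 / mol (approximate, CO2)

T = 300.0          # temperature, K
Vm = 1.0e-3        # molar volume, m^3/mol

p_ideal = R * T / Vm
p_vdw = R * T / (Vm - b) - a / Vm**2

print(f"ideal gas:     {p_ideal/1e5:.2f} bar")   # about 24.9 bar
print(f"van der Waals: {p_vdw/1e5:.2f} bar")     # about 22.4 bar
```

At this modest density the two estimates differ by roughly 10%, with the attractive a term lowering the van der Waals pressure more than the excluded-volume b term raises it.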
=== Quantum ideal gas law === Since the classical ideal gas law is well suited to most atomic and molecular gases, let us describe the equation of state for elementary particles with mass m {\displaystyle m} and spin s {\displaystyle s} that takes quantum effects into account. In the following, the upper sign will always correspond to Fermi–Dirac statistics and the lower sign to Bose–Einstein statistics. The equation of state of such gases with N {\displaystyle N} particles occupying a volume V {\displaystyle V} with temperature T {\displaystyle T} and pressure p {\displaystyle p} is given by p = ( 2 s + 1 ) 2 m 3 k B 5 T 5 3 π 2 ℏ 3 ∫ 0 ∞ z 3 / 2 d z e z − μ / ( k B T ) ± 1 {\displaystyle p={\frac {(2s+1){\sqrt {2m^{3}k_{\text{B}}^{5}T^{5}}}}{3\pi ^{2}\hbar ^{3}}}\int _{0}^{\infty }{\frac {z^{3/2}\,\mathrm {d} z}{e^{z-\mu /(k_{\text{B}}T)}\pm 1}}} where k B {\displaystyle k_{\text{B}}} is the Boltzmann constant and the chemical potential μ ( T , N / V ) {\displaystyle \mu (T,N/V)} is given by the following implicit function N V = ( 2 s + 1 ) ( m k B T ) 3 / 2 2 π 2 ℏ 3 ∫ 0 ∞ z 1 / 2 d z e z − μ / ( k B T ) ± 1 . {\displaystyle {\frac {N}{V}}={\frac {(2s+1)(mk_{\text{B}}T)^{3/2}}{{\sqrt {2}}\pi ^{2}\hbar ^{3}}}\int _{0}^{\infty }{\frac {z^{1/2}\,\mathrm {d} z}{e^{z-\mu /(k_{\text{B}}T)}\pm 1}}.} In the limiting case where e μ / ( k B T ) ≪ 1 {\displaystyle e^{\mu /(k_{\text{B}}T)}\ll 1} , this equation of state will reduce to that of the classical ideal gas. It can be shown that the above equation of state in the limit e μ / ( k B T ) ≪ 1 {\displaystyle e^{\mu /(k_{\text{B}}T)}\ll 1} reduces to p V = N k B T [ 1 ± π 3 / 2 2 ( 2 s + 1 ) N ℏ 3 V ( m k B T ) 3 / 2 + ⋯ ] {\displaystyle pV=Nk_{\text{B}}T\left[1\pm {\frac {\pi ^{3/2}}{2(2s+1)}}{\frac {N\hbar ^{3}}{V(mk_{\text{B}}T)^{3/2}}}+\cdots \right]} With a fixed number density N / V {\displaystyle N/V} , decreasing the temperature causes, in a Fermi gas, an increase in pressure above its classical value, implying an effective repulsion between particles (an apparent repulsion due to quantum exchange effects, not to actual interactions, since interaction forces are neglected in an ideal gas), and, in a Bose gas, a decrease in pressure below its classical value, implying an effective attraction. The quantum nature of this equation lies in its dependence on s and ħ. == Cubic equations of state == Cubic equations of state are called such because they can be rewritten as a cubic function of V m {\displaystyle V_{m}} . Cubic equations of state originated from the van der Waals equation of state. Hence, all cubic equations of state can be considered 'modified van der Waals equations of state'. There is a very large number of such cubic equations of state. For process engineering, cubic equations of state are today still highly relevant, e.g. the Peng–Robinson equation of state or the Soave–Redlich–Kwong equation of state. == Virial equations of state == === Virial equation of state === p V m R T = A + B V m + C V m 2 + D V m 3 + ⋯ {\displaystyle {\frac {pV_{m}}{RT}}=A+{\frac {B}{V_{m}}}+{\frac {C}{V_{m}^{2}}}+{\frac {D}{V_{m}^{3}}}+\cdots } Although usually not the most convenient equation of state, the virial equation is important because it can be derived directly from statistical mechanics. This equation is also called the Kamerlingh Onnes equation.
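As an illustration of how a truncated virial series is used in practice, the sketch below evaluates the compressibility factor Z = pVm/RT keeping terms through the third coefficient. The numerical values of B and C are hypothetical placeholders, not measured coefficients; in a real application both would be temperature-dependent quantities taken from experiment or theory.

```python
# Compressibility factor Z = pVm/RT from a virial series truncated after
# the third coefficient. B and C below are hypothetical placeholder values.
B = -4.2e-5   # m^3/mol   (hypothetical second virial coefficient)
C = 2.0e-9    # m^6/mol^2 (hypothetical third virial coefficient)

def Z(Vm):
    """Truncated virial compressibility factor at molar volume Vm."""
    return 1.0 + B / Vm + C / Vm**2

for Vm in (1e-2, 1e-3, 1e-4):   # m^3/mol, increasingly dense gas
    print(f"Vm = {Vm:.0e} m^3/mol -> Z = {Z(Vm):.4f}")
```

As the molar volume shrinks, Z departs further from the ideal-gas value of 1, which is exactly the behavior the series is built to capture.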
If appropriate assumptions are made about the mathematical form of intermolecular forces, theoretical expressions can be developed for each of the coefficients. A is the first virial coefficient, which has a constant value of 1 and expresses the fact that, when volume is large, all fluids behave like ideal gases. The second virial coefficient B corresponds to interactions between pairs of molecules, C to triplets, and so on. Accuracy can be increased indefinitely by considering higher order terms. The coefficients B, C, D, etc. are functions of temperature only. === The BWR equation of state === p = ρ R T + ( B 0 R T − A 0 − C 0 T 2 + D 0 T 3 − E 0 T 4 ) ρ 2 + ( b R T − a − d T ) ρ 3 + α ( a + d T ) ρ 6 + c ρ 3 T 2 ( 1 + γ ρ 2 ) exp ⁡ ( − γ ρ 2 ) {\displaystyle {\begin{aligned}p=\rho RT&+\left(B_{0}RT-A_{0}-{\frac {C_{0}}{T^{2}}}+{\frac {D_{0}}{T^{3}}}-{\frac {E_{0}}{T^{4}}}\right)\rho ^{2}+\left(bRT-a-{\frac {d}{T}}\right)\rho ^{3}\\[2pt]&+\alpha \left(a+{\frac {d}{T}}\right)\rho ^{6}+{\frac {c\rho ^{3}}{T^{2}}}\left(1+\gamma \rho ^{2}\right)\exp \left(-\gamma \rho ^{2}\right)\end{aligned}}} where p {\displaystyle p} is the pressure and ρ {\displaystyle \rho } is the molar density. Values of the various parameters can be found in reference materials. The BWR equation of state has also frequently been used for the modelling of the Lennard-Jones fluid. There are several extensions and modifications of the classical BWR equation of state available. The Benedict–Webb–Rubin–Starling equation of state is a modified BWR equation of state and can be written as p = ρ R T + ( B 0 R T − A 0 − C 0 T 2 + D 0 T 3 − E 0 T 4 ) ρ 2 + ( b R T − a − d T + c T 2 ) ρ 3 + α ( a + d T ) ρ 6 {\displaystyle {\begin{aligned}p=\rho RT&+\left(B_{0}RT-A_{0}-{\frac {C_{0}}{T^{2}}}+{\frac {D_{0}}{T^{3}}}-{\frac {E_{0}}{T^{4}}}\right)\rho ^{2}\\[2pt]&+\left(bRT-a-{\frac {d}{T}}+{\frac {c}{T^{2}}}\right)\rho ^{3}+\alpha \left(a+{\frac {d}{T}}\right)\rho ^{6}\end{aligned}}} Note that in this virial equation, the fourth and fifth virial terms are zero. The second virial coefficient decreases monotonically as temperature is lowered. The third virial coefficient increases monotonically as temperature is lowered. The Lee–Kesler equation of state is based on the corresponding states principle, and is a modification of the BWR equation of state. p = R T V ( 1 + B V r + C V r 2 + D V r 5 + c 4 T r 3 V r 2 ( β + γ V r 2 ) exp ⁡ ( − γ V r 2 ) ) {\displaystyle p={\frac {RT}{V}}\left(1+{\frac {B}{V_{r}}}+{\frac {C}{V_{r}^{2}}}+{\frac {D}{V_{r}^{5}}}+{\frac {c_{4}}{T_{r}^{3}V_{r}^{2}}}\left(\beta +{\frac {\gamma }{V_{r}^{2}}}\right)\exp \left(-{\frac {\gamma }{V_{r}^{2}}}\right)\right)} == Physically based equations of state == There is a large number of physically based equations of state available today. Most of those are formulated in the Helmholtz free energy as a function of temperature and density (and, for mixtures, additionally the composition). The Helmholtz energy is formulated as a sum of multiple terms modelling different types of molecular interaction or molecular structures, e.g. the formation of chains or dipolar interactions. Hence, physically based equations of state model the effect of molecular size, attraction and shape as well as hydrogen bonding and polar interactions of fluids. In general, physically based equations of state give more accurate results than traditional cubic equations of state, especially for systems containing liquids or solids.
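Because such models are Helmholtz-explicit, pressure is obtained by differentiation rather than read off directly: for a reduced Helmholtz energy α(τ, δ) = a(T, ρ)/(RT), the pressure follows from p = ρRT(1 + δ ∂α_res/∂δ). The sketch below shows the mechanics with a one-term toy residual; the coefficient, exponent, and reducing constants are invented for illustration and do not correspond to any published equation of state.

```python
# Pressure from a Helmholtz-explicit model via
#   p = rho * R * T * (1 + delta * d(alpha_res)/d(delta)).
# The one-term residual below is a toy model with made-up coefficients.
R = 8.314                      # gas constant, J/(mol K)
T_c, rho_c = 304.1, 10600.0    # reducing values (roughly CO2-like), K, mol/m^3

def alpha_res(tau, delta):
    """Toy residual Helmholtz term n * delta * tau^0.25 (illustrative)."""
    n = -0.5
    return n * delta * tau**0.25

def pressure(T, rho, h=1e-6):
    tau, delta = T_c / T, rho / rho_c
    # central finite difference for d(alpha_res)/d(delta)
    dalpha_ddelta = (alpha_res(tau, delta + h)
                     - alpha_res(tau, delta - h)) / (2 * h)
    return rho * R * T * (1.0 + delta * dalpha_ddelta)

print(f"p = {pressure(300.0, 500.0)/1e5:.2f} bar")
```

Production implementations differentiate the (much longer) residual analytically, but the structure of the evaluation is the same.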
Most physically based equations of state are built on a monomer term describing the Lennard-Jones fluid or the Mie fluid. === Perturbation theory-based models === Perturbation theory is frequently used for modelling dispersive interactions in an equation of state. There is a large number of perturbation theory based equations of state available today, e.g. for the classical Lennard-Jones fluid. The two most important theories used for these types of equations of state are the Barker–Henderson perturbation theory and the Weeks–Chandler–Andersen perturbation theory. === Statistical associating fluid theory (SAFT) === An important contribution to physically based equations of state is the statistical associating fluid theory (SAFT), which contributes a Helmholtz energy term that describes the association (a.k.a. hydrogen bonding) in fluids and can also be applied for modelling chain formation (in the limit of infinite association strength). The SAFT equation of state was developed using statistical mechanical methods (in particular the perturbation theory of Wertheim) to describe the interactions between molecules in a system. The idea of a SAFT equation of state was first proposed by Chapman et al. in 1988 and 1989. Many different versions of the SAFT models have been proposed, but all use the same chain and association terms derived by Chapman et al. == Multiparameter equations of state == Multiparameter equations of state are empirical equations of state that can be used to represent pure fluids with high accuracy. They are empirical correlations of experimental data and are usually formulated in the Helmholtz free energy. The functional form of these models is in most parts not physically motivated. They can usually be applied in both liquid and gaseous states. Empirical multiparameter equations of state represent the Helmholtz energy of the fluid as the sum of ideal gas and residual terms. Both terms are explicit in temperature and density: a ( T , ρ ) R T = a i d e a l g a s ( τ , δ ) + a residual ( τ , δ ) R T {\displaystyle {\frac {a(T,\rho )}{RT}}={\frac {a^{\mathrm {ideal\,gas} }(\tau ,\delta )+a^{\textrm {residual}}(\tau ,\delta )}{RT}}} with τ = T r T , δ = ρ ρ r {\displaystyle \tau ={\frac {T_{r}}{T}},\delta ={\frac {\rho }{\rho _{r}}}} The reducing density ρr and reducing temperature Tr are in most cases the critical values for the pure fluid. Because integration of the multiparameter equations of state is not required and thermodynamic properties can be determined using classical thermodynamic relations, there are few restrictions as to the functional form of the ideal or residual terms. Typical multiparameter equations of state use upwards of 50 fluid-specific parameters, but are able to represent the fluid's properties with high accuracy. Multiparameter equations of state are available currently for about 50 of the most common industrial fluids including refrigerants. The IAPWS95 reference equation of state for water is also a multiparameter equation of state. Mixture models for multiparameter equations of state exist as well. Yet, multiparameter equations of state applied to mixtures are known to exhibit artifacts at times. One example of such an equation of state is the form proposed by Span and Wagner.
a r e s i d u a l = ∑ i = 1 8 ∑ j = − 8 12 n i , j δ i τ j / 8 + ∑ i = 1 5 ∑ j = − 8 24 n i , j δ i τ j / 8 exp ⁡ ( − δ ) + ∑ i = 1 5 ∑ j = 16 56 n i , j δ i τ j / 8 exp ⁡ ( − δ 2 ) + ∑ i = 2 4 ∑ j = 24 38 n i , j δ i τ j / 2 exp ⁡ ( − δ 3 ) {\displaystyle {\begin{aligned}a^{\mathrm {residual} }={}&\sum _{i=1}^{8}\sum _{j=-8}^{12}n_{i,j}\delta ^{i}\tau ^{j/8}+\sum _{i=1}^{5}\sum _{j=-8}^{24}n_{i,j}\delta ^{i}\tau ^{j/8}\exp \left(-\delta \right)\\&+\sum _{i=1}^{5}\sum _{j=16}^{56}n_{i,j}\delta ^{i}\tau ^{j/8}\exp \left(-\delta ^{2}\right)+\sum _{i=2}^{4}\sum _{j=24}^{38}n_{i,j}\delta ^{i}\tau ^{j/2}\exp \left(-\delta ^{3}\right)\end{aligned}}} This is a somewhat simpler form that is intended to be used more in technical applications. Equations of state that require a higher accuracy use a more complicated form with more terms. == List of further equations of state == === Stiffened equation of state === When considering water under very high pressures, in situations such as underwater nuclear explosions, sonic shock lithotripsy, and sonoluminescence, the stiffened equation of state is often used: p = ρ ( γ − 1 ) e − γ p 0 {\displaystyle p=\rho (\gamma -1)e-\gamma p^{0}\,} where e {\displaystyle e} is the internal energy per unit mass, γ {\displaystyle \gamma } is an empirically determined constant typically taken to be about 6.1, and p 0 {\displaystyle p^{0}} is another constant, representing the molecular attraction between water molecules. The magnitude of the correction is about 2 gigapascals (20,000 atmospheres). The equation is stated in this form because the speed of sound in water is given by c 2 = γ ( p + p 0 ) / ρ {\displaystyle c^{2}=\gamma \left(p+p^{0}\right)/\rho } . Thus water behaves as though it is an ideal gas that is already under about 20,000 atmospheres (2 GPa) pressure; this explains why water is commonly assumed to be incompressible: when the external pressure changes from 1 atmosphere to 2 atmospheres (100 kPa to 200 kPa), the water behaves as an ideal gas would when changing from 20,001 to 20,002 atmospheres (2000.1 MPa to 2000.2 MPa). This equation mispredicts the specific heat capacity of water, but few simple alternatives are available for severely nonisentropic processes such as strong shocks. === Morse oscillator equation of state === An equation of state for the Morse oscillator has been derived, and it has the following form: p = Γ 1 ν + Γ 2 ν 2 {\displaystyle p=\Gamma _{1}\nu +\Gamma _{2}\nu ^{2}} where Γ 1 {\displaystyle \Gamma _{1}} is the first-order virial parameter, which depends on the temperature, and Γ 2 {\displaystyle \Gamma _{2}} is the second-order virial parameter, which depends on the parameters of the Morse oscillator in addition to the absolute temperature; ν {\displaystyle \nu } is the fractional volume of the system. === Ultrarelativistic equation of state === An ultrarelativistic fluid has equation of state p = ρ m c s 2 {\displaystyle p=\rho _{m}c_{s}^{2}} where p {\displaystyle p} is the pressure, ρ m {\displaystyle \rho _{m}} is the mass density, and c s {\displaystyle c_{s}} is the speed of sound. === Ideal Bose equation of state === The equation of state for an ideal Bose gas is p V m = R T Li α + 1 ⁡ ( z ) ζ ( α ) ( T T c ) α {\displaystyle pV_{m}=RT~{\frac {\operatorname {Li} _{\alpha +1}(z)}{\zeta (\alpha )}}\left({\frac {T}{T_{c}}}\right)^{\alpha }} where α is an exponent specific to the system (e.g.
in the absence of a potential field, α = 3/2), z is exp(μ/kBT) where μ is the chemical potential, Li is the polylogarithm, ζ is the Riemann zeta function, and Tc is the critical temperature at which a Bose–Einstein condensate begins to form. === Jones–Wilkins–Lee equation of state for explosives (JWL equation) === The equation of state from Jones–Wilkins–Lee is used to describe the detonation products of explosives. p = A ( 1 − ω R 1 V ) exp ⁡ ( − R 1 V ) + B ( 1 − ω R 2 V ) exp ⁡ ( − R 2 V ) + ω e 0 V {\displaystyle p=A\left(1-{\frac {\omega }{R_{1}V}}\right)\exp(-R_{1}V)+B\left(1-{\frac {\omega }{R_{2}V}}\right)\exp \left(-R_{2}V\right)+{\frac {\omega e_{0}}{V}}} The ratio V = ρ e / ρ {\displaystyle V=\rho _{e}/\rho } is defined by using ρ e {\displaystyle \rho _{e}} , which is the density of the explosive (solid part), and ρ {\displaystyle \rho } , which is the density of the detonation products. The parameters A {\displaystyle A} , B {\displaystyle B} , R 1 {\displaystyle R_{1}} , R 2 {\displaystyle R_{2}} and ω {\displaystyle \omega } are given by several references. In addition, the initial density (solid part) ρ 0 {\displaystyle \rho _{0}} , speed of detonation V D {\displaystyle V_{D}} , Chapman–Jouguet pressure P C J {\displaystyle P_{CJ}} and the chemical energy per unit volume of the explosive e 0 {\displaystyle e_{0}} are given in such references. These parameters are obtained by fitting the JWL-EOS to experimental results; typical fitted parameters for common explosives are also tabulated in those references. === Others === Tait equation for water and other liquids. Several equations are referred to as the Tait equation. Murnaghan equation of state Birch–Murnaghan equation of state Stacey–Brennan–Irvine equation of state Modified Rydberg equation of state Adapted polynomial equation of state Johnson–Holmquist equation of state Mie–Grüneisen equation of state Anton–Schmidt equation of state State-transition equation == See also == Gas laws Departure function Table of thermodynamic equations Real gas Cluster expansion Polytrope == References == == External links ==
Wikipedia/Equations_of_state
Electrical resistivity tomography (ERT) or electrical resistivity imaging (ERI) is a geophysical technique for imaging sub-surface structures from electrical resistivity measurements made at the surface, or by electrodes in one or more boreholes. If the electrodes are suspended in the boreholes, deeper sections can be investigated. It is closely related to the medical imaging technique electrical impedance tomography (EIT), and mathematically is the same inverse problem. In contrast to medical EIT, however, ERT is essentially a direct current method. A related geophysical method, induced polarization (or spectral induced polarization), measures the transient response and aims to determine the subsurface chargeability properties. Electrical resistivity measurements can be used for identification and quantification of depth of groundwater, detection of clays, and measurement of groundwater conductivity. == History == The technique evolved from techniques of electrical prospecting that predate digital computers, where layers or anomalies were sought rather than images. Early work on the mathematical problem in the 1930s assumed a layered medium (see for example Langer, Slichter). Andrey Nikolayevich Tikhonov, who is best known for his work on regularization of inverse problems, also worked on this problem. He explained in detail how to solve the ERT problem in the simple case of a two-layered medium. During the 1940s, he collaborated with geophysicists and, without the aid of computers, they discovered large deposits of copper. As a result, they were awarded a State Prize of the Soviet Union. When adequate computers became widely available, the inverse problem of ERT could be solved numerically. The work of Loke and Barker at Birmingham University was among the first such solutions, and their approach is still widely used. As ERT has advanced from 1D to 2D and nowadays 3D surveys, its range of applications has widened to include fault investigation, groundwater table investigation, soil moisture content determination and many others. In industrial process imaging ERT can be used in a similar fashion to medical EIT, to image the distribution of conductivity in mixing vessels and pipes. In this context it is usually called Electrical Resistance Tomography, emphasising the quantity that is measured rather than imaged. == Operating procedure == Soil resistivity, measured in ohm-centimeters (Ω⋅cm), varies with moisture content and temperature. In general, an increase in soil moisture results in a reduction in soil resistivity. The pore fluid provides the only electrical path in sands, while both the pore fluid and the surface charged particles provide electrical paths in clays. Resistivities of wet fine-grained soils are generally much lower than those of wet coarse-grained soils. The difference in resistivity between a soil in a dry and in a saturated condition may be several orders of magnitude. The method of measuring subsurface resistivity involves placing four electrodes in the ground in a line at equal spacing, applying a measured AC current to the outer two electrodes, and measuring the AC voltage between the inner two electrodes. A measured resistance is calculated by dividing the measured voltage by the measured current. This resistance is then multiplied by a geometric factor that includes the spacing between each electrode to determine the apparent resistivity.
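As a concrete illustration of that last step: for the common Wenner configuration, with four collinear electrodes at equal spacing a, the geometric factor is 2πa, so the apparent resistivity follows directly from the measured voltage and current. The survey numbers in the sketch below are hypothetical, chosen only to show the arithmetic.

```python
import math

# Apparent resistivity for a Wenner array (four collinear electrodes with
# equal spacing a). For this geometry the geometric factor is k = 2*pi*a,
# so rho_a = 2*pi*a * (V / I). The readings below are hypothetical.
def wenner_apparent_resistivity(a_m, voltage_v, current_a):
    resistance = voltage_v / current_a       # measured resistance, ohms
    return 2.0 * math.pi * a_m * resistance  # apparent resistivity, ohm-m

rho_a = wenner_apparent_resistivity(a_m=3.0, voltage_v=0.125, current_a=0.5)
print(f"apparent resistivity: {rho_a:.1f} ohm-m")  # about 4.7 ohm-m
```

Other arrays (Schlumberger, dipole-dipole) use the same idea with a different geometric factor; the inversion software then fits a resistivity model to many such apparent-resistivity readings.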
Electrode spacings of 0.75, 1.5, 3.0, 6.0, and 12.0 m are typically used for shallow depths (<10 m) of investigation. Greater electrode spacings of 1.5, 3.0, 6.0, 15.0, 30.0, 100.0, and 150.0 m are typically used for deeper investigations. The depth of investigation is typically less than the maximum electrode spacing. Water is introduced to the electrode holes as the electrodes are driven into the ground to improve electrical contact. == Applications == ERT is used to create images of various subsurface conditions and structures. It has applications in numerous fields, including: Environmental Studies: Groundwater Exploration: ERT helps locate underground aquifers and assess water quality. Contaminant Mapping: ERT is used to monitor and delineate the spread of contaminants in soil and groundwater. Landfill Monitoring: ERT monitors landfill conditions, gas generation and migration, and leachate pathways. Geotechnical Engineering: Site Investigation: ERT is used to survey soil and rock properties and existing underground infrastructure in construction projects. Foundation Assessment: ERT can evaluate the condition of foundations, detect voids, and assess load-bearing capacity. Sinkhole Detection: ERT can identify subsurface voids that may lead to sinkholes. Archaeology and Cultural Heritage: Buried Archaeological Features: ERT can detect buried structures, artefacts, and archaeological sites. Structural Integrity of Monuments: ERT helps assess the condition of historic buildings and structures. Mining and Mineral Exploration: Mineral Deposits: ERT can delineate the boundaries and characteristics of ore bodies. Cave Detection: ERT is used to locate caves and karst features in mining areas. Hydrogeology: Aquifer Mapping: ERT is employed to create detailed maps of subsurface aquifers and their properties. Saltwater Intrusion Monitoring: ERT helps detect and monitor the encroachment of saltwater into freshwater aquifers. Engineering and Infrastructure: Tunnel and Dam Assessment: ERT assesses the structural integrity of tunnels and dams. Pipeline and Cable Route Surveys: It helps identify subsurface utilities and potential hazards. Landslide Hazard Assessment: ERT can detect subsurface slip planes and unstable slopes. Levee and Embankment Assessment: It assesses the structural integrity of levees and embankments. Building Health Inspections: ERT is used to examine the condition of foundations and other underground parts of buildings to guide upkeep and renovations. Oil and Gas Exploration: Reservoir Characterization: ERT assists in understanding subsurface reservoir properties. Monitoring Fluid Migration: ERT is used to track the movement of fluids in the subsurface during drilling and production. Agriculture: Soil Moisture Mapping: ERT helps assess soil moisture content for precision agriculture. Root Zone Imaging: ERT is used to visualize plant root structures and soil-root interactions. == See also == Electrical capacitance tomography Electrical impedance tomography Three-dimensional electrical capacitance tomography Magnetotellurics Seismo-electromagnetics Telluric current Vertical electrical sounding Geophysical Imaging Ground-penetrating radar == References == Langer, R. E. (1933-10-01). "An inverse problem in differential equations" (PDF). Bulletin of the American Mathematical Society. 39 (10). American Mathematical Society (AMS): 814–821. doi:10.1090/s0002-9904-1933-05752-x. ISSN 0002-9904. Slichter, L. B. (1933).
"The Interpretation of the Resistivity Prospecting Method for Horizontal Structures". Physics. Vol. 4, no. 9. AIP Publishing. pp. 307–322. doi:10.1063/1.1745198. ISSN 0148-6349. Langer, R. E. (1936-10-01). "On the determination of earth conductivity from observed surface potentials" (PDF). Bulletin of the American Mathematical Society. 42 (10). American Mathematical Society (AMS): 747–755. doi:10.1090/s0002-9904-1936-06420-7. ISSN 0002-9904. Tikhonov, A. N. (1949). О единственности решения задачи электроразведки. Doklady Akademii Nauk SSSR (in Russian). 69 (6): 797–800. A.P. Calderón, On an inverse boundary value problem, in Seminar on Numerical Analysis and its Applications to Continuum Physics, Rio de Janeiro. 1980. Scanned copy of paper Loke, M.H. (2004). Tutorial: 2-D and 3-D electrical imaging surveys (PDF). Retrieved 2007-06-11. Loke, M.H.; Barker, R.D. (1996). "Rapid least-squares inversion of apparent resistivity pseudosections by a quasi-Newton method". Geophysical Prospecting. 44 (1). Wiley: 131–152. Bibcode:1996GeopP..44..131L. doi:10.1111/j.1365-2478.1996.tb00142.x. ISSN 0016-8025. Loke, M.H.; Barker, R.D. (1996). "Practical techniques for 3D resistivity surveys and data inversion". Geophysical Prospecting. 44 (3). Wiley: 499–523. Bibcode:1996GeopP..44..499L. doi:10.1111/j.1365-2478.1996.tb00162.x. ISSN 0016-8025.
Wikipedia/Electrical_resistivity_tomography
The preliminary reference Earth model (PREM) plots the average of Earth's properties by depth. It includes a table of Earth properties, including elastic properties, attenuation, density, pressure, and gravity. PREM has been widely used as the basis for seismic tomography and related global geophysical models. It incorporates anelastic dispersion and anisotropy and therefore it is frequency-dependent and transversely isotropic for the upper mantle. PREM was developed by Adam M. Dziewonski and Don L. Anderson in response to guidelines of a "Standard Earth Model Committee" of the International Association of Geodesy (IAG) and the International Association of Seismology and Physics of the Earth's Interior (IASPEI). Other Earth reference models include iasp91 and ak135. == References == == External links == IRIS EMC - Reference Earth Models Table of values in PREM model Model formulas, inputs, and overview
Wikipedia/Preliminary_reference_Earth_model
Space physics, also known as space plasma physics, is the study of naturally occurring plasmas within Earth's upper atmosphere and the rest of the Solar System. It includes the topics of aeronomy, aurorae, planetary ionospheres and magnetospheres, radiation belts, and space weather (collectively known as solar-terrestrial physics). It also encompasses the discipline of heliophysics, which studies the solar physics of the Sun, its solar wind, the coronal heating problem, solar energetic particles, and the heliosphere. Space physics is both a pure science and an applied science, with applications in radio transmission, spacecraft operations (particularly communications and weather satellites), and in meteorology. Important physical processes in space physics include magnetic reconnection, synchrotron radiation, ring currents, Alfvén waves and plasma instabilities. It is studied using direct in situ measurements by sounding rockets and spacecraft, indirect remote sensing of electromagnetic radiation produced by the plasmas, and theoretical magnetohydrodynamics. Closely related fields include plasma physics, which studies more fundamental physics and artificial plasmas; atmospheric physics, which investigates lower levels of Earth's atmosphere; and astrophysical plasmas, which are natural plasmas beyond the Solar System. == History == Space physics can be traced back to the Chinese, who discovered the principle of the compass but did not understand how it worked. During the 16th century, in De Magnete, William Gilbert gave the first description of the Earth's magnetic field, showing that the Earth itself is a great magnet, which explained why a compass needle points north. Deviations of the compass needle, known as magnetic declination, were recorded on navigation charts, and a detailed study of the declination near London by watchmaker George Graham resulted in the discovery of irregular magnetic fluctuations that we now call magnetic storms, so named by Alexander von Humboldt. Carl Friedrich Gauss and Wilhelm Weber made very careful measurements of Earth's magnetic field which showed systematic variations and random fluctuations. This suggested that the Earth was not an isolated body, but was influenced by external forces – especially from the Sun and the appearance of sunspots. A relationship between individual aurorae and accompanying geomagnetic disturbances was noticed by Anders Celsius and Olof Peter Hiorter in 1747. In 1860, Elias Loomis (1811–1889) showed that the highest incidence of aurora is seen inside an oval of 20–25 degrees around the magnetic pole. In 1881, Hermann Fritz published a map of the "isochasms", lines of equal auroral frequency. In the late 1870s, Henri Becquerel offered the first physical explanation for the statistical correlations that had been recorded: sunspots must be a source of fast protons, which are guided to the poles by the Earth's magnetic field. In the early twentieth century, these ideas led Kristian Birkeland to build a terrella, a laboratory device which simulates the Earth's magnetic field in a vacuum chamber and uses a cathode ray tube to simulate the energetic particles which compose the solar wind. A theory began to be formulated about the interaction between the Earth's magnetic field and the solar wind. Space physics began in earnest with the first in situ measurements in the early 1950s, when a team led by James Van Allen launched the first rockets to a height around 110 km.
Geiger counters on board the second Soviet satellite, Sputnik 2, and the first US satellite, Explorer 1, detected the Earth's radiation belts, later named the Van Allen belts. The boundary between the Earth's magnetic field and interplanetary space was studied by Explorer 10. Later spacecraft traveled outside Earth orbit and studied the composition and structure of the solar wind in much greater detail. These include Wind (1994), the Advanced Composition Explorer (ACE), Ulysses, the Interstellar Boundary Explorer (IBEX, 2008), and the Parker Solar Probe. Other spacecraft study the Sun, such as STEREO and the Solar and Heliospheric Observatory (SOHO). == See also == Effects of spaceflight on the human body Space environment Space science Weightlessness == References == == Further reading == Kallenrode, May-Britt (2004). Space Physics: An Introduction to Plasmas and Particles in the Heliosphere and Magnetospheres. Springer. ISBN 978-3-540-20617-0. Gombosi, Tamas (1998). Physics of the Space Environment. New York: Cambridge University Press. ISBN 978-0-521-59264-2. == External links == Media related to Space physics at Wikimedia Commons
Wikipedia/Solar-terrestrial_physics
Earth science or geoscience includes all fields of natural science related to the planet Earth. It is a branch of science dealing with the physical, chemical, and biological constitution of, and the synergistic linkages among, Earth's four spheres: the biosphere, hydrosphere/cryosphere, atmosphere, and geosphere (or lithosphere). Earth science can be considered to be a branch of planetary science but with a much older history. == Geology == Geology is broadly the study of Earth's structure, substance, and processes. Geology is largely the study of the lithosphere, or Earth's surface, including the crust and rocks. It includes the physical characteristics and processes that occur in the lithosphere as well as how they are affected by geothermal energy. It incorporates aspects of chemistry, physics, and biology as elements of geology interact. Historical geology is the application of geology to interpret Earth history and how it has changed over time. Geochemistry studies the chemical components and processes of the Earth. Geophysics studies the physical properties of the Earth. Paleontology studies fossilized biological material in the lithosphere. Planetary geology studies geoscience as it pertains to extraterrestrial bodies. Geomorphology studies the origin of landscapes. Structural geology studies the deformation of rocks to produce mountains and lowlands. Resource geology studies how energy resources can be obtained from minerals. Environmental geology studies how pollution and contaminants affect soil and rock. Mineralogy is the study of minerals and includes the study of mineral formation, crystal structure, hazards associated with minerals, and the physical and chemical properties of minerals. Petrology is the study of rocks, including the formation and composition of rocks. Petrography is a branch of petrology that studies the typology and classification of rocks. == Earth's interior == Plate tectonics, mountain ranges, volcanoes, and earthquakes are geological phenomena that can be explained in terms of physical and chemical processes in the Earth's crust. Beneath the Earth's crust lies the mantle, which is heated by the radioactive decay of heavy elements. Although solid on short timescales, the mantle deforms like a very viscous fluid and is in a state of semi-perpetual convection. This convection process causes the lithospheric plates to move, albeit slowly. The resulting process is known as plate tectonics. Areas of the crust where new crust is created are called divergent boundaries, those where it is brought back into the Earth are convergent boundaries, and those where plates slide past each other, but no new lithospheric material is created or destroyed, are referred to as transform (or conservative) boundaries. Earthquakes result from the movement of the lithospheric plates, and they often occur near convergent boundaries where parts of the crust are forced into the earth as part of subduction. Plate tectonics might be thought of as the process by which the Earth is resurfaced. As the result of seafloor spreading, new crust and lithosphere is created by the flow of magma from the mantle to the near surface, through fissures, where it cools and solidifies. Through subduction, oceanic crust and lithosphere return to the convecting mantle. Volcanoes result primarily from the melting of subducted crust material. Crust material that is forced into the asthenosphere melts, and some portion of the melted material becomes light enough to rise to the surface—giving birth to volcanoes.
== Atmospheric science == Atmospheric science initially developed in the late-19th century as a means to forecast the weather through meteorology, the study of weather. Atmospheric chemistry was developed in the 20th century to measure air pollution and expanded in the 1970s in response to acid rain. Climatology studies the climate and climate change. The troposphere, stratosphere, mesosphere, thermosphere, and exosphere are the five layers which make up Earth's atmosphere. About 75% of the mass of the atmosphere is located within the troposphere, the lowest layer. In all, the atmosphere is made up of about 78.0% nitrogen, 20.9% oxygen, and 0.92% argon, and small amounts of other gases including CO2 and water vapor. Water vapor and CO2 cause the Earth's atmosphere to catch and hold the Sun's energy through the greenhouse effect. This makes Earth's surface warm enough for liquid water and life. In addition to trapping heat, the atmosphere also protects living organisms by shielding the Earth's surface from cosmic rays. The magnetic field—created by the internal motions of the core—produces the magnetosphere which protects Earth's atmosphere from the solar wind. As the Earth is 4.5 billion years old, it would have lost its atmosphere by now if there were no protective magnetosphere. == Hydrology == Hydrology is the study of the hydrosphere and the movement of water on Earth. It emphasizes the study of how humans use and interact with freshwater supplies. Study of water's movement is closely related to geomorphology and other branches of Earth science. Applied hydrology involves engineering to maintain aquatic environments and distribute water supplies. Subdisciplines of hydrology include oceanography, hydrogeology, ecohydrology, and glaciology. Oceanography is the study of oceans. Hydrogeology is the study of groundwater. It includes the mapping of groundwater supplies and the analysis of groundwater contaminants. Applied hydrogeology seeks to prevent contamination of groundwater and mineral springs and make it available as drinking water. The earliest exploitation of groundwater resources dates back to 3000 BC, and hydrogeology as a science was developed by hydrologists beginning in the 17th century. Ecohydrology is the study of ecological systems in the hydrosphere. It can be divided into the physical study of aquatic ecosystems and the biological study of aquatic organisms. Ecohydrology includes the effects that organisms and aquatic ecosystems have on one another as well as how these ecosystems are affected by humans. Glaciology is the study of the cryosphere, including glaciers and coverage of the Earth by ice and snow. Concerns of glaciology include access to glacial freshwater, mitigation of glacial hazards, obtaining resources that exist beneath frozen land, and addressing the effects of climate change on the cryosphere. == Ecology == Ecology is the study of the biosphere. This includes the study of nature and of how living things interact with the Earth and one another, and the consequences of that. It considers how living things use resources such as oxygen, water, and nutrients from the Earth to sustain themselves. It also considers how humans and other living creatures cause changes to nature. == Physical geography == Physical geography is the study of Earth's systems and how they interact with one another as part of a single self-contained system.
It incorporates astronomy, mathematical geography, meteorology, climatology, geology, geomorphology, biology, biogeography, pedology, and soils geography. Physical geography is distinct from human geography, which studies the human populations on Earth, though it does include human effects on the environment. == Methodology == Methodologies vary depending on the nature of the subjects being studied. Studies typically fall into one of three categories: observational, experimental, or theoretical. Earth scientists often conduct sophisticated computer analysis or visit an interesting location to study earth phenomena (e.g. Antarctica or hot spot island chains). A foundational idea in Earth science is the notion of uniformitarianism, which states that "ancient geologic features are interpreted by understanding active processes that are readily observed." In other words, any geologic processes at work in the present have operated in the same ways throughout geologic time. This enables those who study Earth history to apply knowledge of how the Earth's processes operate in the present to gain insight into how the planet has evolved and changed throughout long history. == Earth's spheres == In Earth science, it is common to conceptualize the Earth's surface as consisting of several distinct layers, often referred to as spheres: the lithosphere, the hydrosphere, the atmosphere, and the biosphere, corresponding respectively to rocks, water, air, and life. This concept of spheres is a useful tool for understanding the Earth's surface and its various processes. Also included by some are the cryosphere (corresponding to ice) as a distinct portion of the hydrosphere and the pedosphere (corresponding to soil) as an active and intermixed sphere. The following fields of science are generally categorized within the Earth sciences: Geology describes the rocky parts of the Earth's crust (or lithosphere) and its historic development. Major subdisciplines are mineralogy and petrology, geomorphology, paleontology, stratigraphy, structural geology, engineering geology, and sedimentology. Physical geography focuses on geography as an Earth science. Physical geography is the study of Earth's seasons, climate, atmosphere, soil, streams, landforms, and oceans. Physical geography can be divided into several branches or related fields, as follows: geomorphology, biogeography, environmental geography, palaeogeography, climatology, meteorology, coastal geography, hydrology, ecology, glaciology. Geophysics and geodesy investigate the shape of the Earth, its reaction to forces and its magnetic and gravity fields. Geophysicists explore the Earth's core and mantle as well as the tectonic and seismic activity of the lithosphere. Geophysics is commonly used to supplement the work of geologists in developing a comprehensive understanding of crustal geology, particularly in mineral and petroleum exploration. Seismologists use geophysics to understand plate tectonic movement, as well as predict seismic activity. Geochemistry studies the processes that control the abundance, composition, and distribution of chemical compounds and isotopes in geologic environments. Geochemists use the tools and principles of chemistry to study the Earth's composition, structure, processes, and other physical aspects. Major subdisciplines are aqueous geochemistry, cosmochemistry, isotope geochemistry and biogeochemistry. Soil science covers the outermost layer of the Earth's crust that is subject to soil formation processes (or pedosphere).
Major subdivisions in this field of study include edaphology and pedology. Ecology covers the interactions between organisms and their environment. This field of study differentiates the study of Earth from other planets in the Solar System, Earth being the only planet teeming with life. Hydrology, oceanography and limnology are studies which focus on the movement, distribution, and quality of the water and involve all the components of the hydrologic cycle on the Earth and its atmosphere (or hydrosphere). "Sub-disciplines of hydrology include hydrometeorology, surface water hydrology, hydrogeology, watershed science, forest hydrology, and water chemistry." Glaciology covers the icy parts of the Earth (or cryosphere). Atmospheric sciences cover the gaseous parts of the Earth (or atmosphere) between the surface and the exosphere (about 1000 km). Major subdisciplines include meteorology, climatology, atmospheric chemistry, and atmospheric physics. == See also == == References == === Sources === == Further reading == == External links == Earth Science Picture of the Day, a service of Universities Space Research Association, sponsored by NASA Goddard Space Flight Center. Geoethics in Planetary and Space Exploration. Geology Buzz: Earth Science Archived 2021-11-04 at the Wayback Machine
Wikipedia/Geosciences
Differential Global Positioning Systems (DGPSs) supplement and enhance the positional data available from global navigation satellite systems (GNSSs). A DGPS can increase accuracy of positional data by about a thousandfold, from approximately 15 metres (49 ft) to 1–3 centimetres (about 1⁄2 to 1 1⁄4 in). DGPSs consist of networks of fixed-position, ground-based reference stations. Each reference station calculates the difference between its highly accurate known position and its less accurate satellite-derived position. The stations broadcast this data locally—typically using ground-based transmitters of shorter range. Non-fixed (mobile) receivers use it to correct their position by the same amount, thereby improving their accuracy. The United States Coast Guard (USCG) previously ran DGPS in the United States on longwave radio frequencies between 285 kHz and 325 kHz near major waterways and harbors. It was discontinued in March 2022. The USCG's DGPS was known as NDGPS (Nationwide DGPS) and was jointly administered by the Coast Guard and the Army Corps of Engineers. It consisted of broadcast sites located throughout the inland and coastal portions of the United States including Alaska, Hawaii and Puerto Rico. The Canadian Coast Guard (CCG) also ran a separate DGPS system, but discontinued its use on December 15, 2022. Other countries have their own DGPS. A similar system which transmits corrections from orbiting satellites instead of ground-based transmitters is called a Wide-Area DGPS (WADGPS) satellite-based augmentation system. == History == When GPS was first being put into service, the US military was concerned about the possibility of enemy forces using the globally available GPS signals to guide their own weapon systems. Originally, the government thought the "coarse acquisition" (C/A) signal would give only about 100-metre (330 ft) accuracy, but with improved receiver designs, the actual accuracy was 20 to 30 metres (66 to 98 ft). Starting in March 1990, to avoid providing such unexpected accuracy, the C/A signal transmitted on the L1 frequency (1575.42 MHz) was deliberately degraded by offsetting its clock signal by a random amount, equivalent to about 100 metres (330 ft) of distance. This technique, known as Selective Availability, or SA for short, seriously degraded the usefulness of the GPS signal for non-military users. More accurate guidance was possible for users of dual-frequency GPS receivers which also received the L2 frequency (1227.6 MHz), but the L2 transmission, intended for military use, was encrypted and was available only to authorized users with the decryption keys. This presented a problem for civilian users who relied upon ground-based radio navigation systems such as LORAN, VOR and NDB, which cost millions of dollars each year to maintain. The advent of a global navigation satellite system (GNSS) could provide greatly improved accuracy and performance at a fraction of the cost. The accuracy inherent in the SA signal, however, was too poor to make this realistic. The military received multiple requests from the Federal Aviation Administration (FAA), United States Coast Guard (USCG) and United States Department of Transportation (DOT) to set SA aside to enable civilian use of GNSS, but remained steadfast in its objection on grounds of security. Throughout the early to mid 1980s, a number of agencies worked to develop a solution to the SA "problem".
Since the SA signal was changed slowly, the effect of its offset on positioning was relatively fixed – that is, if the offset was "100 meters to the east", that offset would be true over a relatively wide area. This suggested that broadcasting this offset to local GPS receivers could eliminate the effects of SA, resulting in measurements closer to GPS's theoretical performance, around 15 metres (49 ft). Additionally, another major source of errors in a GPS fix is due to transmission delays in the ionosphere, which could also be measured and corrected for in the broadcast. This offered an improvement to about 5 metres (16 ft) accuracy, more than enough for most civilian needs. The US Coast Guard was one of the more aggressive proponents of the DGPS, experimenting with the system on an ever-wider basis throughout the late 1980s and early 1990s. These signals were broadcast on marine longwave frequencies, which could be received on existing radiotelephones and fed into suitably equipped GPS receivers. Almost all major GPS vendors offered units with DGPS inputs, not only for the USCG signals, but also aviation units on either VHF or commercial AM radio bands. "Production quality" DGPS signals began to be sent out on a limited basis in 1996, and the network was rapidly expanded to cover most US ports of call, as well as the Saint Lawrence Seaway in partnership with the Canadian Coast Guard. Plans were put into place to expand the system across the US, but this would not be easy. The quality of the DGPS corrections generally fell with distance, and large transmitters capable of covering large areas tend to cluster near cities. This meant that lower-population areas, notably in the midwest and Alaska, would have little coverage by ground-based DGPS. As of November 2013 the USCG's national DGPS consisted of 85 broadcast sites which provided dual coverage to almost the entire US coastline and inland navigable waterways including Alaska, Hawaii, and Puerto Rico. In addition the system provided single or dual coverage to a majority of the inland portion of the United States. Instead, the FAA (and others) started studying broadcasting the signals across the entire hemisphere from communications satellites in geostationary orbit. This led to the Wide Area Augmentation System (WAAS) and similar systems, although these are generally not referred to as DGPS, or alternatively, "wide-area DGPS". WAAS offers accuracy similar to the USCG's ground-based DGPS networks, and there has been some argument that the latter will be turned off as WAAS becomes fully operational. By the mid-1990s it was clear that the SA system was no longer useful in its intended role. DGPS would render it ineffective over the US, where it was considered most needed. Additionally, during the Gulf War of 1990–1991, SA had been temporarily turned off because Allied troops were using commercial GPS receivers. This showed that leaving SA turned off could be useful to the United States. In 2000, an executive order by President Bill Clinton turned it off permanently. Nevertheless, by this point DGPS had evolved into a system for providing more accuracy than even a non-SA GPS signal could provide on its own. There are several other sources of error which share the same characteristics as SA in that they are the same over large areas and for "reasonable" amounts of time. These include the ionospheric effects mentioned earlier, as well as errors in the satellite position ephemeris data and clock drift on the satellites.
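The differential principle behind all of this can be sketched with a toy calculation: a station at a known position compares geometric ranges with its measured pseudoranges and broadcasts the differences, which a nearby rover subtracts from its own measurements. The positions, the single shared error term, and the omission of receiver clock bias and noise are all simplifying assumptions; real systems send per-satellite corrections in a standardized format such as RTCM SC-104.

```python
import numpy as np

# Toy example of a differential correction. Coordinates are ECEF metres and
# are invented for illustration; receiver clock bias and noise are ignored.
sats = np.array([
    [15600e3,  7540e3, 20140e3],
    [18760e3,  2750e3, 18610e3],
    [17610e3, 14630e3, 13480e3],
])
station = np.array([3875e3, 333e3, 5028e3])    # surveyed (known) position
rover = station + np.array([10e3, -5e3, 2e3])  # nearby rover's true position

def ranges(receiver):
    """Geometric distance from the receiver to each satellite."""
    return np.linalg.norm(sats - receiver, axis=1)

common_error = 42.0   # metres of shared error (SA, ephemeris, ionosphere)
station_pr = ranges(station) + common_error    # station's measured pseudoranges
rover_pr = ranges(rover) + common_error        # rover's measured pseudoranges

# Station broadcasts: known geometric range minus measured pseudorange.
corrections = ranges(station) - station_pr     # about -42 m per satellite

# Rover applies the corrections; the shared error cancels.
residual = (rover_pr + corrections) - ranges(rover)
print("residual error after correction (m):", residual)   # ~0
```

In practice the errors are only nearly common, not identical, which is why the corrected accuracy degrades with the station-to-user distance discussed next.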
Depending on the amount of data being sent in the DGPS correction signal, correcting for these effects can reduce the error significantly, the best implementations offering accuracies of under 10 centimetres (3.9 in). In addition to continued deployments of the USCG and FAA sponsored systems, a number of vendors have created commercial DGPS services, selling their signal (or receivers for it) to users who require better accuracy than the nominal 15 meters GPS offers. Almost all commercial GPS units, even hand-held units, now offer DGPS data inputs, and many also support WAAS directly. To some degree, a form of DGPS is now a natural part of most GPS operations. == Operation == A reference station calculates differential corrections for its own location and time. Users may be up to 200 nautical miles (370 km) from the station, however, and some of the compensated errors vary with location: specifically, satellite ephemeris errors and those introduced by ionospheric and tropospheric distortions. For this reason, the accuracy of DGPS decreases with distance from the reference station. The problem can be aggravated if the user and the station lack "inter-visibility"—when they are unable to see the same satellites. == Accuracy == The United States Federal Radionavigation Plan and the IALA Recommendation on the Performance and Monitoring of DGNSS Services in the Band 283.5–325 kHz cite the United States Department of Transportation's 1993 estimated error growth of 0.67 metres per 100 kilometres (3.5 ft/100 mi) from the broadcast site, but measurements of accuracy across the Atlantic, in Portugal, suggest a degradation of just 0.22 m/100 km (1.2 ft/100 mi). == Variations == DGPS can refer to any type of Ground-Based Augmentation System (GBAS). There are many operational systems in use throughout the world; according to the US Coast Guard, 47 countries operate systems similar to the US NDGPS (Nationwide Differential Global Positioning System). A list can be found at the World DGPS Database for Dxers. === European DGPS Network === The European DGPS network was developed mainly by the Finnish and Swedish maritime administrations in order to improve safety in the archipelago between the two countries. In the UK and Ireland, the system was implemented as a maritime navigation aid to fill the gap left by the demise of the Decca Navigator System in 2000. With a network of 12 transmitters sited around the coastline and three control stations, it was set up in 1998 by the countries' respective General Lighthouse Authorities (GLA) — Trinity House covering England, Wales and the Channel Islands, the Northern Lighthouse Board covering Scotland and the Isle of Man, and the Commissioners of Irish Lights covering the whole of Ireland. Transmitting on the 300-kHz band, the system underwent testing, and two additional transmitters were added before the system was declared operational in 2002. The system was decommissioned in March 2022. Effective Solutions provides details and a map of European differential beacon transmitters. === United States NDGPS === The United States Department of Transportation, in conjunction with the Federal Highway Administration, the Federal Railroad Administration and the National Geodetic Survey, appointed the United States Coast Guard as the maintaining agency for the U.S. Nationwide DGPS network (NDGPS). The system is an expansion of the previous Maritime Differential GPS (MDGPS), which the Coast Guard began in the late 1980s and completed in March 1999.
MDGPS covered only coastal waters, the Great Lakes, and the Mississippi River inland waterways, while NDGPS expands this to include complete coverage of the continental United States. The centralized command and control unit is the USCG Navigation Center, based in Alexandria, VA. There are currently 85 NDGPS sites in the US network, administered by the U.S. Department of Homeland Security Navigation Center. In 2015, the USCG and the United States Army Corps of Engineers (USACE) sought comments on a planned phasing-out of the U.S. DGPS. In response to the comments received, a subsequent 2016 Federal Register notice announced that 46 stations would remain in service and "available to users in the maritime and coastal regions". In spite of this decision, USACE decommissioned its remaining 7 sites and, in March 2018, the USCG announced that it would decommission its remaining stations by 2020. As of June 2020, all NDGPS service has been discontinued, as it is no longer deemed a necessity owing to the removal of selective availability in 2000 and the introduction of newer generations of GPS satellites. === Canadian DGPS === The Canadian system was similar to the US system and was primarily for maritime usage, covering the Atlantic and Pacific coasts as well as the Great Lakes and Saint Lawrence Seaway. It was discontinued as a service on December 15, 2022. === Australia === Australia has run three DGPS systems: one mainly for marine navigation, broadcasting its signal on the long-wave band; another used for land surveys and land navigation, with corrections broadcast on the commercial FM radio band. As of 2011, a third system, at Sydney Airport, was undergoing testing for precision landing of aircraft, as a backup to the Instrument Landing System at least until 2015. It is called the Ground Based Augmentation System, and corrections to aircraft position are broadcast via the aviation VHF band. The marine DGPS service of 16 ground stations covering the Australian coast was discontinued effective July 1, 2020. Improved multichannel GPS capabilities, and signal sources from multiple providers (GPS, GLONASS, Galileo and BeiDou), were cited as providing better navigational accuracy than could be obtained from GPS + DGPS. An Australian Satellite-Based Augmentation System (SBAS), the Southern Positioning Augmentation Network (SouthPAN), offers higher-accuracy positioning for GNSS users. == Post-processing == Post-processing is used in differential GPS to obtain precise positions of unknown points by relating them to known points such as survey markers. The GPS measurements are usually stored in computer memory in the GPS receivers, and are subsequently transferred to a computer running the GPS post-processing software. The software computes baselines using simultaneous measurement data from two or more GPS receivers. The baselines represent a three-dimensional line drawn between the two points occupied by each pair of GPS antennas. The post-processed measurements allow more precise positioning, because most GPS errors affect each receiver nearly equally, and therefore can be cancelled out in the calculations. Differential GPS measurements can also be computed in real time by some GPS receivers if they receive a correction signal using a separate radio receiver, for example in Real Time Kinematic (RTK) surveying or navigation. Improving GPS positioning does not always require simultaneous measurements from two or more receivers; it can also be done through special use of a single device. 
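To make the error cancellation described above concrete, here is a minimal sketch in Python. Everything numeric in it is invented for illustration – the satellite and receiver coordinates, and a single lumped 37.5 m common-mode error – and a real implementation would difference pseudoranges from several satellites and handle receiver clock bias and broadcast formats such as RTCM SC-104.

```python
import numpy as np

def geometric_range(sat_pos, rcv_pos):
    # straight-line distance between satellite and receiver
    return np.linalg.norm(sat_pos - rcv_pos)

# Hypothetical positions (metres, in a local Cartesian frame):
sat = np.array([15_600e3, 7_540e3, 20_140e3])
station = np.array([0.0, 0.0, 0.0])            # surveyed reference position
rover_true = np.array([50e3, -20e3, 0.0])      # unknown to the rover

common_error = 37.5                            # lumped common-mode error (m)
station_meas = geometric_range(sat, station) + common_error
rover_meas = geometric_range(sat, rover_true) + common_error

# The station knows its own position, so it can compute the correction:
correction = geometric_range(sat, station) - station_meas   # = -37.5
corrected = rover_meas + correction
print(corrected - geometric_range(sat, rover_true))         # ~0.0: error cancelled
```

Because the reference station's position is surveyed, any error that biases both receivers equally drops out of the corrected measurement; errors that differ between the two sites, such as multipath and receiver noise, do not.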
In the 1990s, when even handheld receivers were quite expensive, some methods of quasi-differential GPS were developed that used a single receiver moved quickly between positions or around loops of 3–10 survey points. == See also == RTCM SC-104 - a standard for transferring DGPS data to a GPS receiver Assisted GPS (A-GPS) - system used primarily in GPS-equipped cellular devices to improve start-up performance GNSS augmentation GNSS enhancement == References == == External links == SiReNT information page Archived 2018-05-05 at the Wayback Machine US NDGPS fact sheet USCG Navigation Center National DGPS system USCG coverage maps Canadian Coast Guard DGPS information (English) Canadian Coast Guard DGPS information (French) Product Survey on RTK DGPS receivers for (mainly) hydrographic use DGPS Decoding Software Useful DGPS Links, Databases and Resources Worldwide database of IALA DGPS Reference stations on an interactive map Archived 2015-07-17 at the Wayback Machine
Wikipedia/Differential_GPS
Magnetostratigraphy is a geophysical correlation technique used to date sedimentary and volcanic sequences. The method works by collecting oriented samples at measured intervals throughout the section. The samples are analyzed to determine their characteristic remanent magnetization (ChRM), that is, the polarity of Earth's magnetic field at the time a stratum was deposited. This is possible because volcanic flows acquire a thermoremanent magnetization and sediments acquire a depositional remanent magnetization, both of which reflect the direction of the Earth's field at the time of formation. This technique is typically used to date sequences that generally lack fossils or interbedded igneous rock. It is particularly useful in high-resolution correlation of deep marine stratigraphy, where it allowed the validation of the Vine–Matthews–Morley hypothesis related to the theory of plate tectonics. == Technique == When measurable magnetic properties of rocks vary stratigraphically they may be the basis for related but different kinds of stratigraphic units known collectively as magnetostratigraphic units (magnetozones). The magnetic property most useful in stratigraphic work is the change in the direction of the remanent magnetization of the rocks, caused by reversals in the polarity of the Earth's magnetic field. The direction of the remanent magnetic polarity recorded in the stratigraphic sequence can be used as the basis for the subdivision of the sequence into units characterized by their magnetic polarity. Such units are called "magnetostratigraphic polarity units" or chrons. If the ancient magnetic field was oriented similarly to today's field (North Magnetic Pole near the Geographic North Pole), the strata retain a normal polarity. If the data indicate that the North Magnetic Pole was near the Geographic South Pole, the strata exhibit reversed polarity. === Polarity chron === A polarity chron, or in context chron, is the time interval between polarity reversals of Earth's magnetic field. It is the time interval represented by a magnetostratigraphic polarity unit. It represents a certain time period in geologic history where the Earth's magnetic field was in a predominantly "normal" or "reversed" position. Chrons are numbered in order starting from today and increasing in number into the past. As well as a number, each chron is divided into two parts, labelled "n" and "r", thereby showing the position of the field's polarity. Chrons are also referred to by a capital letter of a reference sequence such as "C". A chron is the time equivalent to a chronozone or a polarity zone. The term "polarity subchron" originally denoted an interval less than 200,000 years long, but it was redefined in 2020 to an approximate duration between 10,000 and 100,000 years, with "polarity chron" covering an approximate duration between 100,000 years and a million years. Other terms used are Megachron for a duration between 10⁸ and 10⁹ years, Superchron for a duration between 10⁷ and 10⁸ years, and Cryptochron for a duration less than 3×10⁴ years. ==== Chron nomenclature ==== The nomenclature for the succession of polarity intervals, especially when changes are of short duration or not universal (the earth's magnetic field is complex), is challenging, as each new discovery has to be inserted (or, if not validated, removed). The two standardised marine magnetic anomaly sequences are the "C-sequence" and "M-sequence" and cover from the Middle Jurassic to date. 
The main C polarity chron series extends backwards from the current C1n, commonly termed Brunhes, with the most recent transition at C1r, commonly termed Matuyama, at 0.773 Ma, which is the Brunhes–Matuyama reversal. The C (for Cenozoic) sequence ends in the Cretaceous Normal Superchron, termed C34n, which on age calibration began at 120.964 Ma and lasted to Chron C33r at 83.650 Ma, which defined the Santonian geologic age. The M series is defined from M0 (full label M0r) at 121.400 Ma, which is the beginning of the Aptian, to M44n.2r, which is before 171.533 Ma in the Aalenian. Subdivisions in the sequences also have specific nomenclature: C8n.2n is the second oldest normal-polarity subchron comprising normal-polarity Chron C8n, and the youngest cryptochron, the Emperor cryptochron, is named C1n-1. Certain terms in the literature, such as M-1r to describe a postulated brief reversal at about 118 Ma, are provisional. === Sampling procedures === Oriented paleomagnetic samples are collected in the field using a rock core drill, or as hand samples (chunks broken off the rock face). To average out sampling errors, a minimum of three samples is taken from each sample site. Spacing of the sample sites within a stratigraphic section depends on the rate of deposition and the age of the section. In sedimentary layers, the preferred lithologies are mudstones, claystones, and very fine-grained siltstones, because the magnetic grains are finer and more likely to orient with the ambient field during deposition. === Analytical procedures === Samples are first analyzed in their natural state to obtain their natural remanent magnetization (NRM). The NRM is then stripped away in a stepwise manner using thermal or alternating-field demagnetization techniques to reveal the stable magnetic component. Magnetic orientations of all samples from a site are then compared and their average magnetic polarity is determined with directional statistics, most commonly Fisher statistics or bootstrapping. The statistical significance of each average is evaluated. The latitudes of the virtual geomagnetic poles from those sites determined to be statistically significant are plotted against the stratigraphic level at which they were collected. These data are then abstracted to the standard black and white magnetostratigraphic columns in which black indicates normal polarity and white is reversed polarity. === Correlation and ages === Because the polarity of a stratum can only be normal or reversed, variations in the rate at which the sediment accumulated can cause the thickness of a given polarity zone to vary from one area to another. This presents the problem of how to correlate zones of like polarity between different stratigraphic sections. To avoid confusion, at least one isotopic age needs to be collected from each section. In sediments, this is often obtained from layers of volcanic ash. Failing that, one can tie a polarity to a biostratigraphic event that has been correlated elsewhere with isotopic ages. With the aid of the independent isotopic age or ages, the local magnetostratigraphic column is correlated with the Global Magnetic Polarity Time Scale (GMPTS). Because the age of each reversal shown on the GMPTS is relatively well known, the correlation establishes numerous time lines through the stratigraphic section. These ages provide relatively precise dates for features in the rocks such as fossils, changes in sedimentary rock composition, changes in depositional environment, etc. 
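A minimal sketch of how such a correlation is used in practice: given the stratigraphic levels of reversal horizons and their correlated GMPTS ages, an age for any intervening level follows by interpolation. All levels and ages below are invented for illustration, except the 0.773 Ma Brunhes–Matuyama reversal age quoted above.

```python
import numpy as np

# Hypothetical reversal horizons: stratigraphic level (m) vs. correlated GMPTS age (Ma)
level = np.array([12.0, 48.0, 95.0, 140.0])
age = np.array([0.773, 2.6, 5.2, 7.5])

# Sediment accumulation rate between successive reversals (m/Myr, i.e. mm/kyr)
rate = np.diff(level) / np.diff(age)
print(rate)                          # ~[19.7, 18.1, 19.6]

# Interpolated age of a fossil horizon found at 60 m:
print(np.interp(60.0, level, age))   # ~3.26 Ma
```

The slope of this age-versus-level relation is the sediment accumulation rate discussed in the next subsection.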
Reversal ages also constrain the ages of cross-cutting features such as faults, dikes, and unconformities. ==== Sediment accumulation rates ==== Perhaps the most powerful application of these data is to determine the rate at which the sediment accumulated. This is accomplished by plotting the age of each reversal (in millions of years ago) vs. the stratigraphic level at which the reversal is found (in meters). This provides the rate in meters per million years, which is usually rewritten in terms of millimeters per year (the same as kilometers per million years). These data are also used to model basin subsidence rates. Knowing the depth of a hydrocarbon source rock beneath the basin-filling strata allows calculation of the age at which the source rock passed through the generation window and hydrocarbon migration began. Because the ages of cross-cutting trapping structures can usually be determined from magnetostratigraphic data, a comparison of these ages will assist reservoir geologists in determining whether or not a play is likely in a given trap. Changes in sedimentation rate revealed by magnetostratigraphy are often related either to climatic factors or to tectonic developments in nearby or distant mountain ranges. Evidence to strengthen this interpretation can often be found by looking for subtle changes in the composition of the rocks in the section. Changes in sandstone composition are often used for this type of interpretation. === Siwalik magnetostratigraphy === The Siwalik fluvial sequence (~6000 m thick, ~20 to 0.5 Ma) is a good example of the application of magnetostratigraphy to resolving confusion in continental fossil-based records. == See also == Biostratigraphy Chemostratigraphy Chronostratigraphy Cyclostratigraphy Lithostratigraphy Tectonostratigraphy Paleomagnetism == Notes == == References == == External links == International Stratigraphic Guide
Wikipedia/Magnetostratigraphy
A seismometer is an instrument that responds to ground displacement and shaking, such as that caused by earthquakes, volcanic eruptions, and explosions. Seismometers are usually combined with a timing device and a recording device to form a seismograph. The output of such a device—formerly recorded on paper (see picture) or film, now recorded and processed digitally—is a seismogram. Such data is used to locate and characterize earthquakes, and to study the internal structure of Earth. == Basic principles == A simple seismometer, sensitive to up-down motions of the Earth, is like a weight hanging from a spring, both suspended from a frame that moves along with the ground. The relative motion between the weight (called the mass) and the frame provides a measurement of the vertical ground motion. A rotating drum is attached to the frame and a pen is attached to the weight, thus recording any ground motion in a seismogram. Any movement of the ground moves the frame. The mass tends not to move because of its inertia, and by measuring the movement between the frame and the mass, the motion of the ground can be determined. Early seismometers used optical levers or mechanical linkages to amplify the small motions involved, recording on soot-covered paper or photographic paper. Modern instruments use electronics. In some systems, the mass is held nearly motionless relative to the frame by an electronic negative feedback loop. The motion of the mass relative to the frame is measured, and the feedback loop applies a magnetic or electrostatic force to keep the mass nearly motionless. The voltage needed to produce this force is the output of the seismometer, which is recorded digitally. In other systems the weight is allowed to move, and its motion induces a voltage in a coil that is attached to the mass and moves through the magnetic field of a magnet attached to the frame. This design is often used in a geophone, which is used in exploration for oil and gas. Seismic observatories usually have instruments measuring three axes: north–south (y-axis), east–west (x-axis), and vertical (z-axis). If only one axis is measured, it is usually the vertical, because it is less noisy and gives better records of some seismic waves. The foundation of a seismic station is critical. A professional station is sometimes mounted on bedrock. The best mountings may be in deep boreholes, which avoid thermal effects, ground noise and tilting from weather and tides. Other instruments are often mounted in insulated enclosures on small buried piers of unreinforced concrete. Reinforcing rods and aggregates would distort the pier as the temperature changes. A site is always surveyed for ground noise with a temporary installation before pouring the pier and laying conduit. Originally, European seismographs were placed in a particular area after a destructive earthquake. Today, they are spread to provide appropriate coverage (in the case of weak-motion seismology) or concentrated in high-risk regions (strong-motion seismology). == Nomenclature == The word derives from the Greek σεισμός, seismós, a shaking or quake, from the verb σείω, seíō, to shake, and μέτρον, métron, measure, and was coined by David Milne-Home in 1841 to describe an instrument designed by Scottish physicist James David Forbes. Seismograph is another Greek term, from seismós and γράφω, gráphō, to draw. 
It is often used to mean seismometer, though it is more applicable to the older instruments in which the measuring and recording of ground motion were combined than to modern systems, in which these functions are separated. Both types provide a continuous record of ground motion; this record distinguishes them from seismoscopes, which merely indicate that motion has occurred, perhaps with some simple measure of how large it was. The technical discipline concerning such devices is called seismometry, a branch of seismology. The concept of measuring the "shaking" of something means that the word "seismograph" might be used in a more general sense. For example, a monitoring station that tracks changes in electromagnetic noise affecting amateur radio waves presents an RF seismograph. And helioseismology studies the "quakes" on the Sun. == History == The first seismometer was made in China during the 2nd century. It was invented by Zhang Heng, a Chinese mathematician and astronomer. The first Western description of the device comes from the French physicist and priest Jean de Hautefeuille in 1703. The modern seismometer was developed in the 19th century. Seismometers were placed on the Moon starting in 1969 as part of the Apollo Lunar Surface Experiments Package. In December 2018, a seismometer was deployed on the planet Mars by the InSight lander, the first time a seismometer was placed onto the surface of another planet. === Ancient era === In ancient Egypt, Amenhotep, son of Hapu, is said to have invented a precursor of the seismometer: vertical wooden poles connected by wooden gutters on a central axis, which filled a vessel with water until full in order to detect earthquakes. In AD 132, Zhang Heng of China's Han dynasty is said to have invented the first seismoscope (by the definition above), which was called Houfeng Didong Yi (translated as "instrument for measuring the seasonal winds and the movements of the Earth"). The description we have, from the History of the Later Han Dynasty, says that it was a large bronze vessel, about 2 meters in diameter; at eight points around the top were dragons' heads holding bronze balls. When there was an earthquake, one of the dragons' mouths would open and drop its ball into a bronze toad at the base, making a sound and supposedly showing the direction of the earthquake. On at least one occasion, probably at the time of a large earthquake in Gansu in AD 143, the seismoscope indicated an earthquake even though one was not felt. The available text says that inside the vessel was a central column that could move along eight tracks; this is thought to refer to a pendulum, though it is not known exactly how this was linked to a mechanism that would open only one dragon's mouth. The first earthquake recorded by this seismoscope was supposedly "somewhere in the east". Days later, a rider from the east reported this earthquake. === Early designs (1259–1839) === By the 13th century, seismographic devices existed in the Maragheh observatory (founded 1259) in Persia, though it is unclear whether these were constructed independently or based on the first seismoscope. French physicist and priest Jean de Hautefeuille described a seismoscope in 1703, which used a bowl filled with mercury that would spill into one of eight receivers equally spaced around the bowl, though there is no evidence that he actually constructed the device. 
A mercury seismoscope was constructed in 1784 or 1785 by Atanasio Cavalli, a copy of which can be found at the University Library in Bologna, and a further mercury seismoscope was constructed by Niccolò Cacciatore in 1818. James Lind also built a seismological tool of unknown design or efficacy (known as an earthquake machine) in the late 1790s. Pendulum devices were being developed at the same time. Neapolitan naturalist Nicola Cirillo set up a network of pendulum earthquake detectors following the 1731 Puglia earthquake, where the amplitude was detected using a protractor to measure the swinging motion. Benedictine monk Andrea Bina further developed this concept in 1751, having the pendulum create trace marks in sand under the mechanism, providing both magnitude and direction of motion. Neapolitan clockmaker Domenico Salsano produced a similar pendulum which recorded using a paintbrush in 1783, labelling it a geo-sismometro, possibly the first use of a word similar to seismometer. Naturalist Nicolo Zupo devised an instrument to detect electrical disturbances and earthquakes at the same time (1784). The first moderately successful device for detecting the time of an earthquake was devised by Ascanio Filomarino in 1796, who improved upon Salsano's pendulum instrument, using a pencil to mark and a hair attached to the mechanism to inhibit the motion of a clock's balance wheel. This meant that the clock would only start once an earthquake took place, allowing determination of the time of incidence. After an earthquake on October 4, 1834, Luigi Pagani observed that the mercury seismoscope held at Bologna University had completely spilled over and did not provide useful information. He therefore devised a portable device that used lead shot to detect the direction of an earthquake: the lead fell into four bins arranged in a circle, determining the quadrant of earthquake incidence. He completed the instrument in 1841. === Early Modern designs (1839–1880) === In response to a series of earthquakes near Comrie in Scotland in 1839, a committee was formed in the United Kingdom to produce better detection devices for earthquakes. The outcome of this was an inverted-pendulum seismometer constructed by James David Forbes, first presented in a report by David Milne-Home in 1842, which recorded the measurements of seismic activity through the use of a pencil placed on paper above the pendulum. The designs provided did not prove effective, according to Milne's reports. It was Milne who coined the word seismometer in 1841, to describe this instrument. In 1843, the first horizontal pendulum was used in a seismometer, reported by Milne (though it is unclear if he was the original inventor). After these inventions, Robert Mallet published an 1848 paper where he suggested ideas for seismometer design, suggesting that such a device would need to register time, record amplitudes horizontally and vertically, and ascertain direction. His suggested design was funded, and construction was attempted, but his final design did not fulfill his expectations and suffered from the same problems as the Forbes design, being inaccurate and not self-recording. Karl Kreil constructed a seismometer in Prague between 1848 and 1850, which used a point-suspended rigid cylindrical pendulum covered in paper, drawn upon by a fixed pencil. The cylinder was rotated every 24 hours, providing an approximate time for a given quake. 
Luigi Palmieri, influenced by Mallet's 1848 paper, invented a seismometer in 1856 that could record the time of an earthquake. This device used metallic pendulums which closed an electric circuit with vibration, which then powered an electromagnet to stop a clock. Palmieri seismometers were widely distributed and used for a long time. By 1872, a committee in the United Kingdom led by James Bryce expressed their dissatisfaction with the seismometers then available, still using the large 1842 Forbes device located in Comrie Parish Church, and requested a seismometer which was compact, easy to install and easy to read. In 1875 they settled on a large example of the Mallet device, consisting of an array of cylindrical pins of various sizes installed at right angles to each other on a sand bed, where larger earthquakes would knock down larger pins. This device was constructed in 'Earthquake House' near Comrie, which can be considered the world's first purpose-built seismological observatory. As of 2013, no earthquake had been large enough to cause any of the cylinders to fall in either the original device or replicas. === The first seismographs (1880–) === The first seismographs were invented in the 1870s and 1880s. The first seismograph was produced by Filippo Cecchi in around 1875. A seismoscope would trigger the device to begin recording, and then a recording surface would produce a graphical illustration of the tremors automatically (a seismogram). However, the instrument was not sensitive enough, and the first seismogram produced by the instrument came in 1887, by which time John Milne had already demonstrated his design in Japan. In 1880, the first horizontal pendulum seismometer was developed by the team of John Milne, James Alfred Ewing and Thomas Gray, who worked as foreign-government advisors in Japan from 1880 to 1895. Milne, Ewing and Gray, all having been hired by the Meiji Government in the previous five years to assist Japan's modernization efforts, founded the Seismological Society of Japan in response to an earthquake that took place on February 22, 1880, at Yokohama (the Yokohama earthquake). Two instruments were constructed by Ewing over the next year, one being a common-pendulum seismometer and the other being the first seismometer using a damped horizontal pendulum. The innovative recording system allowed for a continuous record, the first to do so. The first seismogram was recorded on 3 November 1880 on both of Ewing's instruments. Modern seismometers would eventually descend from these designs. Milne has been referred to as the 'Father of modern seismology', and his seismograph design has been called the first modern seismometer. This produced the first effective measurement of horizontal motion. Gray would produce the first reliable method for recording vertical motion, which produced the first effective 3-axis recordings. An early special-purpose seismometer consisted of a large, stationary pendulum with a stylus on the bottom. As the earth started to move, the heavy mass of the pendulum had the inertia to stay still within the frame. The result is that the stylus scratched a pattern corresponding with the Earth's movement. This type of strong-motion seismometer recorded upon a smoked glass (glass with carbon soot). While not sensitive enough to detect distant earthquakes, this instrument could indicate the direction of the pressure waves and thus help find the epicenter of a local quake. Such instruments were useful in the analysis of the 1906 San Francisco earthquake. 
Further analysis was performed in the 1980s, using these early recordings, enabling a more precise determination of the initial fault break location in Marin County and its subsequent progression, mostly to the south. Later, professional suites of instruments for the worldwide standard seismographic network had one set of instruments tuned to oscillate at fifteen seconds, and the other at ninety seconds, each set measuring in three directions. Amateurs or observatories with limited means tuned their smaller, less sensitive instruments to ten seconds. The basic damped horizontal pendulum seismometer swings like the gate of a fence. A heavy weight is mounted on the point of a long (from 10 cm to several meters) triangle, hinged at its vertical edge. As the ground moves, the weight stays unmoving, swinging the "gate" on the hinge. The advantage of a horizontal pendulum is that it achieves very low frequencies of oscillation in a compact instrument. The "gate" is slightly tilted, so the weight tends to slowly return to a central position. The pendulum is adjusted (before the damping is installed) to oscillate once per three seconds, or once per thirty seconds. The general-purpose instruments of small stations or amateurs usually oscillate once per ten seconds. A pan of oil is placed under the arm, and a small sheet of metal mounted on the underside of the arm drags in the oil to damp oscillations. The level of oil, position on the arm, and angle and size of the sheet are adjusted until the damping is "critical", that is, just on the verge of oscillating. The hinge is very low friction, often torsion wires, so the only friction is the internal friction of the wire. Small seismographs with low proof masses are placed in a vacuum to reduce disturbances from air currents. Zollner described torsionally suspended horizontal pendulums as early as 1869, but developed them for gravimetry rather than seismometry. Early seismometers had an arrangement of levers on jeweled bearings, to scratch smoked glass or paper. Later, mirrors reflected a light beam to a direct-recording plate or roll of photographic paper. Briefly, some designs returned to mechanical movements to save money. In mid-twentieth-century systems, the light was reflected to a pair of differential electronic photosensors called a photomultiplier. The voltage generated in the photomultiplier was used to drive galvanometers which had a small mirror mounted on the axis. The moving reflected light beam would strike the surface of the turning drum, which was covered with photo-sensitive paper. The expense of developing photo-sensitive paper caused many seismic observatories to switch to ink or thermal-sensitive paper. After World War II, the seismometers developed by Milne, Ewing and Gray were adapted into the widely used Press-Ewing seismometer. == Modern instruments == Modern instruments use electronic sensors, amplifiers, and recording devices. Most are broadband, covering a wide range of frequencies. Some seismometers can measure motions with frequencies from 500 Hz to 0.00118 Hz (1/500 = 0.002 seconds per cycle, to 1/0.00118 = 850 seconds per cycle). The mechanical suspension for horizontal instruments remains the garden-gate described above. Vertical instruments use some kind of constant-force suspension, such as the LaCoste suspension. The LaCoste suspension uses a zero-length spring to provide a long period (high sensitivity). 
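A minimal sketch of the displacement response of such a mass-and-spring sensor, using the standard damped-oscillator result; the 30-second natural period is just an example value, and ζ = 1 corresponds to the critical damping described above.

```python
import numpy as np

def displacement_response(f, T0=30.0, zeta=1.0):
    """|X/U|: relative mass motion X per unit ground displacement U,
    for a sensor with natural period T0 (s) and damping ratio zeta."""
    w, w0 = 2 * np.pi * f, 2 * np.pi / T0
    return w**2 / np.sqrt((w0**2 - w**2)**2 + (2 * zeta * w0 * w)**2)

for f in [0.001, 1/30, 0.1, 1.0]:        # frequencies in Hz
    print(f, displacement_response(f))
# Well above the natural frequency the mass stays fixed in space and the
# response approaches 1; well below it the response falls off as f**2.
```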
Some modern instruments use a "triaxial" or "Galperin" design, in which three identical motion sensors are set at the same angle to the vertical but 120 degrees apart on the horizontal. Vertical and horizontal motions can be computed from the outputs of the three sensors, as sketched below. Seismometers unavoidably introduce some distortion into the signals they measure, but professionally designed systems have carefully characterized frequency transforms. Modern sensitivities come in three broad ranges: geophones, 50 to 750 V/m; local geologic seismographs, about 1,500 V/m; and teleseismographs, used for world survey, about 20,000 V/m. Instruments come in three main varieties: short-period, long-period and broadband. The short- and long-period instruments measure velocity and are very sensitive; however, they 'clip' the signal or go off-scale for ground motion that is strong enough to be felt by people. A 24-bit analog-to-digital conversion channel is commonplace. Practical devices are linear to roughly one part per million. Delivered seismometers come with two styles of output: analog and digital. Analog seismographs require analog recording equipment, possibly including an analog-to-digital converter. The output of a digital seismograph can be simply input to a computer. It presents the data in a standard digital format (often "SE2" over Ethernet). === Teleseismometers === The modern broadband seismograph can record a very broad range of frequencies. It consists of a small "proof mass", confined by electrical forces, driven by sophisticated electronics. As the earth moves, the electronics attempt to hold the mass steady through a feedback circuit. The amount of force necessary to achieve this is then recorded. In most designs the electronics holds a mass motionless relative to the frame. This device is called a "force balance accelerometer". It measures acceleration instead of velocity of ground movement. Basically, the distance between the mass and some part of the frame is measured very precisely, by a linear variable differential transformer. Some instruments use a linear variable differential capacitor. That measurement is then amplified by electronic amplifiers attached to parts of an electronic negative feedback loop. One of the amplified currents from the negative feedback loop drives a coil very like a loudspeaker. The result is that the mass stays nearly motionless. Most instruments measure the ground motion directly using the distance sensor. The voltage generated in a sense coil on the mass by the magnet directly measures the instantaneous velocity of the ground. The current to the drive coil provides a sensitive, accurate measurement of the force between the mass and frame, thus measuring directly the ground's acceleration (using F = ma, where F is force, m is mass, and a is acceleration). One of the continuing problems with sensitive vertical seismographs is the buoyancy of their masses. The uneven changes in pressure caused by wind blowing on an open window can easily change the density of the air in a room enough to cause a vertical seismograph to show spurious signals. Therefore, most professional seismographs are sealed in rigid gas-tight enclosures. For example, this is why a common Streckeisen model has a thick glass base that must be glued to its pier without bubbles in the glue. It might seem logical to make the heavy magnet serve as the mass, but that subjects the seismograph to errors when the Earth's magnetic field moves. 
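Here is the minimal sketch, promised above, of recovering vertical and horizontal components from the Galperin geometry, assuming the common convention in which each sensor axis makes an angle of arccos(1/√3) ≈ 54.7° with the vertical; the sample readings are invented.

```python
import numpy as np

# Rows are the U, V, W sensor axes expressed in (Z, N, E) coordinates:
# vertical component 1/sqrt(3), horizontal component sqrt(2/3), at
# azimuths 0, 120 and 240 degrees.
A = np.array([
    [1/np.sqrt(3),  np.sqrt(2/3),           0.0],   # U
    [1/np.sqrt(3), -np.sqrt(1/6),  np.sqrt(1/2)],   # V
    [1/np.sqrt(3), -np.sqrt(1/6), -np.sqrt(1/2)],   # W
])

uvw = np.array([0.3, -0.1, 0.5])   # hypothetical simultaneous samples
zne = A.T @ uvw                    # A is orthogonal, so its inverse is A.T
print(zne)                         # vertical (Z), north (N), east (E) motion
print(np.allclose(A @ zne, uvw))   # True
```

The three Galperin axes are mutually orthogonal, which is why the inverse of the geometry matrix is simply its transpose, and one appeal of the design is that all three sensors are mechanically identical.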
Sensitivity to magnetic fields is also why a seismograph's moving parts are constructed from materials that interact minimally with magnetic fields. A seismograph is also sensitive to changes in temperature, so many instruments are constructed from low-expansion materials such as nonmagnetic invar. The hinges on a seismograph are usually patented, and by the time the patent has expired, the design has been improved. The most successful public-domain designs use thin foil hinges in a clamp. Another issue is that the transfer function of a seismograph must be accurately characterized, so that its frequency response is known. This is often the crucial difference between professional and amateur instruments. Most are characterized on a variable-frequency shaking table. === Strong-motion seismometers === Another type of seismometer is a digital strong-motion seismometer, or accelerograph. The data from such an instrument is essential to understand how an earthquake affects man-made structures, through earthquake engineering. The recordings of such instruments are crucial for the assessment of seismic hazard, through engineering seismology. A strong-motion seismometer measures acceleration. This can be mathematically integrated later to give velocity and position. Strong-motion seismometers are not as sensitive to ground motions as teleseismic instruments, but they stay on scale during the strongest seismic shaking. Strong-motion sensors are used for intensity meter applications. === Other forms === Accelerographs and geophones are often heavy cylindrical magnets with a spring-mounted coil inside. As the case moves, the coil tends to stay stationary, so the magnetic field cuts the wires, inducing current in the output wires. They receive frequencies from several hundred hertz down to 1 Hz. Some have electronic damping, a low-budget way to get some of the performance of the closed-loop wide-band geologic seismographs. Strain-beam accelerometers constructed as integrated circuits are too insensitive for geologic seismographs (2002), but are widely used in geophones. Some other sensitive designs measure the current generated by the flow of a non-corrosive ionic fluid through an electret sponge or a conductive fluid through a magnetic field. === Interconnected seismometers === Seismometers spaced in a seismic array can be used to precisely locate, in three dimensions, the source of an earthquake, using the time it takes for seismic waves to propagate away from the hypocenter, the initiating point of fault rupture (see also Earthquake location); a toy version of this idea is sketched below. Interconnected seismometers are also used, as part of the International Monitoring System, to detect underground nuclear test explosions, as well as for earthquake early warning systems. These seismometers are often used as part of a large-scale governmental or scientific project, but some organizations such as the Quake-Catcher Network can use residential-size detectors built into computers to detect earthquakes as well. In reflection seismology, an array of seismometers images sub-surface features. The data are reduced to images using algorithms similar to tomography. The data reduction methods resemble those of computer-aided tomographic medical imaging X-ray machines (CAT scans), or imaging sonars. A worldwide array of seismometers can actually image the interior of the Earth in wave-speed and transmissivity. This type of system uses events such as earthquakes, impact events or nuclear explosions as wave sources. 
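The arrival-time location idea mentioned above can be sketched as a toy two-dimensional grid search under an assumed uniform P-wave speed; the station layout, the 6 km/s velocity, and the event parameters below are all invented, and real locators solve a three-dimensional problem with laterally varying velocity.

```python
import numpy as np

vp = 6.0                                            # assumed P speed, km/s
stations = np.array([[0, 0], [100, 0], [0, 100], [80, 90]], float)  # km

true_src, t0 = np.array([40.0, 55.0]), 3.0          # hypothetical event
t_obs = t0 + np.linalg.norm(stations - true_src, axis=1) / vp

best, best_misfit = None, np.inf
for x in np.arange(0.0, 101.0, 1.0):
    for y in np.arange(0.0, 101.0, 1.0):
        t_pred = np.linalg.norm(stations - [x, y], axis=1) / vp
        misfit = np.std(t_obs - t_pred)   # demeaning absorbs the origin time
        if misfit < best_misfit:
            best, best_misfit = (x, y), misfit
print(best)                               # (40.0, 55.0): true epicenter found
```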
The first efforts at this imaging method used manual data reduction from paper seismograph charts. Modern digital seismograph records are better adapted to direct computer use. With inexpensive seismometer designs and internet access, amateurs and small institutions have even formed a "public seismograph network". Seismographic systems used for petroleum or other mineral exploration historically used an explosive and a wireline of geophones unrolled behind a truck. Now most short-range systems use "thumpers" that hit the ground, and some small commercial systems have such good digital signal processing that a few sledgehammer strikes provide enough signal for short-distance refractive surveys. Exotic cross or two-dimensional arrays of geophones are sometimes used to perform three-dimensional reflective imaging of subsurface features. Basic linear refractive geomapping software (once a black art) is available off-the-shelf, running on laptop computers, using strings as small as three geophones. Some systems now come in an 18" (0.5 m) plastic field case with a computer, display and printer in the cover. Small seismic imaging systems are now sufficiently inexpensive to be used by civil engineers to survey foundation sites, locate bedrock, and find subsurface water. === Fiber optic cables as seismometers === A new technique for detecting earthquakes has been found, using fiber optic cables. In 2016 a team of metrologists running frequency metrology experiments in England observed noise with a waveform resembling the seismic waves generated by earthquakes. This was found to match seismological observations of an Mw 6.0 earthquake in Italy, ~1400 km away. Further experiments in England, Italy, and with a submarine fiber optic cable to Malta detected additional earthquakes, including one 4,100 km away and an ML 3.4 earthquake 89 km away from the cable. Seismic waves are detectable because they cause micrometer-scale changes in the length of the cable. As the length changes, so does the time it takes a packet of light to traverse to the far end of the cable and back (using a second fiber). Using ultra-stable metrology-grade lasers, these extremely minute shifts of timing (on the order of femtoseconds) appear as phase changes. The point of the cable first disturbed by an earthquake's P wave (essentially a sound wave in rock) can be determined by sending packets in both directions in the looped pair of optical fibers; the difference in the arrival times of the first pair of perturbed packets indicates the distance along the cable. This point is also the point closest to the earthquake's epicenter, which should be on a plane perpendicular to the cable. The difference between the P-wave and S-wave arrival times provides a distance (under ideal conditions), constraining the epicenter to a circle. A second detection on a non-parallel cable is needed to resolve the ambiguity of the resulting solution. Additional observations constrain the location of the earthquake's epicenter, and may resolve the depth. This technique is expected to be a boon in observing earthquakes, especially the smaller ones, in vast portions of the global ocean where there are no seismometers, and at much lower cost than ocean-bottom seismometers. === Deep learning === Researchers at Stanford University created a deep-learning algorithm called UrbanDenoiser which can detect earthquakes, particularly in urban cities. 
The algorithm filters background noise out of the seismic data gathered in busy urban areas in order to detect earthquakes. == Recording == Today, the most common recorder is a computer with an analog-to-digital converter, a disk drive and an internet connection; for amateurs, a PC with a sound card and associated software is adequate. Most systems record continuously, but some record only when a signal is detected, as shown by a short-term increase in the variation of the signal, compared to its long-term average (which can vary slowly because of changes in seismic noise), also known as a STA/LTA trigger. Prior to the availability of digital processing of seismic data in the late 1970s, the records were made in a few different forms on different types of media. A "Helicorder" drum was a device used to record data onto photographic paper or in the form of paper and ink. A "Develocorder" was a machine that recorded data from up to 20 channels onto 16-mm film. The recorded film could be viewed with a viewing machine, and reading and measuring from these types of media was done by hand. After digital processing came into use, the archives of seismic data were recorded on magnetic tape. Due to the deterioration of older magnetic tape media, a large number of waveforms from the archives are not recoverable. == See also == Accelerometer Galitzine, Boris Borisovich Geophone Inge Lehmann IRIS Consortium John Milne Pacific Northwest Seismic Network Plate tectonics Quake-Catcher Network Wood-Anderson seismometer == References == == External links == The history of early seismometers The Lehman amateur seismograph, from Scientific American Archived 2009-02-04 at the Wayback Machine – not designed for calibrated measurement. Sean Morrisey's professional design of an amateur teleseismograph Also see Keith Payea's version Both accessed 2010-9-29 Morrisey was a professional seismographic instrument engineer. This superior design uses a zero-length spring to achieve a 60-second period, active feedback and a uniquely convenient variable reluctance differential transducer, with parts scavenged from a hardware store. The frequency transform is carefully designed, unlike most amateur instruments. Morrisey is deceased, but the site remains up as a public service. SeisMac is a free tool for recent Macintosh laptop computers that implements a real-time three-axis seismograph. The Development Of Very-Broad-Band Seismography: Quanterra And The Iris Collaboration Archived 2016-08-10 at the Wayback Machine discusses the history of development of the primary technology in global earthquake research. Video of seismograph at Hawaiian Volcano Observatory – on Flickr – retrieved on 2009-06-15. Seismoscope – Research References 2012 Iris EDU – How Does A Seismometer Work? Seismometers, seismographs, seismograms – what's the difference? How do they work? – USGS
Wikipedia/Seismograph
A stratigraphic section is a sequence of layers of rocks in the order they were deposited. It is based on the principle of original horizontality, which states that layers of sediment are originally deposited horizontally under the action of gravity. Biostratigraphers estimate the age of stratigraphic sections by using the faunal assemblages contained within rock samples from outcrop and drill cores. Geochronologists precisely date rocks within the stratigraphic section to provide better absolute bounds on the timing and rates of deposition. Magnetic stratigraphers look for signs of magnetic reversals in igneous rock units within the drill cores. Other scientists perform stable-isotope studies on the rocks to gain information about past climate. Stratigraphic sections can also be used to locate areas for water, coal, and hydrocarbon extraction, particularly petroleum and natural gas. A Global Boundary Stratotype Section and Point (GSSP) is an internationally agreed-upon reference point on a stratigraphic section which defines the lower boundary of a stage on the geologic time scale. (Recently this approach has also been used to define the base of a system.) == Gallery == == References ==
Wikipedia/Stratigraphic_section
Coalescence is the process by which two or more droplets, bubbles, or particles merge during contact to form a single daughter droplet, bubble, or particle. Coalescence manifests itself from the microscopic scale in meteorology to the macroscopic scale in astrophysics. For example, it is seen in the formation of raindrops as well as planetary and star formation. In meteorology, its role is crucial in the formation of rain. As droplets are carried by the updrafts and downdrafts in a cloud, they collide and coalesce to form larger droplets. When the droplets become too large to be sustained on the air currents, they begin to fall as rain. Adding to this process, the cloud may be seeded with ice from higher altitudes, either via the cloud tops reaching −40 °C (−40 °F), or via the cloud being seeded by ice from cirrus clouds. Contrast-enhanced ultrasound in medicine applies microscopic bubbles for imaging and therapy. Coalescence of ultrasound contrast agent microbubbles is studied to prevent embolisms or to block tumour vessels. Microbubble coalescence has been studied with the aid of high-speed photography. In cloud physics, the main mechanism of collision is the difference in terminal velocity between droplets, the terminal velocity being a function of droplet size. The other factors that determine the collision rate are the droplet concentration and turbulence. == See also == Accretion (astrophysics) Accretion (meteorology) Bergeron process Coalescer == References == == External links == American Meteorological Society, Glossary of Meteorology: Coalescence Schlumberger Oilfield Glossary Archived 2012-05-31 at the Wayback Machine The Bergeron Process The Coalescence of Bubbles - A Numerical Study - Archived 2011-04-01 at the Wayback Machine
Wikipedia/Coalescence_(physics)
In condensed matter physics, the resonating valence bond theory (RVB) is a theoretical model that attempts to describe high-temperature superconductivity, and in particular the superconductivity in cuprate compounds. It was proposed by P. W. Anderson and Ganapathy Baskaran in 1987. The theory states that in copper oxide lattices, electrons from neighboring copper atoms interact to form a valence bond, which locks them in place. However, with doping, these electrons can act as mobile Cooper pairs and are able to superconduct. Anderson observed in his 1987 paper that the origins of superconductivity in doped cuprates lay in the Mott insulator nature of crystalline copper oxide. RVB builds on the Hubbard and t-J models used in the study of strongly correlated materials. In 2014, evidence that fractionalized particles can occur in quasi-two-dimensional magnetic materials was found by EPFL scientists, lending support to Anderson's theory. == Description == The physics of Mott insulators is described by the repulsive Hubbard model Hamiltonian: H = − t ∑ ⟨ i j ⟩ ( c i σ † c j σ + h.c. ) + U ∑ i n i ↑ n i ↓ {\displaystyle H=-t\sum _{\langle ij\rangle }(c_{i\sigma }^{\dagger }c_{j\sigma }+{\text{h.c.}})+U\sum _{i}n_{i\uparrow }n_{i\downarrow }} In 1971, Anderson first suggested that this Hamiltonian can have a non-degenerate ground state that is composed of disordered spin states. Shortly after the high-temperature superconductors were discovered, Anderson and Kivelson et al. proposed a resonating valence bond ground state for these materials, written as | RVB ⟩ = ∑ C | C ⟩ {\displaystyle |{\text{RVB}}\rangle =\sum _{C}|C\rangle } where C {\displaystyle C} represents a covering of a lattice by nearest-neighbor dimers. Each such covering is weighted equally. In a mean-field approximation, the RVB state can be written in terms of a Gutzwiller projection, and displays a superconducting phase transition per the Kosterlitz–Thouless mechanism. However, a rigorous proof for the existence of a superconducting ground state in either the Hubbard or the t-J Hamiltonian is not yet known. Further, the stability of the RVB ground state has not yet been confirmed. == References ==
Wikipedia/Resonating_valence_bond_theory
In theoretical physics, the Bogoliubov transformation, also known as the Bogoliubov–Valatin transformation, was independently developed in 1958 by Nikolay Bogolyubov and John George Valatin for finding solutions of BCS theory in a homogeneous system. The Bogoliubov transformation is an isomorphism of either the canonical commutation relation algebra or the canonical anticommutation relation algebra. This induces an autoequivalence on the respective representations. The Bogoliubov transformation is often used to diagonalize Hamiltonians, which yields the stationary solutions of the corresponding Schrödinger equation. It is also important for understanding the Unruh effect, Hawking radiation, Davies–Fulling radiation (the moving mirror model), pairing effects in nuclear physics, and many other topics. Diagonalizing a Hamiltonian with a Bogoliubov transformation comes with a corresponding transformation of the state function, so operator eigenvalues calculated with the diagonalized Hamiltonian on the transformed state function are the same as before. == Single bosonic mode example == Consider the canonical commutation relation for bosonic creation and annihilation operators in the harmonic oscillator basis [ a ^ , a ^ † ] = 1. {\displaystyle \left[{\hat {a}},{\hat {a}}^{\dagger }\right]=1.} Define a new pair of operators b ^ = u a ^ + v a ^ † , {\displaystyle {\hat {b}}=u{\hat {a}}+v{\hat {a}}^{\dagger },} b ^ † = u ∗ a ^ † + v ∗ a ^ , {\displaystyle {\hat {b}}^{\dagger }=u^{*}{\hat {a}}^{\dagger }+v^{*}{\hat {a}},} for complex numbers u and v, where the second operator is the Hermitian conjugate of the first. The Bogoliubov transformation is the canonical transformation mapping the operators a ^ {\displaystyle {\hat {a}}} and a ^ † {\displaystyle {\hat {a}}^{\dagger }} to b ^ {\displaystyle {\hat {b}}} and b ^ † {\displaystyle {\hat {b}}^{\dagger }} . To find the conditions on the constants u and v such that the transformation is canonical, the commutator is evaluated, namely, [ b ^ , b ^ † ] = [ u a ^ + v a ^ † , u ∗ a ^ † + v ∗ a ^ ] = ⋯ = ( | u | 2 − | v | 2 ) [ a ^ , a ^ † ] . {\displaystyle \left[{\hat {b}},{\hat {b}}^{\dagger }\right]=\left[u{\hat {a}}+v{\hat {a}}^{\dagger },u^{*}{\hat {a}}^{\dagger }+v^{*}{\hat {a}}\right]=\cdots =\left(|u|^{2}-|v|^{2}\right)\left[{\hat {a}},{\hat {a}}^{\dagger }\right].} It is then evident that | u | 2 − | v | 2 = 1 {\displaystyle |u|^{2}-|v|^{2}=1} is the condition for which the transformation is canonical. Since the form of this condition is suggestive of the hyperbolic identity cosh 2 ⁡ x − sinh 2 ⁡ x = 1 , {\displaystyle \cosh ^{2}x-\sinh ^{2}x=1,} the constants u and v can be readily parametrized as u = e i θ 1 cosh ⁡ r , {\displaystyle u=e^{i\theta _{1}}\cosh r,} v = e i θ 2 sinh ⁡ r . {\displaystyle v=e^{i\theta _{2}}\sinh r.} This is interpreted as a linear symplectic transformation of the phase space. By comparing to the Bloch–Messiah decomposition, the two angles θ 1 {\displaystyle \theta _{1}} and θ 2 {\displaystyle \theta _{2}} correspond to the orthogonal symplectic transformations (i.e., rotations) and the squeezing factor r {\displaystyle r} corresponds to the diagonal transformation. === Applications === The most prominent application is by Nikolai Bogoliubov himself in the context of superfluidity. Other applications comprise Hamiltonians and excitations in the theory of antiferromagnetism. 
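The canonical condition above can be checked numerically in a truncated Fock space. Below is a minimal Python sketch with arbitrary squeeze parameters; in it, the commutator of the new operators deviates from the identity only in the last row and column, an artifact of the truncation.

```python
import numpy as np

N = 40                                       # Fock-space truncation
a = np.diag(np.sqrt(np.arange(1, N)), k=1)   # annihilation operator
adag = a.conj().T

r, th1, th2 = 0.3, 0.5, 1.1                  # arbitrary parameters
u = np.exp(1j * th1) * np.cosh(r)
v = np.exp(1j * th2) * np.sinh(r)
print(abs(u)**2 - abs(v)**2)                 # 1.0: the canonical condition

b = u * a + v * adag
comm = b @ b.conj().T - b.conj().T @ b       # [b, b†]
print(np.allclose(comm[:N-1, :N-1], np.eye(N-1)))  # True away from the edge
```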
When quantum field theory is formulated in curved spacetime, the definition of the vacuum changes, and a Bogoliubov transformation between these different vacua is possible. This is used in the derivation of Hawking radiation. Bogoliubov transforms are also used extensively in quantum optics, particularly when working with Gaussian unitaries (such as beamsplitters, phase shifters, and squeezing operations). == Fermionic mode == For the anticommutation relations { a ^ , a ^ } = 0 , { a ^ , a ^ † } = 1 , {\displaystyle \left\{{\hat {a}},{\hat {a}}\right\}=0,\left\{{\hat {a}},{\hat {a}}^{\dagger }\right\}=1,} the Bogoliubov transformation is constrained by u v = 0 , | u | 2 + | v | 2 = 1 {\displaystyle uv=0,|u|^{2}+|v|^{2}=1} . Therefore, the only non-trivial possibility is u = 0 , | v | = 1 , {\displaystyle u=0,|v|=1,} corresponding to particle–antiparticle interchange (or particle–hole interchange in many-body systems) with the possible inclusion of a phase shift. Thus, for a single particle, the transformation can only be implemented (1) for a Dirac fermion, where particle and antiparticle are distinct (as opposed to a Majorana fermion or chiral fermion), or (2) for multi-fermionic systems, in which there is more than one type of fermion. === Applications === The most prominent application is again by Nikolai Bogoliubov himself, this time for the BCS theory of superconductivity. The necessity of performing a Bogoliubov transformation becomes obvious when, in the mean-field approximation, the Hamiltonian of the system can in both cases be written as a sum of bilinear terms in the original creation and destruction operators, involving finite ⟨ a i + a j + ⟩ {\displaystyle \langle a_{i}^{+}a_{j}^{+}\rangle } terms, i.e. one must go beyond the usual Hartree–Fock method. In particular, in the mean-field Bogoliubov–de Gennes Hamiltonian formalism with a superconducting pairing term such as Δ a i + a j + + h.c. {\displaystyle \Delta a_{i}^{+}a_{j}^{+}+{\text{h.c.}}} , the Bogoliubov transformed operators b , b † {\displaystyle b,b^{\dagger }} annihilate and create quasiparticles (each with well-defined energy, momentum and spin but in a quantum superposition of electron and hole state), and have coefficients u {\displaystyle u} and v {\displaystyle v} given by eigenvectors of the Bogoliubov–de Gennes matrix. The method is also applicable in nuclear physics, where it may describe the "pairing energy" of nucleons in a heavy element. == Multimode example == The Hilbert space under consideration is equipped with a whole set of creation and annihilation operators, and henceforth describes a higher-dimensional quantum harmonic oscillator (usually an infinite-dimensional one). The ground state of the corresponding Hamiltonian is annihilated by all the annihilation operators: ∀ i a i | 0 ⟩ = 0. {\displaystyle \forall i\qquad a_{i}|0\rangle =0.} All excited states are obtained as linear combinations of the ground state excited by some creation operators: ∏ k = 1 n a i k † | 0 ⟩ . 
{\displaystyle \prod _{k=1}^{n}a_{i_{k}}^{\dagger }|0\rangle .} One may redefine the creation and the annihilation operators by a linear redefinition: a i ′ = ∑ j ( u i j a j + v i j a j † ) , {\displaystyle a'_{i}=\sum _{j}(u_{ij}a_{j}+v_{ij}a_{j}^{\dagger }),} where the coefficients u i j , v i j {\displaystyle u_{ij},v_{ij}} must satisfy certain rules to guarantee that the annihilation operators and the creation operators a i ′ † {\displaystyle a_{i}^{\prime \dagger }} , defined by the Hermitian conjugate equation, have the same commutators for bosons and anticommutators for fermions. The equation above defines the Bogoliubov transformation of the operators. The ground state annihilated by all a i ′ {\displaystyle a'_{i}} is different from the original ground state | 0 ⟩ {\displaystyle |0\rangle } , and they can be viewed as the Bogoliubov transformations of one another using the operator–state correspondence. They can also be defined as squeezed coherent states. The BCS wave function is an example of a squeezed coherent state of fermions. == Unified matrix description == Because Bogoliubov transformations are linear recombinations of operators, it is more convenient and insightful to write them in terms of matrix transformations. If a pair of annihilators ( a , b ) {\displaystyle (a,b)} transforms as ( α β ) = U ( a b ) {\displaystyle {\begin{pmatrix}\alpha \\\beta \end{pmatrix}}=U{\begin{pmatrix}a\\b\end{pmatrix}}} where U {\displaystyle U} is a 2 × 2 {\displaystyle 2\times 2} matrix, then naturally ( α † β † ) = U ∗ ( a † b † ) {\displaystyle {\begin{pmatrix}\alpha ^{\dagger }\\\beta ^{\dagger }\end{pmatrix}}=U^{*}{\begin{pmatrix}a^{\dagger }\\b^{\dagger }\end{pmatrix}}} For fermion operators, the requirement that the canonical anticommutation relations be preserved imposes two requirements on the form of the matrix U {\displaystyle U} U = ( u v − v ∗ u ∗ ) {\displaystyle U={\begin{pmatrix}u&v\\-v^{*}&u^{*}\end{pmatrix}}} and | u | 2 + | v | 2 = 1 {\displaystyle |u|^{2}+|v|^{2}=1} For boson operators, the commutation relations require U = ( u v v ∗ u ∗ ) {\displaystyle U={\begin{pmatrix}u&v\\v^{*}&u^{*}\end{pmatrix}}} and | u | 2 − | v | 2 = 1 {\displaystyle |u|^{2}-|v|^{2}=1} These conditions can be written uniformly as U Γ ± U † = Γ ± {\displaystyle U\Gamma _{\pm }U^{\dagger }=\Gamma _{\pm }} where Γ ± = ( 1 0 0 ± 1 ) {\displaystyle \Gamma _{\pm }={\begin{pmatrix}1&0\\0&\pm 1\end{pmatrix}}} with the upper and lower signs of Γ ± {\displaystyle \Gamma _{\pm }} applying to fermions and bosons, respectively. === Diagonalizing a quadratic Hamiltonian using matrix description === The Bogoliubov transformation lets us diagonalize a quadratic Hamiltonian H ^ = ( a † b † ) H ( a b ) {\displaystyle {\hat {H}}={\begin{pmatrix}a^{\dagger }&b^{\dagger }\end{pmatrix}}H{\begin{pmatrix}a\\b\end{pmatrix}}} by just diagonalizing the matrix Γ ± H {\displaystyle \Gamma _{\pm }H} . In the notations above, it is important to distinguish the operator H ^ {\displaystyle {\hat {H}}} and the numeric matrix H {\displaystyle H} . This fact can be seen by rewriting H ^ {\displaystyle {\hat {H}}} as H ^ = ( α † β † ) Γ ± U ( Γ ± H ) U − 1 ( α β ) {\displaystyle {\hat {H}}={\begin{pmatrix}\alpha ^{\dagger }&\beta ^{\dagger }\end{pmatrix}}\Gamma _{\pm }U(\Gamma _{\pm }H)U^{-1}{\begin{pmatrix}\alpha \\\beta \end{pmatrix}}} and Γ ± U ( Γ ± H ) U − 1 = D {\displaystyle \Gamma _{\pm }U(\Gamma _{\pm }H)U^{-1}=D} if and only if U {\displaystyle U} diagonalizes Γ ± H {\displaystyle \Gamma _{\pm }H} , i.e. U ( Γ ± H ) U − 1 = Γ ± D {\displaystyle U(\Gamma _{\pm }H)U^{-1}=\Gamma _{\pm }D} . 
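As a numerical sketch of this matrix recipe for the bosonic case, the snippet below works with a 2×2 quadratic form written in a Nambu-like basis pairing one annihilator with one creator; that basis choice, the frequency and coupling values, and the closed-form choice tanh 2r = λ/ω are assumptions specific to this symmetric illustration.

```python
import numpy as np

omega, lam = 1.0, 0.6                       # stable only for |lam| < omega
H = np.array([[omega, lam], [lam, omega]])  # quadratic form in the basis (a, b†)
G = np.diag([1.0, -1.0])                    # Gamma_minus, the bosonic sign matrix

print(np.linalg.eigvals(G @ H))             # ±0.8 = ±sqrt(omega**2 - lam**2)

r = 0.5 * np.arctanh(lam / omega)           # choose tanh(2r) = lam/omega
c, s = np.cosh(r), np.sinh(r)
U = np.array([[c, s], [s, c]])              # candidate Bogoliubov matrix
print(np.allclose(U @ G @ U.T, G))          # True: the canonical condition holds

Uinv = np.linalg.inv(U)
Hp = Uinv.T @ H @ Uinv                      # quadratic form in the new operators
print(np.round(Hp, 12))                     # diag(0.8, 0.8): now diagonal
```

Note that for bosons the eigenvalue problem involves the non-Hermitian matrix Γ₋H, which is why the transformation is paraunitary rather than unitary; for fermions Γ₊ is the identity and the diagonalization reduces to ordinary Hermitian diagonalization.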
== Other applications == === Fermionic condensates === Bogoliubov transformations are a crucial mathematical tool for understanding and describing fermionic condensates. They provide a way to diagonalize the Hamiltonian of an interacting fermion system in the presence of a condensate, allowing us to identify the elementary excitations, or quasiparticles, of the system. In a system where fermions can form pairs, the standard approach of filling single-particle energy levels (the Fermi sea) is insufficient. The presence of a condensate implies a coherent superposition of states with different particle numbers, making the usual creation and annihilation operators inadequate. The Hamiltonian of such a system typically contains terms that create or annihilate pairs of fermions, such as: H ∼ ∑ k ϵ k c k † c k + ∑ k Δ k c k † c − k † + Δ k ∗ c − k c k {\displaystyle H\sim \sum _{k}\epsilon _{k}c_{k}^{\dagger }c_{k}+\sum _{k}\Delta _{k}c_{k}^{\dagger }c_{-k}^{\dagger }+\Delta _{k}^{*}c_{-k}c_{k}} where c k † {\displaystyle c_{k}^{\dagger }} and c k {\displaystyle c_{k}} are the creation and annihilation operators for a fermion with momentum k {\displaystyle k} , ϵ k {\displaystyle \epsilon _{k}} is the single-particle energy, and Δ k {\displaystyle \Delta _{k}} is the pairing amplitude, which characterizes the strength of the condensate. This Hamiltonian is not diagonal in terms of the original fermion operators, making it difficult to directly interpret the physical properties of the system. Bogoliubov transformations provide a solution by introducing a new set of quasiparticle operators, γ k † {\displaystyle \gamma _{k}^{\dagger }} and γ k {\displaystyle \gamma _{k}} , which are linear combinations of the original fermion operators: γ k = u k c k − v k c − k † γ k † = u k ∗ c k † − v k ∗ c − k {\displaystyle {\begin{aligned}\gamma _{k}&=u_{k}c_{k}-v_{k}c_{-k}^{\dagger }\\\gamma _{k}^{\dagger }&=u_{k}^{*}c_{k}^{\dagger }-v_{k}^{*}c_{-k}\end{aligned}}} where u k {\displaystyle u_{k}} and v k {\displaystyle v_{k}} are complex coefficients that satisfy the normalization condition | u k | 2 + | v k | 2 = 1 {\displaystyle |u_{k}|^{2}+|v_{k}|^{2}=1} . This transformation mixes particle and hole creation operators, reflecting the fact that the quasiparticles are a superposition of particles and holes due to the pairing interaction. This transformation was first introduced by N. N. Bogoliubov in his seminal work on superfluidity. The coefficients u k {\displaystyle u_{k}} and v k {\displaystyle v_{k}} are chosen such that the Hamiltonian, when expressed in terms of the quasiparticle operators, becomes diagonal: H = E 0 + ∑ k E k γ k † γ k {\displaystyle H=E_{0}+\sum _{k}E_{k}\gamma _{k}^{\dagger }\gamma _{k}} where E 0 {\displaystyle E_{0}} is the ground state energy and E k {\displaystyle E_{k}} is the energy of the quasiparticle with momentum k {\displaystyle k} . The diagonalization process involves solving the Bogoliubov–de Gennes equations, which are a set of self-consistent equations for the coefficients u k {\displaystyle u_{k}} , v k {\displaystyle v_{k}} , and the pairing amplitude Δ k {\displaystyle \Delta _{k}} . A detailed discussion of the Bogoliubov–de Gennes equations can be found in de Gennes' book on superconductivity. 
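To make the diagonalization concrete, here is a minimal sketch (with an assumed constant real gap Δ and invented numbers, not taken from the sources above) evaluating the standard solution of the Bogoliubov–de Gennes equations, u_k² = (1 + ε_k/E_k)/2 and v_k² = (1 − ε_k/E_k)/2 with E_k = √(ε_k² + Δ²):

```python
import numpy as np

delta = 1.0                            # pairing amplitude (invented value)
eps_k = np.linspace(-4.0, 4.0, 9)      # single-particle energies near the Fermi level
E_k = np.sqrt(eps_k**2 + delta**2)     # quasiparticle spectrum

u2 = 0.5 * (1.0 + eps_k / E_k)
v2 = 0.5 * (1.0 - eps_k / E_k)
assert np.allclose(u2 + v2, 1.0)       # the normalization condition above

# Deep below the Fermi level the quasiparticle is hole-like (v^2 -> 1); far
# above it is particle-like (u^2 -> 1); at eps_k = 0 it is an equal mixture.
for e, E, u, v in zip(eps_k, E_k, u2, v2):
    print(f"eps_k = {e:+.1f}   E_k = {E:.3f}   u^2 = {u:.3f}   v^2 = {v:.3f}")
```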
==== Physical interpretation ==== The Bogoliubov transformation reveals several key features of fermion condensates: Quasiparticles: The elementary excitations of the system are not individual fermions but quasiparticles, which are coherent superpositions of particles and holes. These quasiparticles have a modified energy spectrum E k = ϵ k 2 + | Δ k | 2 {\displaystyle E_{k}={\sqrt {\epsilon _{k}^{2}+|\Delta _{k}|^{2}}}} , which includes a gap of size | Δ k | {\displaystyle |\Delta _{k}|} at the Fermi surface, where ϵ k = 0 {\displaystyle \epsilon _{k}=0} . This gap represents the energy required to break a Cooper pair and is a hallmark of superconductivity and other fermionic condensate phenomena. Ground state: The ground state of the system is not simply a filled Fermi sea but a state where all quasiparticle levels are unoccupied, i.e., γ k | B C S ⟩ = 0 {\displaystyle \gamma _{k}|\mathrm {BCS} \rangle =0} for all k {\displaystyle k} . This state, often called the BCS state in the context of superconductivity, is a coherent superposition of states with different particle numbers and represents the macroscopic condensate. Broken symmetry: The formation of a fermion condensate is often associated with the spontaneous breaking of a symmetry, such as the U(1) gauge symmetry in superconductors. The Bogoliubov transformation provides a way to describe the system in the broken symmetry phase. The connection between broken symmetry and Bogoliubov transformations is explored in Anderson's work on pseudo-spin and gauge invariance. == See also == Holstein–Primakoff transformation Jordan–Wigner transformation Jordan–Schwinger transformation Klein transformation == References == == Further reading == The whole topic, and many specific applications, are treated in the following textbooks: Blaizot, J.-P.; Ripka, G. (1985). Quantum Theory of Finite Systems. MIT Press. ISBN 0-262-02214-1. Fetter, A.; Walecka, J. (2003). Quantum Theory of Many-Particle Systems. Dover. ISBN 0-486-42827-3. Kittel, Ch. (1987). Quantum Theory of Solids. Wiley. ISBN 0-471-62412-8. Wagner, M. (1986). Unitary Transformations in Solid State Physics. Elsevier Science. ISBN 0-444-86975-1.
Wikipedia/Bogoliubov_transformation
Bilayer graphene is a material consisting of two layers of graphene. One of the first reports of bilayer graphene was in the seminal 2004 Science paper by Geim and colleagues, in which they described devices "which contained just one, two, or three atomic layers". == Structure == Bilayer graphene can exist in the AB, or Bernal-stacked form, where half of the atoms lie directly over the center of a hexagon in the lower graphene sheet, and half of the atoms lie over an atom, or, less commonly, in the AA form, in which the layers are exactly aligned. In Bernal-stacked graphene, twin boundaries are common, with the stacking transitioning from AB to BA across them. Twisted layers, where one layer is rotated relative to the other, have also been extensively studied. Quantum Monte Carlo methods have been used to calculate the binding energies of AA- and AB-stacked bilayer graphene, which are 11.5(9) and 17.7(9) meV per atom, respectively. This is consistent with the observation that the AB-stacked structure is more stable than the AA-stacked structure. == Synthesis == Bilayer graphene can be made by exfoliation from graphite or by chemical vapor deposition (CVD). In 2016, Rodney S. Ruoff and colleagues showed that large single-crystal bilayer graphene could be produced by oxygen-activated chemical vapor deposition. Later in the same year, a Korean group reported the synthesis of wafer-scale single-crystal AB-stacked bilayer graphene. == Tunable bandgap == Like monolayer graphene, bilayer graphene has a zero bandgap and thus behaves like a semimetal. In 2007, researchers predicted that a bandgap could be introduced if an electric displacement field were applied to the two layers: a so-called tunable band gap. An experimental demonstration of a tunable bandgap in bilayer graphene came in 2009. In 2015 researchers observed 1D ballistic electron conducting channels at bilayer graphene domain walls. Another group showed that the band gap of bilayer films on silicon carbide could be controlled by selectively adjusting the carrier concentration. == Emergent complex states == In 2014 researchers described the emergence of complex electronic states in bilayer graphene, notably the fractional quantum Hall effect, and showed that this could be tuned by an electric field. In 2017 the observation of an even-denominator fractional quantum Hall state was reported in bilayer graphene. == Excitonic condensation == Bilayer graphene has shown the potential to realize a Bose–Einstein condensate of excitons. Electrons and holes are fermions, but when they form an exciton, they become bosons, allowing Bose–Einstein condensation to occur. Exciton condensates in bilayer systems have been shown theoretically to carry a supercurrent. == Superconductivity in twisted bilayer graphene == Pablo Jarillo-Herrero of MIT and colleagues from Harvard and the National Institute for Materials Science, Tsukuba, Japan, have reported the discovery of superconductivity in bilayer graphene with a twist angle of 1.1° between the two layers. The discovery was announced in Nature in March 2018. The findings confirmed predictions made in 2011 by Allan MacDonald and Rafi Bistritzer that the amount of energy a free electron would require to tunnel between two graphene sheets radically changes at this angle. The graphene bilayer was prepared from exfoliated monolayers of graphene, with the second layer being manually rotated to a set angle with respect to the first layer. 
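The 1.1° figure can be given a rough geometric feel: a relative twist θ between the layers produces a moiré superlattice of period λ ≈ a/(2 sin(θ/2)), where a ≈ 0.246 nm is graphene's lattice constant. A back-of-the-envelope sketch (our own arithmetic, not from the cited papers):

```python
import math

a_nm = 0.246                                  # graphene lattice constant (nm)
theta = math.radians(1.1)                     # the reported magic angle
period_nm = a_nm / (2.0 * math.sin(theta / 2.0))
print(f"moire period ~ {period_nm:.1f} nm")   # ~12.8 nm, i.e. roughly 50 lattice constants
```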
A critical temperature of T c = 1.7 K {\displaystyle T_{c}=1.7\ \mathrm {K} } was observed for these twisted specimens in the original paper (with newer papers reporting slightly higher temperatures). Jarillo-Herrero has suggested that it may be possible to “imagine making a superconducting transistor out of graphene, which you can switch on and off, from superconducting to insulating. That opens many possibilities for quantum devices.” The study of such lattices has been dubbed "twistronics" and was inspired by earlier theoretical treatments of layered assemblies of graphene. == Field effect transistors == Bilayer graphene can be used to construct field effect transistors or tunneling field effect transistors, exploiting the small energy gap. However, the energy gap is smaller than 250 meV and therefore requires the use of low operating voltage (< 250 mV), which is too small to obtain reasonable performance for a field effect transistor, but is well suited to the operation of tunnel field effect transistors, which, according to a 2009 theoretical paper, can operate with an operating voltage of only 100 mV. In 2016 researchers proposed the use of bilayer graphene to increase the output voltage of tunnel transistors (TT). They operate at a lower operating voltage range (150 mV) than silicon transistors (500 mV). Bilayer graphene's energy band is unlike that of most semiconductors in that the electrons around the band edges form a (high-density) van Hove singularity. This supplies sufficient electrons to increase current flow across the energy barrier. Bilayer graphene transistors use "electrical" rather than "chemical" doping. == Ultrafast lithium diffusion == In 2017 an international group of researchers showed that bilayer graphene could act as a single-phase mixed conductor which exhibited Li diffusion faster than in graphite by an order of magnitude. In combination with the fast electronic conduction of graphene sheets, this system offers both ionic and electronic conductivity within the same single-phase solid material. This has important implications for energy storage devices such as lithium ion batteries. == Ultrahard carbon from epitaxial bilayer graphene == Researchers from the City University of New York have shown that sheets of bilayer graphene on silicon carbide temporarily become harder than diamond upon impact with the tip of an atomic force microscope. This was attributed to a graphite-diamond transition, and the behavior appeared to be unique to bilayer graphene. This could have applications in personal armor. == Porous nanoflakes == Hybridization processes change the intrinsic properties of graphene and/or induce poor interfaces. In 2014 a general route to obtain unstacked graphene via facile, templated, catalytic growth was announced. The resulting material has a specific surface area of 1628 m2/g, is electrically conductive and has a mesoporous structure. The material is made with a mesoporous nanoflake template. Graphene layers are deposited onto the template. The carbon atoms accumulate in the mesopores, forming protuberances that act as spacers to prevent stacking. The protuberance density is approximately 5.8×1014 m−2. Graphene is deposited on both sides of the flakes. During CVD synthesis the protuberances produce intrinsically unstacked double-layer graphene after the removal of the nanoflakes. The presence of such protuberances on the surface can weaken the π-π interactions between graphene layers and thus reduce stacking. 
The bilayer graphene shows a specific surface area of 1628 m2/g, a pore size ranging from 2 to 7 nm and a total pore volume of 2.0 cm3/g. Using bilayer graphene as cathode material for a lithium sulfur battery yielded reversible capacities of 1034 and 734 mA h/g at discharge rates of 5 and 10 C, respectively. After 1000 cycles, reversible capacities of some 530 and 380 mA h/g were retained at 5 and 10 C, with coulombic efficiencies of 96% and 98%, respectively. An electrical conductivity of 438 S/cm was obtained. Even after the infiltration of sulfur, an electrical conductivity of 107 S/cm was retained. The graphene's unique porous structure allowed the effective storage of sulfur in the interlayer space, which gives rise to an efficient connection between the sulfur and graphene and prevents the diffusion of polysulfides into the electrolyte. == Characterization == Hyperspectral global Raman imaging is an accurate and rapid technique to spatially characterize product quality. The vibrational modes of a system characterize it, providing information on stoichiometry, composition, morphology, stress and number of layers. Monitoring the intensity of graphene's G and D peaks (around 1580 and 1360 cm−1) gives direct information on the number of layers of the sample. It has been shown that the two graphene layers can withstand significant strain or doping mismatch, which ultimately should lead to their exfoliation. Quantitative determination of bilayer graphene's structural parameters, such as surface roughness, inter- and intralayer spacings, stacking order, and interlayer twist, is obtainable using 3D electron diffraction. == References ==
Wikipedia/Bilayer_graphene
Bean's critical state model, introduced by C. P. Bean in 1962, gives a macroscopic explanation of the irreversible magnetization behavior (hysteresis) of hard Type-II superconductors. == Assumptions == Hard superconductors often exhibit hysteresis in magnetization measurements. C. P. Bean postulated for the Shubnikov phase an extraordinary shielding process due to the microscopic structure of the materials. He assumed lossless transport with a critical current density Jc(B) (Jc(B→0) = const. and Jc(B→∞) = 0). An external magnetic field is shielded in the Meissner phase (H < Hc1) in the same way as in a soft superconductor. In the Shubnikov phase (Hc1 < H < Hc2), the critical current flows below the surface within the depth necessary to reduce the field in the interior of the superconductor to Hc1. == Explanation of the irreversible magnetization == To understand the origin of the irreversible magnetization, assume a hollow cylinder in an external magnetic field parallel to the cylinder axis. In the Meissner phase, a screening current flows within the London penetration depth. Above Hc1, vortices start to penetrate the superconductor. These vortices are pinned at the surface (Bean–Livingston barrier). In the region below the surface that is penetrated by the vortices, a current flows with density Jc. At low fields (H < H0), the vortices do not reach the inner surface of the hollow cylinder and the interior stays field-free. For H > H0, the vortices penetrate the whole cylinder and a magnetic field appears in the interior, which then increases with increasing external field. Let us now consider what happens if the external field is then decreased: due to induction, an opposed critical current is generated at the outer surface of the cylinder, keeping the magnetic field inside constant for H0 < H < H1. For H > H1, the opposed critical current penetrates the whole cylinder and the inner magnetic field starts to decrease with decreasing external field. When the external field vanishes, a remanent internal magnetic field remains (comparable to the remanent magnetization of a ferromagnet). With an opposed external field H0, the internal magnetic field finally reaches 0 T (H0 is analogous to the coercive field of a ferromagnet). == Extensions == Bean assumed a constant critical current density, meaning that H << Hc2. Kim et al. extended the model by assuming 1/Jc(H) proportional to H, yielding excellent agreement between theory and measurements on Nb3Sn tubes. Different geometries have to be considered, as the irreversible magnetization depends on the sample geometry. == References ==
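As a concrete illustration of the critical state, the following minimal sketch (our own, using a semi-infinite slab rather than the hollow cylinder discussed above, with invented numbers) computes the Bean field profile: wherever flux has penetrated, current flows at the critical density Jc, so the internal field falls off linearly with slope μ0Jc:

```python
import numpy as np

mu0 = 4e-7 * np.pi     # vacuum permeability (H/m)
Jc = 1e8               # critical current density (A/m^2), assumed field-independent
Ba = 0.05              # applied field at the surface (T)

x = np.linspace(0.0, 1.0e-3, 11)            # depth below the surface (m)
B = np.clip(Ba - mu0 * Jc * x, 0.0, None)   # linear decay down to zero

x_front = Ba / (mu0 * Jc)                   # position of the flux front
print(f"flux front at {x_front * 1e3:.2f} mm below the surface")
for xi, Bi in zip(x, B):
    print(f"x = {xi * 1e3:.1f} mm   B = {Bi * 1e3:.2f} mT")
```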
Wikipedia/Bean's_critical_state_model
The Mattis–Bardeen theory is a theory that describes the electrodynamic properties of superconductivity. It is commonly applied in optical spectroscopy of superconductors. It was derived to explain the anomalous skin effect of superconductors. Originally, the anomalous skin effect referred to the non-classical response of metals to a high-frequency electromagnetic field at low temperature, a problem solved by Robert G. Chambers. At sufficiently low temperatures and high frequencies, the classically predicted skin depth (normal skin effect) fails because of the enhancement of the mean free path of the electrons in a good metal. Not only normal metals but also superconductors show the anomalous skin effect, which has to be treated within the theory of Bardeen, Cooper and Schrieffer (BCS). == Response to an electromagnetic wave == The clearest feature of BCS theory is the pairing of two electrons (Cooper pairs). After the transition to the superconducting state, the superconducting gap 2Δ in the single-particle density of states arises, and the dispersion relation can be described like that of a semiconductor with a band gap 2Δ around the Fermi energy. From Fermi's golden rule, the transition probabilities can be written as α s = ∫ | M s | 2 N s ( E ) N s ( E + ℏ ω ) × [ f ( E ) − f ( E + ℏ ω ) ] d E {\displaystyle \alpha _{s}=\int \left|M_{s}\right|^{2}N_{s}(E)N_{s}(E+\hbar \omega )\times [f(E)-f(E+\hbar \omega )]\,dE} where N s {\displaystyle N_{s}} is the density of states and M s {\displaystyle M_{s}} is the matrix element of an interaction Hamiltonian H 1 {\displaystyle H_{1}} , where H 1 = ∑ k σ , k ′ σ ′ B k ′ σ ′ , k σ c k ′ σ ′ ∗ c k σ {\displaystyle H_{1}=\sum \limits _{k\sigma ,k'\sigma '}B_{k'\sigma ',k\sigma }c_{k'\sigma '}^{*}c_{k\sigma }} In the superconducting state, the terms of the Hamiltonian are not independent, because the superconducting state consists of a phase-coherent superposition of occupied one-electron states, whereas they are independent in the normal state. Therefore, interference terms appear in the absolute square of the matrix element. As a result of this coherence, the matrix element M s {\displaystyle M_{s}} is replaced by the single-electron matrix element M {\displaystyle M} multiplied by the coherence factors F(Δ,E,E′). F ( Δ , E , E ′ ) = 1 2 ( 1 ± Δ 2 E E ′ ) {\displaystyle F(\Delta ,E,E')={\frac {1}{2}}\left(1\pm {\frac {\Delta ^{2}}{EE'}}\right)} Then, the transition rate is α s = ∫ | M | 2 F ( Δ , E , E + ℏ ω ) N s ( E ) N s ( E + ℏ ω ) × [ f ( E ) − f ( E + ℏ ω ) ] d E {\displaystyle \alpha _{s}=\int \left|M\right|^{2}F(\Delta ,E,E+\hbar \omega )N_{s}(E)N_{s}(E+\hbar \omega )\times [f(E)-f(E+\hbar \omega )]\,dE} The transition rate can be translated into the real part of the complex conductivity, σ 1 {\displaystyle \sigma _{1}} , because the electrodynamic energy absorption is proportional to σ 1 E 2 {\displaystyle \sigma _{1}E^{2}} . α s α n = σ 1 s σ n {\displaystyle {\frac {\alpha _{s}}{\alpha _{n}}}={\frac {\sigma _{1s}}{\sigma _{n}}}} At finite temperature, the response of the electrons to the incident electromagnetic wave can be regarded as having two parts, from the "superconducting" and the "normal" electrons. The first corresponds to the superconducting ground state and the second to the thermally excited electrons. This picture is the so-called "two-fluid" model. 
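The coherence factors can be made concrete with a small numerical sketch (our own, in units where Δ = 1; the '+' sign branch is the one whose combination with the densities of states reproduces the numerator E(E + ħω) + Δ² in the conductivity formula of the next paragraph):

```python
import numpy as np

delta = 1.0
hw = 0.5 * delta                       # photon energy (assumed), so E' = E + hw
E = np.linspace(1.05, 3.0, 5) * delta  # quasiparticle energies above the gap
Ep = E + hw

F_plus = 0.5 * (1.0 + delta**2 / (E * Ep))     # '+' branch of F(Delta, E, E')
N_s = np.abs(E) / np.sqrt(E**2 - delta**2)     # normalized BCS DOS at E
N_sp = np.abs(Ep) / np.sqrt(Ep**2 - delta**2)  # ... and at E' = E + hw

# 2*F*N_s(E)*N_s(E') = [E*E' + Delta^2] / (sqrt(E^2-Delta^2) sqrt(E'^2-Delta^2)),
# the weight appearing in the Mattis-Bardeen conductivity integral below.
weight = 2.0 * F_plus * N_s * N_sp
for e, f, w in zip(E, F_plus, weight):
    print(f"E = {e:.2f}   F = {f:.3f}   weight = {w:.3f}")
```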
Taking both contributions into account, the ratio of the optical conductivity to that of the normal state is α s α n = 2 ℏ ω ∫ Δ ∞ [ E ( E + ℏ ω ) + Δ 2 ] [ f ( E ) − f ( E + ℏ ω ) ] ( E 2 − Δ 2 ) 1 / 2 [ ( E + ℏ ω ) 2 − Δ 2 ] 1 / 2 d E − Θ ( ℏ ω − 2 Δ ) 1 ℏ ω ∫ Δ − ℏ ω − Δ [ E ( E + ℏ ω ) + Δ 2 ] [ 1 − 2 f ( E + ℏ ω ) ] ( E 2 − Δ 2 ) 1 / 2 [ ( E + ℏ ω ) 2 − Δ 2 ] 1 / 2 d E {\displaystyle {\frac {\alpha _{s}}{\alpha _{n}}}={\frac {2}{\hbar \omega }}\int _{\Delta }^{\infty }{\frac {\left[E(E+\hbar \omega )+\Delta ^{2}\right][f(E)-f(E+\hbar \omega )]}{(E^{2}-\Delta ^{2})^{1/2}[(E+\hbar \omega )^{2}-\Delta ^{2}]^{1/2}}}\,dE-\Theta (\hbar \omega -2\Delta ){\frac {1}{\hbar \omega }}\int _{\Delta -\hbar \omega }^{-\Delta }{\frac {\left[E(E+\hbar \omega )+\Delta ^{2}\right][1-2f(E+\hbar \omega )]}{(E^{2}-\Delta ^{2})^{1/2}[(E+\hbar \omega )^{2}-\Delta ^{2}]^{1/2}}}\,dE} where Θ ( x ) {\displaystyle \Theta (x)} is the Heaviside theta function. The first term of the equation above is the contribution of the "normal" electrons, and the second term is due to the superconducting electrons. == Use in optical study == The calculated optical conductivity seems to break the sum rule that the spectral weight should be conserved through the transition. This result implies that the missing area of the spectral weight is concentrated in the zero-frequency limit, corresponding to a Dirac delta function (which carries the conduction of the superconducting condensate, i.e. the Cooper pairs). Much experimental data supports this prediction. This account of the electrodynamics of superconductivity is the starting point of optical studies. Because no superconducting Tc exceeds roughly 200 K and the superconducting gap 2Δ is about 3.5 kBTc, microwave or far-infrared spectroscopy is the suitable technique for applying this theory. With the Mattis–Bardeen theory, one can derive detailed properties of the superconducting gap, such as its symmetry. == References == == Further reading == Michael Tinkham, Introduction to Superconductivity, second edition. Shu-Ang Zhou, Electrodynamics of Solids and Microwave Superconductivity.
Wikipedia/Mattis–Bardeen_theory
Technological applications of superconductivity include: the production of sensitive magnetometers based on SQUIDs (superconducting quantum interference devices); fast digital circuits (including those based on Josephson junctions and rapid single flux quantum technology); powerful superconducting electromagnets used in maglev trains, magnetic resonance imaging (MRI) and nuclear magnetic resonance (NMR) machines, magnetic confinement fusion reactors (e.g. tokamaks), and the beam-steering and focusing magnets used in particle accelerators; low-loss power cables; RF and microwave filters (e.g., for mobile phone base stations, as well as military ultra-sensitive/selective receivers); fast fault current limiters; high-sensitivity particle detectors, including the transition edge sensor, the superconducting bolometer, the superconducting tunnel junction detector, the kinetic inductance detector, and the superconducting nanowire single-photon detector; railgun and coilgun magnets; and electric motors and generators. == Low-temperature superconductivity == === Magnetic resonance imaging and nuclear magnetic resonance === The biggest application for superconductivity is in producing the large-volume, stable, and high-intensity magnetic fields required for magnetic resonance imaging (MRI) and nuclear magnetic resonance (NMR). This represents a multi-billion-US$ market for companies such as Oxford Instruments and Siemens. The magnets typically use low-temperature superconductors (LTS) because high-temperature superconductors are not yet cheap enough to cost-effectively deliver the high, stable, and large-volume fields required, notwithstanding the need to cool LTS instruments to liquid helium temperatures. Superconductors are also used in high field scientific magnets. === Particle accelerators and magnetic fusion devices === Particle accelerators such as the Large Hadron Collider can include many high field electromagnets requiring large quantities of LTS. Constructing the LHC magnets required more than 28 percent of the world's niobium-titanium wire production for five years, with large quantities of NbTi also used in the magnets for the LHC's huge experiment detectors. Conventional fusion machines (JET, ST-40, NSTX-U and MAST) use blocks of copper. This limits their fields to 1–3 tesla. Several superconducting fusion machines are planned for the 2024–2026 timeframe. These include ITER, ARC and the next version of ST-40. The addition of high-temperature superconductors should yield an order-of-magnitude improvement in fields (10–13 tesla) for a new generation of tokamaks. == High-temperature superconductivity == Commercial applications of high-temperature superconductors (HTS) have so far been limited by other properties of the materials discovered to date. HTS require only liquid nitrogen, not liquid helium, to cool to superconducting temperatures. However, currently known high-temperature superconductors are brittle ceramics that are expensive to manufacture and not easily formed into wires or other useful shapes. Therefore, the applications for HTS have been where it has some other intrinsic advantage, e.g. 
in: low thermal loss current leads for LTS devices (low thermal conductivity); RF and microwave filters (low resistance to RF); and increasingly in specialist scientific magnets, particularly where size and electricity consumption are critical (while HTS wire is much more expensive than LTS in these applications, this can be offset by the relative cost and convenience of cooling), where the ability to ramp field is desired (the higher and wider range of HTS's operating temperature means faster changes in field can be managed), or where cryogen-free operation is desired (LTS generally requires liquid helium, which is becoming more scarce and expensive). === HTS-based systems === HTS has application in scientific and industrial magnets, including use in NMR and MRI systems. Commercial systems are now available in each category. Also, one intrinsic attribute of HTS is that it can withstand much higher magnetic fields than LTS, so HTS at liquid helium temperatures is being explored for very high-field inserts inside LTS magnets. Promising future industrial and commercial HTS applications include induction heaters, transformers, fault current limiters, power storage, motors and generators, fusion reactors (see ITER) and magnetic levitation devices. Early applications will be where the benefit of smaller size, lower weight or the ability to rapidly switch current (fault current limiters) outweighs the added cost. In the longer term, as conductor prices fall, HTS systems should be competitive in a much wider range of applications on energy efficiency grounds alone. (For a relatively technical and US-centric view of the state of play of HTS technology in power systems and the development status of Generation 2 conductor see Superconductivity for Electric Systems 2008 US DOE Annual Peer Review.) === Electric power transmission === The Holbrook Superconductor Project, also known as the LIPA project, was a project to design and build the world's first production superconducting transmission power cable. The cable was commissioned in late June 2008 by the Long Island Power Authority (LIPA) and was in operation for two years. The suburban Long Island electrical substation is fed by a 2,000 foot (600 m) underground cable system which consists of about 99 miles (159 km) of high-temperature superconductor wire manufactured by American Superconductor chilled to −371 °F (−223.9 °C; 49.3 K) with liquid nitrogen, greatly reducing the cost required to deliver additional power. In addition, the installation of the cable bypassed strict regulations for overhead power lines, and offered a solution for the public's concerns on overhead power lines. The Tres Amigas Project was proposed in 2009 as an electrical HVDC interconnector between the Eastern Interconnection, the Western Interconnection and Texas Interconnection. It was proposed to be a multi-mile, triangular pathway of superconducting electric cables, capable of transferring five gigawatts of power between the three U.S. power grids. The project lapsed in 2015 when the Eastern Interconnection withdrew from the project. Construction was never begun. Essen, Germany has the world's longest superconducting power cable in operation at 1 kilometer. It is a 10 kV liquid nitrogen cooled cable. The cable is smaller than an equivalent 110 kV regular cable and the lower voltage has the additional benefit of smaller transformers. 
In 2020, an aluminium plant in Voerde, Germany, announced plans to use superconductors for cables carrying 200 kA, citing lower volume and material demand as advantages. === Magnesium diboride === Magnesium diboride is a much cheaper superconductor than either BSCCO or YBCO in terms of cost per current-carrying capacity per length (cost/(kA·m)), in the same ballpark as LTS, and on this basis many manufactured wires are already cheaper than copper. Furthermore, MgB2 superconducts at temperatures higher than LTS (its critical temperature is 39 K, compared with less than 10 K for NbTi and 18.3 K for Nb3Sn), introducing the possibility of using it at 10–20 K in cryogen-free magnets or perhaps eventually in liquid hydrogen. However, MgB2 is limited in the magnetic field it can tolerate at these higher temperatures, so further research is required to demonstrate its competitiveness in higher field applications. === Trapped field magnets === Exposing superconducting materials to a brief magnetic field can trap the field for use in machines such as generators. In some applications they could replace traditional permanent magnets. == Notes ==
Wikipedia/Technological_applications_of_superconductivity
In analytical mechanics (particularly Lagrangian mechanics), generalized forces are conjugate to generalized coordinates. They are obtained from the applied forces Fi, i = 1, …, n, acting on a system that has its configuration defined in terms of generalized coordinates. In the formulation of virtual work, each generalized force is the coefficient of the variation of a generalized coordinate. == Virtual work == Generalized forces can be obtained from the computation of the virtual work, δW, of the applied forces.: 265  The virtual work of the forces, Fi, acting on the particles Pi, i = 1, ..., n, is given by δ W = ∑ i = 1 n F i ⋅ δ r i {\displaystyle \delta W=\sum _{i=1}^{n}\mathbf {F} _{i}\cdot \delta \mathbf {r} _{i}} where δri is the virtual displacement of the particle Pi. === Generalized coordinates === Let the position vectors of each of the particles, ri, be a function of the generalized coordinates, qj, j = 1, ..., m. Then the virtual displacements δri are given by δ r i = ∑ j = 1 m ∂ r i ∂ q j δ q j , i = 1 , … , n , {\displaystyle \delta \mathbf {r} _{i}=\sum _{j=1}^{m}{\frac {\partial \mathbf {r} _{i}}{\partial q_{j}}}\delta q_{j},\quad i=1,\ldots ,n,} where δqj is the virtual displacement of the generalized coordinate qj. The virtual work for the system of particles becomes δ W = F 1 ⋅ ∑ j = 1 m ∂ r 1 ∂ q j δ q j + ⋯ + F n ⋅ ∑ j = 1 m ∂ r n ∂ q j δ q j . {\displaystyle \delta W=\mathbf {F} _{1}\cdot \sum _{j=1}^{m}{\frac {\partial \mathbf {r} _{1}}{\partial q_{j}}}\delta q_{j}+\dots +\mathbf {F} _{n}\cdot \sum _{j=1}^{m}{\frac {\partial \mathbf {r} _{n}}{\partial q_{j}}}\delta q_{j}.} Collect the coefficients of δqj so that δ W = ∑ i = 1 n F i ⋅ ∂ r i ∂ q 1 δ q 1 + ⋯ + ∑ i = 1 n F i ⋅ ∂ r i ∂ q m δ q m . {\displaystyle \delta W=\sum _{i=1}^{n}\mathbf {F} _{i}\cdot {\frac {\partial \mathbf {r} _{i}}{\partial q_{1}}}\delta q_{1}+\dots +\sum _{i=1}^{n}\mathbf {F} _{i}\cdot {\frac {\partial \mathbf {r} _{i}}{\partial q_{m}}}\delta q_{m}.} === Generalized forces === The virtual work of a system of particles can be written in the form δ W = Q 1 δ q 1 + ⋯ + Q m δ q m , {\displaystyle \delta W=Q_{1}\delta q_{1}+\dots +Q_{m}\delta q_{m},} where Q j = ∑ i = 1 n F i ⋅ ∂ r i ∂ q j , j = 1 , … , m , {\displaystyle Q_{j}=\sum _{i=1}^{n}\mathbf {F} _{i}\cdot {\frac {\partial \mathbf {r} _{i}}{\partial q_{j}}},\quad j=1,\ldots ,m,} are called the generalized forces associated with the generalized coordinates qj, j = 1, ..., m. === Velocity formulation === In the application of the principle of virtual work it is often convenient to obtain virtual displacements from the velocities of the system. For the n particle system, let the velocity of each particle Pi be Vi, then the virtual displacement δri can also be written in the form δ r i = ∑ j = 1 m ∂ V i ∂ q ˙ j δ q j , i = 1 , … , n . {\displaystyle \delta \mathbf {r} _{i}=\sum _{j=1}^{m}{\frac {\partial \mathbf {V} _{i}}{\partial {\dot {q}}_{j}}}\delta q_{j},\quad i=1,\ldots ,n.} This means that the generalized force, Qj, can also be determined as Q j = ∑ i = 1 n F i ⋅ ∂ V i ∂ q ˙ j , j = 1 , … , m . {\displaystyle Q_{j}=\sum _{i=1}^{n}\mathbf {F} _{i}\cdot {\frac {\partial \mathbf {V} _{i}}{\partial {\dot {q}}_{j}}},\quad j=1,\ldots ,m.} == D'Alembert's principle == D'Alembert formulated the dynamics of a particle as the equilibrium of the applied forces with an inertia force (apparent force), called D'Alembert's principle. 
The inertia force of a particle, Pi, of mass mi is F i ∗ = − m i A i , i = 1 , … , n , {\displaystyle \mathbf {F} _{i}^{*}=-m_{i}\mathbf {A} _{i},\quad i=1,\ldots ,n,} where Ai is the acceleration of the particle. If the configuration of the particle system depends on the generalized coordinates qj, j = 1, ..., m, then the generalized inertia force is given by Q j ∗ = ∑ i = 1 n F i ∗ ⋅ ∂ V i ∂ q ˙ j , j = 1 , … , m . {\displaystyle Q_{j}^{*}=\sum _{i=1}^{n}\mathbf {F} _{i}^{*}\cdot {\frac {\partial \mathbf {V} _{i}}{\partial {\dot {q}}_{j}}},\quad j=1,\ldots ,m.} D'Alembert's form of the principle of virtual work yields δ W = ( Q 1 + Q 1 ∗ ) δ q 1 + ⋯ + ( Q m + Q m ∗ ) δ q m . {\displaystyle \delta W=(Q_{1}+Q_{1}^{*})\delta q_{1}+\dots +(Q_{m}+Q_{m}^{*})\delta q_{m}.} == See also == Lagrangian mechanics Generalized coordinates Degrees of freedom (physics and chemistry) Virtual work == References ==
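As a worked check of these definitions (our own example, not from the article), consider a planar pendulum: a mass m on a massless rod of length L, with generalized coordinate θ and applied force F = (0, −mg). The sketch below computes the generalized force Qθ and the generalized inertia force Q*θ symbolically, and recovers the pendulum equation of motion from Q + Q* = 0:

```python
import sympy as sp

m, g, L, t = sp.symbols('m g L t', positive=True)
theta = sp.Function('theta')(t)

r = sp.Matrix([L * sp.sin(theta), -L * sp.cos(theta)])   # position of the mass
F = sp.Matrix([0, -m * g])                               # applied force (gravity)

Q = sp.simplify(F.dot(r.diff(theta)))                    # Q_theta = F . dr/dtheta
print(Q)                                                 # -L*g*m*sin(theta(t))

# D'Alembert: the generalized inertia force is Q* = -m*A . dr/dtheta, and
# Q + Q* = 0 reproduces the pendulum equation of motion.
A = r.diff(t, 2)                                         # acceleration of the mass
Qstar = sp.simplify(-m * A.dot(r.diff(theta)))           # = -m*L**2*theta''
eom = sp.simplify(Q + Qstar)
print(sp.simplify(eom / (-m * L**2)))                    # g*sin(theta)/L + theta''
```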
Wikipedia/Generalized_forces
In physics, and more specifically in Hamiltonian mechanics, a generating function is, loosely, a function whose partial derivatives generate the differential equations that determine a system's dynamics. Common examples are the partition function of statistical mechanics, the Hamiltonian, and the function which acts as a bridge between two sets of canonical variables when performing a canonical transformation. == In canonical transformations == There are four basic generating functions, each depending on one of the old canonical variables and one of the new ones: F 1 ( q , Q ) {\displaystyle F_{1}(q,Q)} , with p = ∂ F 1 / ∂ q {\displaystyle p=\partial F_{1}/\partial q} and P = − ∂ F 1 / ∂ Q {\displaystyle P=-\partial F_{1}/\partial Q} ; F 2 ( q , P ) {\displaystyle F_{2}(q,P)} , with p = ∂ F 2 / ∂ q {\displaystyle p=\partial F_{2}/\partial q} and Q = ∂ F 2 / ∂ P {\displaystyle Q=\partial F_{2}/\partial P} ; F 3 ( p , Q ) {\displaystyle F_{3}(p,Q)} , with q = − ∂ F 3 / ∂ p {\displaystyle q=-\partial F_{3}/\partial p} and P = − ∂ F 3 / ∂ Q {\displaystyle P=-\partial F_{3}/\partial Q} ; and F 4 ( p , P ) {\displaystyle F_{4}(p,P)} , with q = − ∂ F 4 / ∂ p {\displaystyle q=-\partial F_{4}/\partial p} and Q = ∂ F 4 / ∂ P {\displaystyle Q=\partial F_{4}/\partial P} . == Example == Sometimes a given Hamiltonian can be turned into one that looks like the harmonic oscillator Hamiltonian, which is H = a P 2 + b Q 2 . {\displaystyle H=aP^{2}+bQ^{2}.} For example, with the Hamiltonian H = 1 2 q 2 + p 2 q 4 2 , {\displaystyle H={\frac {1}{2q^{2}}}+{\frac {p^{2}q^{4}}{2}},} where p is the generalized momentum and q is the generalized coordinate, a good canonical transformation to choose would be P = p q 2 , Q = − 1 q . {\displaystyle P=pq^{2},\qquad Q={\frac {-1}{q}}.} (1) This turns the Hamiltonian into H = Q 2 2 + P 2 2 , {\displaystyle H={\frac {Q^{2}}{2}}+{\frac {P^{2}}{2}},} which is in the form of the harmonic oscillator Hamiltonian. The generating function F for this transformation is of the third kind, F = F 3 ( p , Q ) . {\displaystyle F=F_{3}(p,Q).} To find F explicitly, use the equation for its derivative from the list above, P = − ∂ F 3 ∂ Q , {\displaystyle P=-{\frac {\partial F_{3}}{\partial Q}},} and substitute the expression for P from equation (1), expressed in terms of p and Q: p Q 2 = − ∂ F 3 ∂ Q {\displaystyle {\frac {p}{Q^{2}}}=-{\frac {\partial F_{3}}{\partial Q}}} Integrating this with respect to Q results in the generating function of the transformation given by equation (1): F 3 ( p , Q ) = p Q {\displaystyle F_{3}(p,Q)={\frac {p}{Q}}} To confirm that this is the correct generating function, verify that it matches (1): q = − ∂ F 3 ∂ p = − 1 Q {\displaystyle q=-{\frac {\partial F_{3}}{\partial p}}={\frac {-1}{Q}}} == See also == Hamilton–Jacobi equation Poisson bracket == References ==
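The worked example can be checked symbolically. The sketch below (our own verification, not from the article) confirms that the transformation (1) is canonical ({Q, P} = 1), that it maps the given Hamiltonian to the harmonic-oscillator form, and that F3(p, Q) = p/Q reproduces it:

```python
import sympy as sp

q, p = sp.symbols('q p')

Q = -1 / q
P = p * q**2

# The Poisson bracket {Q, P} must equal 1 for a canonical transformation.
PB = sp.simplify(sp.diff(Q, q) * sp.diff(P, p) - sp.diff(Q, p) * sp.diff(P, q))
print(PB)                                # 1

H_old = 1 / (2 * q**2) + p**2 * q**4 / 2
H_new = (Q**2 + P**2) / 2
print(sp.simplify(H_old - H_new))        # 0: the Hamiltonians agree

# The generating function F3(p, Q) = p/Q reproduces the transformation.
Qs = sp.symbols('Q_s')                   # treat Q as an independent variable
F3 = p / Qs
print(sp.simplify(-sp.diff(F3, p)))      # q = -dF3/dp = -1/Q
print(sp.simplify(-sp.diff(F3, Qs)))     # P = -dF3/dQ = p/Q**2
```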
Wikipedia/Generating_function_(physics)
Lagrangian field theory is a formalism in classical field theory. It is the field-theoretic analogue of Lagrangian mechanics. Lagrangian mechanics is used to analyze the motion of a system of discrete particles each with a finite number of degrees of freedom. Lagrangian field theory applies to continua and fields, which have an infinite number of degrees of freedom. One motivation for the development of the Lagrangian formalism for fields, and more generally, for classical field theory, is to provide a clear mathematical foundation for quantum field theory, which is infamously beset by formal difficulties that make it unacceptable as a mathematical theory. The Lagrangians presented here are identical to their quantum equivalents, but, in treating the fields as classical fields, instead of being quantized, one can provide definitions and obtain solutions with properties compatible with the conventional formal approach to the mathematics of partial differential equations. This enables the formulation of solutions on spaces with well-characterized properties, such as Sobolev spaces. It enables various theorems to be provided, ranging from proofs of existence to the uniform convergence of formal series to the general settings of potential theory. In addition, insight and clarity are obtained by generalizations to Riemannian manifolds and fiber bundles, allowing the geometric structure to be clearly discerned and disentangled from the corresponding equations of motion. A clearer view of the geometric structure has in turn allowed highly abstract theorems from geometry to be used to gain insight, ranging from the Chern–Gauss–Bonnet theorem and the Riemann–Roch theorem to the Atiyah–Singer index theorem and Chern–Simons theory. == Overview == In field theory, the independent variable is replaced by an event in spacetime (x, y, z, t), or more generally still by a point s on a Riemannian manifold. The dependent variables are replaced by the value of a field at that point in spacetime φ ( x , y , z , t ) {\displaystyle \varphi (x,y,z,t)} so that the equations of motion are obtained by means of an action principle, written as: δ S δ φ i = 0 , {\displaystyle {\frac {\delta {\mathcal {S}}}{\delta \varphi _{i}}}=0,} where the action, S {\displaystyle {\mathcal {S}}} , is a functional of the dependent variables φ i ( s ) {\displaystyle \varphi _{i}(s)} , their derivatives and s itself S [ φ i ] = ∫ L ( φ i ( s ) , { ∂ φ i ( s ) ∂ s α } , { s α } ) d n s , {\displaystyle {\mathcal {S}}\left[\varphi _{i}\right]=\int {{\mathcal {L}}\left(\varphi _{i}(s),\left\{{\frac {\partial \varphi _{i}(s)}{\partial s^{\alpha }}}\right\},\{s^{\alpha }\}\right)\,\mathrm {d} ^{n}s},} where the brackets denote { ⋅ ∀ α } {\displaystyle \{\cdot ~\forall \alpha \}} ; and s = {sα} denotes the set of n independent variables of the system, including the time variable, and is indexed by α = 1, 2, 3, ..., n. The calligraphic typeface, L {\displaystyle {\mathcal {L}}} , is used to denote the density, and d n s {\displaystyle \mathrm {d} ^{n}s} is the volume form, i.e., the measure on the domain of the field function. In mathematical formulations, it is common to express the Lagrangian as a function on a fiber bundle, wherein the Euler–Lagrange equations can be interpreted as specifying the geodesics on the fiber bundle. 
Abraham and Marsden's textbook provided the first comprehensive description of classical mechanics in terms of modern geometrical ideas, i.e., in terms of tangent manifolds, symplectic manifolds and contact geometry. Bleecker's textbook provided a comprehensive presentation of field theories in physics in terms of gauge invariant fiber bundles. Such formulations were known or suspected long before. Jost continues with a geometric presentation, clarifying the relation between Hamiltonian and Lagrangian forms, describing spin manifolds from first principles, etc. Current research focuses on non-rigid affine structures, (sometimes called "quantum structures") wherein one replaces occurrences of vector spaces by tensor algebras. This research is motivated by the breakthrough understanding of quantum groups as affine Lie algebras (Lie groups are, in a sense "rigid", as they are determined by their Lie algebra. When reformulated on a tensor algebra, they become "floppy", having infinite degrees of freedom; see e.g., Virasoro algebra.) == Definitions == In Lagrangian field theory, the Lagrangian as a function of generalized coordinates is replaced by a Lagrangian density, a function of the fields in the system and their derivatives, and possibly the space and time coordinates themselves. In field theory, the independent variable t is replaced by an event in spacetime (x, y, z, t) or still more generally by a point s on a manifold. Often, a "Lagrangian density" is simply referred to as a "Lagrangian". === Scalar fields === For one scalar field φ {\displaystyle \varphi } , the Lagrangian density will take the form: L ( φ , ∇ φ , ∂ φ / ∂ t , x , t ) {\displaystyle {\mathcal {L}}(\varphi ,{\boldsymbol {\nabla }}\varphi ,\partial \varphi /\partial t,\mathbf {x} ,t)} For many scalar fields L ( φ 1 , ∇ φ 1 , ∂ φ 1 / ∂ t , … , φ n , ∇ φ n , ∂ φ n / ∂ t , … , x , t ) {\displaystyle {\mathcal {L}}(\varphi _{1},{\boldsymbol {\nabla }}\varphi _{1},\partial \varphi _{1}/\partial t,\ldots ,\varphi _{n},{\boldsymbol {\nabla }}\varphi _{n},\partial \varphi _{n}/\partial t,\ldots ,\mathbf {x} ,t)} In mathematical formulations, the scalar fields are understood to be coordinates on a fiber bundle, and the derivatives of the field are understood to be sections of the jet bundle. === Vector fields, tensor fields, spinor fields === The above can be generalized for vector fields, tensor fields, and spinor fields. In physics, fermions are described by spinor fields. Bosons are described by tensor fields, which include scalar and vector fields as special cases. For example, if there are m {\displaystyle m} real-valued scalar fields, φ 1 , … , φ m {\displaystyle \varphi _{1},\dots ,\varphi _{m}} , then the field manifold is R m {\displaystyle \mathbb {R} ^{m}} . If the field is a real vector field, then the field manifold is isomorphic to R n {\displaystyle \mathbb {R} ^{n}} . === Action === The time integral of the Lagrangian is called the action denoted by S. In field theory, a distinction is occasionally made between the Lagrangian L, of which the time integral is the action S = ∫ L d t , {\displaystyle {\mathcal {S}}=\int L\,\mathrm {d} t\,,} and the Lagrangian density L {\displaystyle {\mathcal {L}}} , which one integrates over all spacetime to get the action: S [ φ ] = ∫ L ( φ , ∇ φ , ∂ φ / ∂ t , x , t ) d 3 x d t . 
{\displaystyle {\mathcal {S}}[\varphi ]=\int {\mathcal {L}}(\varphi ,{\boldsymbol {\nabla }}\varphi ,\partial \varphi /\partial t,\mathbf {x} ,t)\,\mathrm {d} ^{3}\mathbf {x} \,\mathrm {d} t.} The spatial volume integral of the Lagrangian density is the Lagrangian; in 3D, L = ∫ L d 3 x . {\displaystyle L=\int {\mathcal {L}}\,\mathrm {d} ^{3}\mathbf {x} \,.} The action is often referred to as the "action functional", in that it is a function of the fields (and their derivatives). === Volume form === In the presence of gravity or when using general curvilinear coordinates, the Lagrangian density L {\displaystyle {\mathcal {L}}} will include a factor of g {\textstyle {\sqrt {g}}} . This ensures that the action is invariant under general coordinate transformations. In mathematical literature, spacetime is taken to be a Riemannian manifold M {\displaystyle M} and the integral then becomes the volume form S = ∫ M | g | d x 1 ∧ ⋯ ∧ d x m L {\displaystyle {\mathcal {S}}=\int _{M}{\sqrt {|g|}}dx^{1}\wedge \cdots \wedge dx^{m}{\mathcal {L}}} Here, the ∧ {\displaystyle \wedge } is the wedge product and | g | {\textstyle {\sqrt {|g|}}} is the square root of the determinant | g | {\displaystyle |g|} of the metric tensor g {\displaystyle g} on M {\displaystyle M} . For flat spacetime (e.g., Minkowski spacetime), the unit volume is one, i.e. | g | = 1 {\textstyle {\sqrt {|g|}}=1} and so it is commonly omitted, when discussing field theory in flat spacetime. Likewise, the use of the wedge-product symbols offers no additional insight over the ordinary concept of a volume in multivariate calculus, and so these are likewise dropped. Some older textbooks, e.g., Landau and Lifschitz write − g {\textstyle {\sqrt {-g}}} for the volume form, since the minus sign is appropriate for metric tensors with signature (+−−−) or (−+++) (since the determinant is negative, in either case). When discussing field theory on general Riemannian manifolds, the volume form is usually written in the abbreviated notation ∗ ( 1 ) {\displaystyle *(1)} where ∗ {\displaystyle *} is the Hodge star. That is, ∗ ( 1 ) = | g | d x 1 ∧ ⋯ ∧ d x m {\displaystyle *(1)={\sqrt {|g|}}dx^{1}\wedge \cdots \wedge dx^{m}} and so S = ∫ M ∗ ( 1 ) L {\displaystyle {\mathcal {S}}=\int _{M}*(1){\mathcal {L}}} Not infrequently, the notation above is considered to be entirely superfluous, and S = ∫ M L {\displaystyle {\mathcal {S}}=\int _{M}{\mathcal {L}}} is frequently seen. Do not be misled: the volume form is implicitly present in the integral above, even if it is not explicitly written. === Euler–Lagrange equations === The Euler–Lagrange equations describe the geodesic flow of the field φ {\displaystyle \varphi } as a function of time. Taking the variation with respect to φ {\displaystyle \varphi } , one obtains 0 = δ S δ φ = ∫ M ∗ ( 1 ) ( − ∂ μ ( ∂ L ∂ ( ∂ μ φ ) ) + ∂ L ∂ φ ) . {\displaystyle 0={\frac {\delta {\mathcal {S}}}{\delta \varphi }}=\int _{M}*(1)\left(-\partial _{\mu }\left({\frac {\partial {\mathcal {L}}}{\partial (\partial _{\mu }\varphi )}}\right)+{\frac {\partial {\mathcal {L}}}{\partial \varphi }}\right).} Solving, with respect to the boundary conditions, one obtains the Euler–Lagrange equations: ∂ L ∂ φ = ∂ μ ( ∂ L ∂ ( ∂ μ φ ) ) . 
{\displaystyle {\frac {\partial {\mathcal {L}}}{\partial \varphi }}=\partial _{\mu }\left({\frac {\partial {\mathcal {L}}}{\partial (\partial _{\mu }\varphi )}}\right).} === Lagrangian terms === Often the Lagrangian consists of a sum of polynomial terms, with the symmetries of the theory and the fields involved dictating the types of terms that are allowed. For example, in relativistic theories, each term must be Lorentz invariant, while in a theory with a gauge field, they must be gauge invariant. Terms that contain two fields and no derivatives are known as mass terms, with these giving mass to the fields. For example, a single real scalar field ϕ ( x ) {\displaystyle \phi (x)} of mass m {\displaystyle m} has a mass term given by L m = − 1 2 m 2 ϕ 2 ( x ) . {\displaystyle {\mathcal {L}}_{m}=-{\frac {1}{2}}m^{2}\phi ^{2}(x).} The other terms that have two fields, those with at least one derivative, are known as kinetic terms. They make fields dynamical, with most theories requiring a restriction of at most two derivatives in kinetic terms to preserve probabilities in a quantum theory. They are also usually positive-definite to ensure positive energies. For example, the kinetic term for a relativistic real scalar field is given by L k = 1 2 ∂ μ ϕ ∂ μ ϕ . {\displaystyle {\mathcal {L}}_{k}={\frac {1}{2}}\partial _{\mu }\phi \partial ^{\mu }\phi .} Fields with no kinetic terms can also be found, playing the role of auxiliary fields, background fields, or currents. Theories with only kinetic and mass terms form free field theories. Any term with more than two fields is known as an interaction term. The presence of these gives rise to interacting theories where particles can scatter off each other. The coefficients in front of these terms are known as coupling constants and they dictate the strength of the interaction. For example, a quartic interaction in a real scalar field theory is given by L i = − g 4 ! ϕ 4 , {\displaystyle {\mathcal {L}}_{i}=-{\frac {g}{4!}}\phi ^{4},} where g {\displaystyle g} is its coupling constant. This term gives rise to scattering processes whereby two scalar fields can scatter off each other. Interaction terms can have any number of derivatives, with each derivative providing a momentum dependence to the scattering term as can be seen by going into momentum space. Terms with only one field are known as tadpole terms since they give rise to tadpole Feynman diagrams.: 415  In theories with translational symmetries, such terms can usually be eliminated by redefining some of the fields through a shift. Constant terms, those with no fields, have no physical consequences in non-gravitational theories. In classical field theories, the equations of motion only depend on variations of the Lagrangian, so constant terms play no role. In quantum field theories they only provide an irrelevant overall multiplicative term to the partition function, so again play no role. Physically this is because in these theories there is no absolute energy scale as the potential energy can always be shifted by an arbitrary constant without altering the physics. However, in gravitational systems the constant terms are multiplied by the metric determinant, coupling them to the spacetime. They play the role of the cosmological constant, directly affecting the dynamics of the theory at both a classical and quantum level. Polynomial terms are often expressed with certain canonical normalizations, used to simplify the Feynman rules that are derived from them. 
Usually one divides by the product of the factorial of the multiplicity of the fields. For example, in a theory with two real scalar fields, a term of the form g ϕ n φ m {\displaystyle g\phi ^{n}\varphi ^{m}} term would be divided by n ! m ! {\displaystyle n!m!} . Particles and antiparticles are distinguished in this counting, so that a complex scalar field term of the form g ′ ϕ ¯ p ϕ p {\displaystyle g'{\bar {\phi }}^{p}\phi ^{p}} is divided by p ! p ! {\displaystyle p!p!} rather than ( 2 p ) ! {\displaystyle (2p)!} . == Examples == A large variety of physical systems have been formulated in terms of Lagrangians over fields. Below is a sampling of some of the most common ones found in physics textbooks on field theory. === Newtonian gravity === The Lagrangian density for Newtonian gravity is: L ( x , t ) = − 1 8 π G ( ∇ Φ ( x , t ) ) 2 − ρ ( x , t ) Φ ( x , t ) {\displaystyle {\mathcal {L}}(\mathbf {x} ,t)=-{1 \over 8\pi G}(\nabla \Phi (\mathbf {x} ,t))^{2}-\rho (\mathbf {x} ,t)\Phi (\mathbf {x} ,t)} where Φ is the gravitational potential, ρ is the mass density, and G in m3·kg−1·s−2 is the gravitational constant. The density L {\displaystyle {\mathcal {L}}} has units of J·m−3. Here the interaction term involves a continuous mass density ρ in kg·m−3. This is necessary because using a point source for a field would result in mathematical difficulties. This Lagrangian can be written in the form of L = T − V {\displaystyle {\mathcal {L}}=T-V} , with T = − ( ∇ Φ ) 2 / 8 π G {\displaystyle T=-(\nabla \Phi )^{2}/8\pi G} providing a kinetic term, and the interaction V = ρ Φ {\displaystyle V=\rho \Phi } the potential term. See also Nordström's theory of gravitation for how this could be modified to deal with changes over time. This form is reprised in the next example of a scalar field theory. The variation of the integral with respect to Φ is: δ L ( x , t ) = − ρ ( x , t ) δ Φ ( x , t ) − 2 8 π G ( ∇ Φ ( x , t ) ) ⋅ ( ∇ δ Φ ( x , t ) ) . {\displaystyle \delta {\mathcal {L}}(\mathbf {x} ,t)=-\rho (\mathbf {x} ,t)\delta \Phi (\mathbf {x} ,t)-{2 \over 8\pi G}(\nabla \Phi (\mathbf {x} ,t))\cdot (\nabla \delta \Phi (\mathbf {x} ,t)).} After integrating by parts, discarding the boundary term, and dividing out by δΦ the formula becomes: 0 = − ρ ( x , t ) + 1 4 π G ∇ ⋅ ∇ Φ ( x , t ) {\displaystyle 0=-\rho (\mathbf {x} ,t)+{\frac {1}{4\pi G}}\nabla \cdot \nabla \Phi (\mathbf {x} ,t)} which is equivalent to: 4 π G ρ ( x , t ) = ∇ 2 Φ ( x , t ) {\displaystyle 4\pi G\rho (\mathbf {x} ,t)=\nabla ^{2}\Phi (\mathbf {x} ,t)} which yields Gauss's law for gravity. === Scalar field theory === The Lagrangian for a scalar field moving in a potential V ( ϕ ) {\displaystyle V(\phi )} can be written as L = 1 2 ∂ μ ϕ ∂ μ ϕ − V ( ϕ ) = 1 2 ∂ μ ϕ ∂ μ ϕ − 1 2 m 2 ϕ 2 − ∑ n = 3 ∞ 1 n ! g n ϕ n {\displaystyle {\mathcal {L}}={\frac {1}{2}}\partial ^{\mu }\phi \partial _{\mu }\phi -V(\phi )={\frac {1}{2}}\partial ^{\mu }\phi \partial _{\mu }\phi -{\frac {1}{2}}m^{2}\phi ^{2}-\sum _{n=3}^{\infty }{\frac {1}{n!}}g_{n}\phi ^{n}} It is not at all an accident that the scalar theory resembles the undergraduate textbook Lagrangian L = T − V {\displaystyle L=T-V} for the kinetic term of a free point particle written as T = m v 2 / 2 {\displaystyle T=mv^{2}/2} . The scalar theory is the field-theory generalization of a particle moving in a potential. When V ( ϕ ) {\displaystyle V(\phi )} is the Mexican hat potential, the resulting fields are termed the Higgs fields. 
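As a quick check of this machinery (our own sketch, in 1+1 dimensions with the (+, −) signature), the Euler–Lagrange equation of the free scalar Lagrangian above is the Klein–Gordon equation, which a plane wave with the relativistic dispersion ω² = k² + m² satisfies identically:

```python
import sympy as sp

t, x, m, k = sp.symbols('t x m k', positive=True)
omega = sp.sqrt(k**2 + m**2)            # relativistic dispersion relation (c = 1)
phi = sp.cos(k * x - omega * t)         # plane-wave ansatz for the scalar field

# Klein-Gordon operator: the Euler-Lagrange equation of L = (1/2)(dphi)^2 - (1/2)m^2 phi^2
KG = sp.diff(phi, t, 2) - sp.diff(phi, x, 2) + m**2 * phi
print(sp.simplify(KG))                  # 0: the ansatz solves the field equation
```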
=== Sigma model Lagrangian === The sigma model describes the motion of a scalar point particle constrained to move on a Riemannian manifold, such as a circle or a sphere. It generalizes the case of scalar and vector fields, that is, fields constrained to move on a flat manifold. The Lagrangian is commonly written in one of three equivalent forms: L = 1 2 d ϕ ∧ ∗ d ϕ {\displaystyle {\mathcal {L}}={\frac {1}{2}}\mathrm {d} \phi \wedge {*\mathrm {d} \phi }} where the d {\displaystyle \mathrm {d} } is the differential. An equivalent expression is L = 1 2 ∑ i = 1 n ∑ j = 1 n g i j ( ϕ ) ∂ μ ϕ i ∂ μ ϕ j {\displaystyle {\mathcal {L}}={\frac {1}{2}}\sum _{i=1}^{n}\sum _{j=1}^{n}g_{ij}(\phi )\;\partial ^{\mu }\phi _{i}\partial _{\mu }\phi _{j}} with g i j {\displaystyle g_{ij}} the Riemannian metric on the manifold of the field; i.e. the fields ϕ i {\displaystyle \phi _{i}} are just local coordinates on the coordinate chart of the manifold. A third common form is L = 1 2 t r ( L μ L μ ) {\displaystyle {\mathcal {L}}={\frac {1}{2}}\mathrm {tr} \left(L_{\mu }L^{\mu }\right)} with L μ = U − 1 ∂ μ U {\displaystyle L_{\mu }=U^{-1}\partial _{\mu }U} and U ∈ S U ( N ) {\displaystyle U\in \mathrm {SU} (N)} , the Lie group SU(N). This group can be replaced by any Lie group, or, more generally, by a symmetric space. The trace is just the Killing form in hiding; the Killing form provides a quadratic form on the field manifold, and the Lagrangian is then just the pullback of this form. Alternately, the Lagrangian can also be seen as the pullback of the Maurer–Cartan form to the base spacetime. In general, sigma models exhibit topological soliton solutions. The most famous and well-studied of these is the Skyrmion, which serves as a model of the nucleon that has withstood the test of time. === Electromagnetism in special relativity === Consider a charged point particle interacting with the electromagnetic field. The interaction terms − q ϕ ( x ( t ) , t ) + q x ˙ ( t ) ⋅ A ( x ( t ) , t ) {\displaystyle -q\phi (\mathbf {x} (t),t)+q{\dot {\mathbf {x} }}(t)\cdot \mathbf {A} (\mathbf {x} (t),t)} are replaced by terms involving a continuous charge density ρ in A·s·m−3 and current density j {\displaystyle \mathbf {j} } in A·m−2. The resulting Lagrangian density for the electromagnetic field is: L ( x , t ) = − ρ ( x , t ) ϕ ( x , t ) + j ( x , t ) ⋅ A ( x , t ) + ϵ 0 2 E 2 ( x , t ) − 1 2 μ 0 B 2 ( x , t ) . {\displaystyle {\mathcal {L}}(\mathbf {x} ,t)=-\rho (\mathbf {x} ,t)\phi (\mathbf {x} ,t)+\mathbf {j} (\mathbf {x} ,t)\cdot \mathbf {A} (\mathbf {x} ,t)+{\epsilon _{0} \over 2}{E}^{2}(\mathbf {x} ,t)-{1 \over {2\mu _{0}}}{B}^{2}(\mathbf {x} ,t).} Varying this with respect to ϕ, we get 0 = − ρ ( x , t ) + ϵ 0 ∇ ⋅ E ( x , t ) {\displaystyle 0=-\rho (\mathbf {x} ,t)+\epsilon _{0}\nabla \cdot \mathbf {E} (\mathbf {x} ,t)} which yields Gauss' law. Varying instead with respect to A {\displaystyle \mathbf {A} } , we get 0 = j ( x , t ) + ϵ 0 E ˙ ( x , t ) − 1 μ 0 ∇ × B ( x , t ) {\displaystyle 0=\mathbf {j} (\mathbf {x} ,t)+\epsilon _{0}{\dot {\mathbf {E} }}(\mathbf {x} ,t)-{1 \over \mu _{0}}\nabla \times \mathbf {B} (\mathbf {x} ,t)} which yields Ampère's law. Using tensor notation, we can write all this more compactly. The term − ρ ϕ ( x , t ) + j ⋅ A {\displaystyle -\rho \phi (\mathbf {x} ,t)+\mathbf {j} \cdot \mathbf {A} } is actually the inner product of two four-vectors. We package the charge density into the current 4-vector and the potential into the potential 4-vector. 
These two new vectors are j μ = ( ρ , j ) and A μ = ( − ϕ , A ) {\displaystyle j^{\mu }=(\rho ,\mathbf {j} )\quad {\text{and}}\quad A_{\mu }=(-\phi ,\mathbf {A} )} We can then write the interaction term as − ρ ϕ + j ⋅ A = j μ A μ {\displaystyle -\rho \phi +\mathbf {j} \cdot \mathbf {A} =j^{\mu }A_{\mu }} Additionally, we can package the E and B fields into what is known as the electromagnetic tensor F μ ν {\displaystyle F_{\mu \nu }} . We define this tensor as F μ ν = ∂ μ A ν − ∂ ν A μ {\displaystyle F_{\mu \nu }=\partial _{\mu }A_{\nu }-\partial _{\nu }A_{\mu }} The term we are looking for turns out to be ϵ 0 2 E 2 − 1 2 μ 0 B 2 = − 1 4 μ 0 F μ ν F μ ν = − 1 4 μ 0 F μ ν F ρ σ η μ ρ η ν σ {\displaystyle {\epsilon _{0} \over 2}{E}^{2}-{1 \over {2\mu _{0}}}{B}^{2}=-{\frac {1}{4\mu _{0}}}F_{\mu \nu }F^{\mu \nu }=-{\frac {1}{4\mu _{0}}}F_{\mu \nu }F_{\rho \sigma }\eta ^{\mu \rho }\eta ^{\nu \sigma }} We have made use of the Minkowski metric to raise the indices on the EMF tensor. In this notation, Maxwell's equations are ∂ μ F μ ν = − μ 0 j ν and ϵ μ ν λ σ ∂ ν F λ σ = 0 {\displaystyle \partial _{\mu }F^{\mu \nu }=-\mu _{0}j^{\nu }\quad {\text{and}}\quad \epsilon ^{\mu \nu \lambda \sigma }\partial _{\nu }F_{\lambda \sigma }=0} where ε is the Levi-Civita tensor. So the Lagrange density for electromagnetism in special relativity written in terms of Lorentz vectors and tensors is L ( x ) = j μ ( x ) A μ ( x ) − 1 4 μ 0 F μ ν ( x ) F μ ν ( x ) {\displaystyle {\mathcal {L}}(x)=j^{\mu }(x)A_{\mu }(x)-{\frac {1}{4\mu _{0}}}F_{\mu \nu }(x)F^{\mu \nu }(x)} In this notation it is apparent that classical electromagnetism is a Lorentz-invariant theory. By the equivalence principle, it becomes simple to extend the notion of electromagnetism to curved spacetime. === Electromagnetism and the Yang–Mills equations === Using differential forms, the electromagnetic action S in vacuum on a (pseudo-) Riemannian manifold M {\displaystyle {\mathcal {M}}} can be written (using natural units, c = ε0 = 1) as S [ A ] = − ∫ M ( 1 2 F ∧ ∗ F − A ∧ ∗ J ) . {\displaystyle {\mathcal {S}}[\mathbf {A} ]=-\int _{\mathcal {M}}\left({\frac {1}{2}}\,\mathbf {F} \wedge \ast \mathbf {F} -\mathbf {A} \wedge \ast \mathbf {J} \right).} Here, A stands for the electromagnetic potential 1-form, J is the current 1-form, F is the field strength 2-form and the star denotes the Hodge star operator. This is exactly the same Lagrangian as in the section above, except that the treatment here is coordinate-free; expanding the integrand into a basis yields the identical, lengthy expression. Note that with forms, an additional integration measure is not necessary because forms have coordinate differentials built in. Variation of the action leads to d ∗ F = ∗ J . {\displaystyle \mathrm {d} {\ast }\mathbf {F} ={\ast }\mathbf {J} .} These are Maxwell's equations for the electromagnetic potential. Substituting F = dA immediately yields the equation for the fields, d F = 0 {\displaystyle \mathrm {d} \mathbf {F} =0} because F is an exact form. The A field can be understood to be the affine connection on a U(1)-fiber bundle. That is, classical electrodynamics, all of its effects and equations, can be completely understood in terms of a circle bundle over Minkowski spacetime. The Yang–Mills equations can be written in exactly the same form as above, by replacing the Lie group U(1) of electromagnetism by an arbitrary Lie group.
In the Standard Model, it is conventionally taken to be S U ( 3 ) × S U ( 2 ) × U ( 1 ) {\displaystyle \mathrm {SU} (3)\times \mathrm {SU} (2)\times \mathrm {U} (1)} , although the general case is also of interest. In all cases, there is no need for any quantization to be performed. Although the Yang–Mills equations are historically rooted in quantum field theory, the above equations are purely classical. === Chern–Simons functional === In the same vein as the above, one can consider the action in one dimension less, i.e. in a contact geometry setting. This gives the Chern–Simons functional. It is written as S [ A ] = ∫ M t r ( A ∧ d A + 2 3 A ∧ A ∧ A ) . {\displaystyle {\mathcal {S}}[\mathbf {A} ]=\int _{\mathcal {M}}\mathrm {tr} \left(\mathbf {A} \wedge d\mathbf {A} +{\frac {2}{3}}\mathbf {A} \wedge \mathbf {A} \wedge \mathbf {A} \right).} Chern–Simons theory has been deeply explored in physics, as a toy model for a broad range of geometric phenomena that one might expect to find in a grand unified theory. === Ginzburg–Landau Lagrangian === The Lagrangian density for Ginzburg–Landau theory combines the Lagrangian for the scalar field theory with the Lagrangian for the Yang–Mills action. It may be written as: L ( ψ , A ) = | F | 2 + | D ψ | 2 + 1 4 ( σ − | ψ | 2 ) 2 {\displaystyle {\mathcal {L}}(\psi ,A)=\vert F\vert ^{2}+\vert D\psi \vert ^{2}+{\frac {1}{4}}\left(\sigma -\vert \psi \vert ^{2}\right)^{2}} where ψ {\displaystyle \psi } is a section of a vector bundle with fiber C n {\displaystyle \mathbb {C} ^{n}} . The ψ {\displaystyle \psi } corresponds to the order parameter in a superconductor; equivalently, it corresponds to the Higgs field, after noting that the last term is the famous "sombrero" potential. The field A {\displaystyle A} is the (non-Abelian) gauge field, i.e. the Yang–Mills field and F {\displaystyle F} is its field-strength. The Euler–Lagrange equations for the Ginzburg–Landau functional are the Yang–Mills equations D ⋆ D ψ = 1 2 ( σ − | ψ | 2 ) ψ {\displaystyle D{\star }D\psi ={\frac {1}{2}}\left(\sigma -\vert \psi \vert ^{2}\right)\psi } and D ⋆ F = − Re ⁡ ⟨ D ψ , ψ ⟩ {\displaystyle D{\star }F=-\operatorname {Re} \langle D\psi ,\psi \rangle } where ⋆ {\displaystyle {\star }} is the Hodge star operator, defined via the fully antisymmetric tensor. These equations are closely related to the Yang–Mills–Higgs equations. Another closely related Lagrangian is found in Seiberg–Witten theory. === Dirac Lagrangian === The Lagrangian density for a Dirac field is: L = ψ ¯ ( i ℏ c ∂ / − m c 2 ) ψ {\displaystyle {\mathcal {L}}={\bar {\psi }}(i\hbar c{\partial }\!\!\!/\ -mc^{2})\psi } where ψ {\displaystyle \psi } is a Dirac spinor, ψ ¯ = ψ † γ 0 {\displaystyle {\bar {\psi }}=\psi ^{\dagger }\gamma ^{0}} is its Dirac adjoint, and ∂ / {\displaystyle {\partial }\!\!\!/} is Feynman slash notation for γ σ ∂ σ {\displaystyle \gamma ^{\sigma }\partial _{\sigma }} . There is no particular need to focus on Dirac spinors in the classical theory. The Weyl spinors provide a more general foundation; they can be constructed directly from the Clifford algebra of spacetime; the construction works in any number of dimensions, and the Dirac spinors appear as a special case. Weyl spinors have the additional advantage that they can be used in a vielbein for the metric on a Riemannian manifold; this enables the concept of a spin structure, which, roughly speaking, is a way of formulating spinors consistently in a curved spacetime.
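Since the slash notation just introduced rests entirely on the Clifford algebra of the gamma matrices, a short numpy check can make this concrete. The sketch below is an added illustration; the Dirac representation used is one standard choice of gamma matrices, not mandated by the text.

```python
# Verify the Clifford algebra {gamma^mu, gamma^nu} = 2 eta^{mu nu} I that
# underlies the slash notation (added illustration; Dirac representation).
import numpy as np

I2, Z2 = np.eye(2), np.zeros((2, 2))
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)

gamma = [np.block([[I2, Z2], [Z2, -I2]])]                       # gamma^0
gamma += [np.block([[Z2, s], [-s, Z2]]) for s in (sx, sy, sz)]  # gamma^i
eta = np.diag([1.0, -1.0, -1.0, -1.0])                          # Minkowski metric

for mu in range(4):
    for nu in range(4):
        anti = gamma[mu] @ gamma[nu] + gamma[nu] @ gamma[mu]
        assert np.allclose(anti, 2 * eta[mu, nu] * np.eye(4))
print("Clifford algebra verified")
```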
=== Quantum electrodynamic Lagrangian === The Lagrangian density for QED combines the Lagrangian for the Dirac field together with the Lagrangian for electrodynamics in a gauge-invariant way. It is: L Q E D = ψ ¯ ( i ℏ c D / − m c 2 ) ψ − 1 4 μ 0 F μ ν F μ ν {\displaystyle {\mathcal {L}}_{\mathrm {QED} }={\bar {\psi }}(i\hbar c{D}\!\!\!\!/\ -mc^{2})\psi -{1 \over 4\mu _{0}}F_{\mu \nu }F^{\mu \nu }} where F μ ν {\displaystyle F^{\mu \nu }} is the electromagnetic tensor, D is the gauge covariant derivative, and D / {\displaystyle {D}\!\!\!\!/} is Feynman notation for γ σ D σ {\displaystyle \gamma ^{\sigma }D_{\sigma }} with D σ = ∂ σ − i e A σ {\displaystyle D_{\sigma }=\partial _{\sigma }-ieA_{\sigma }} where A σ {\displaystyle A_{\sigma }} is the electromagnetic four-potential. Although the word "quantum" appears in the above, this is a historical artifact. The definition of the Dirac field requires no quantization whatsoever; it can be written as a purely classical field of anti-commuting Weyl spinors constructed from first principles from a Clifford algebra. The full gauge-invariant classical formulation is given in Bleecker. === Quantum chromodynamic Lagrangian === The Lagrangian density for quantum chromodynamics combines the Lagrangian for one or more massive Dirac spinors with the Lagrangian for the Yang–Mills action, which describes the dynamics of a gauge field; the combined Lagrangian is gauge invariant. It may be written as: L Q C D = ∑ n ψ ¯ n ( i ℏ c D / − m n c 2 ) ψ n − 1 4 G α μ ν G α μ ν {\displaystyle {\mathcal {L}}_{\mathrm {QCD} }=\sum _{n}{\bar {\psi }}_{n}\left(i\hbar c{D}\!\!\!\!/\ -m_{n}c^{2}\right)\psi _{n}-{1 \over 4}G^{\alpha }{}_{\mu \nu }G_{\alpha }{}^{\mu \nu }} where D is the QCD gauge covariant derivative, n = 1, 2, …, 6 counts the quark types, and G α μ ν {\displaystyle G^{\alpha }{}_{\mu \nu }\!} is the gluon field strength tensor. As for the electrodynamics case above, the appearance of the word "quantum" above only acknowledges its historical development. The Lagrangian and its gauge invariance can be formulated and treated in a purely classical fashion. === Einstein gravity === The Lagrange density for general relativity in the presence of matter fields is L GR = L EH + L matter = c 4 16 π G ( R − 2 Λ ) + L matter {\displaystyle {\mathcal {L}}_{\text{GR}}={\mathcal {L}}_{\text{EH}}+{\mathcal {L}}_{\text{matter}}={\frac {c^{4}}{16\pi G}}\left(R-2\Lambda \right)+{\mathcal {L}}_{\text{matter}}} where Λ {\displaystyle \Lambda } is the cosmological constant, R {\displaystyle R} is the curvature scalar, which is the Ricci tensor contracted with the metric tensor, and the Ricci tensor is the Riemann tensor contracted with a Kronecker delta. The integral of L EH {\displaystyle {\mathcal {L}}_{\text{EH}}} is known as the Einstein–Hilbert action. The Riemann tensor is the tidal force tensor, and is constructed out of Christoffel symbols and derivatives of Christoffel symbols, which define the metric connection on spacetime. The gravitational field itself was historically ascribed to the metric tensor; the modern view is that the connection is "more fundamental". This is due to the understanding that one can write connections with non-zero torsion. These alter the geometry without altering the metric one bit. As to the actual "direction in which gravity points" (e.g. on the surface of the Earth, it points down), this comes from the Riemann tensor: it is the thing that describes the "gravitational force field" that moving bodies feel and react to.
(This last statement must be qualified: there is no "force field" per se; moving bodies follow geodesics on the manifold described by the connection. They move in a "straight line".) The Lagrangian for general relativity can also be written in a form that makes it manifestly similar to the Yang–Mills equations. This is called the Einstein–Yang–Mills action principle. This is done by noting that most of differential geometry works "just fine" on bundles with an affine connection and arbitrary Lie group. Then, plugging in SO(3,1) for that symmetry group, i.e. for the frame fields, one obtains the equations above. Substituting this Lagrangian into the Euler–Lagrange equation and taking the metric tensor g μ ν {\displaystyle g_{\mu \nu }} as the field, we obtain the Einstein field equations R μ ν − 1 2 R g μ ν + g μ ν Λ = 8 π G c 4 T μ ν . {\displaystyle R_{\mu \nu }-{\frac {1}{2}}Rg_{\mu \nu }+g_{\mu \nu }\Lambda ={\frac {8\pi G}{c^{4}}}T_{\mu \nu }\,.} T μ ν {\displaystyle T_{\mu \nu }} is the energy–momentum tensor and is defined by T μ ν ≡ − 2 − g δ ( L m a t t e r − g ) δ g μ ν = − 2 δ L m a t t e r δ g μ ν + g μ ν L m a t t e r . {\displaystyle T_{\mu \nu }\equiv {\frac {-2}{\sqrt {-g}}}{\frac {\delta ({\mathcal {L}}_{\mathrm {matter} }{\sqrt {-g}})}{\delta g^{\mu \nu }}}=-2{\frac {\delta {\mathcal {L}}_{\mathrm {matter} }}{\delta g^{\mu \nu }}}+g_{\mu \nu }{\mathcal {L}}_{\mathrm {matter} }\,.} where g {\displaystyle g} is the determinant of the metric tensor when regarded as a matrix. In general relativity, the integration measure of the action of the Lagrange density is − g d 4 x {\textstyle {\sqrt {-g}}\,d^{4}x} . This makes the integral coordinate-independent, as the square root of the metric determinant is equivalent to the Jacobian determinant. The minus sign is a consequence of the metric signature (the determinant by itself is negative). This is an example of the volume form, previously discussed, becoming manifest in non-flat spacetime. === Electromagnetism in general relativity === The Lagrange density of electromagnetism in general relativity also contains the Einstein–Hilbert action from above. The pure electromagnetic Lagrangian is precisely a matter Lagrangian L matter {\displaystyle {\mathcal {L}}_{\text{matter}}} . The Lagrangian is L ( x ) = j μ ( x ) A μ ( x ) − 1 4 μ 0 F μ ν ( x ) F ρ σ ( x ) g μ ρ ( x ) g ν σ ( x ) + c 4 16 π G R ( x ) = L Maxwell + L Einstein–Hilbert . {\displaystyle {\begin{aligned}{\mathcal {L}}(x)&=j^{\mu }(x)A_{\mu }(x)-{1 \over 4\mu _{0}}F_{\mu \nu }(x)F_{\rho \sigma }(x)g^{\mu \rho }(x)g^{\nu \sigma }(x)+{\frac {c^{4}}{16\pi G}}R(x)\\&={\mathcal {L}}_{\text{Maxwell}}+{\mathcal {L}}_{\text{Einstein–Hilbert}}.\end{aligned}}} This Lagrangian is obtained by simply replacing the Minkowski metric in the above flat Lagrangian with a more general (possibly curved) metric g μ ν ( x ) {\displaystyle g_{\mu \nu }(x)} . We can generate the Einstein Field Equations in the presence of an EM field using this lagrangian. The energy-momentum tensor is T μ ν ( x ) = 2 − g ( x ) δ δ g μ ν ( x ) S Maxwell = 1 μ 0 ( F λ μ ( x ) F ν λ ( x ) − 1 4 g μ ν ( x ) F ρ σ ( x ) F ρ σ ( x ) ) {\displaystyle T^{\mu \nu }(x)={\frac {2}{\sqrt {-g(x)}}}{\frac {\delta }{\delta g_{\mu \nu }(x)}}{\mathcal {S}}_{\text{Maxwell}}={\frac {1}{\mu _{0}}}\left(F_{{\text{ }}\lambda }^{\mu }(x)F^{\nu \lambda }(x)-{\frac {1}{4}}g^{\mu \nu }(x)F_{\rho \sigma }(x)F^{\rho \sigma }(x)\right)} It can be shown that this energy–momentum tensor is traceless, i.e.
that T = g μ ν T μ ν = 0 {\displaystyle T=g_{\mu \nu }T^{\mu \nu }=0} If we take the trace of both sides of the Einstein Field Equations, we obtain R = − 8 π G c 4 T {\displaystyle R=-{\frac {8\pi G}{c^{4}}}T} So the tracelessness of the energy–momentum tensor implies that the curvature scalar in an electromagnetic field vanishes. The Einstein equations are then R μ ν = 8 π G c 4 1 μ 0 ( F μ λ ( x ) F ν λ ( x ) − 1 4 g μ ν ( x ) F ρ σ ( x ) F ρ σ ( x ) ) {\displaystyle R^{\mu \nu }={\frac {8\pi G}{c^{4}}}{\frac {1}{\mu _{0}}}\left({F^{\mu }}_{\lambda }(x)F^{\nu \lambda }(x)-{\frac {1}{4}}g^{\mu \nu }(x)F_{\rho \sigma }(x)F^{\rho \sigma }(x)\right)} Additionally, Maxwell's equations are D μ F μ ν = − μ 0 j ν {\displaystyle D_{\mu }F^{\mu \nu }=-\mu _{0}j^{\nu }} where D μ {\displaystyle D_{\mu }} is the covariant derivative. For free space, we can set the current four-vector equal to zero, j μ = 0 {\displaystyle j^{\mu }=0} . Solving both Einstein and Maxwell's equations around a spherically symmetric charged mass distribution in free space leads to the Reissner–Nordström charged black hole, with the defining line element (written in natural units and with charge Q): d s 2 = ( 1 − 2 M r + Q 2 r 2 ) d t 2 − ( 1 − 2 M r + Q 2 r 2 ) − 1 d r 2 − r 2 d Ω 2 {\displaystyle \mathrm {d} s^{2}=\left(1-{\frac {2M}{r}}+{\frac {Q^{2}}{r^{2}}}\right)\mathrm {d} t^{2}-\left(1-{\frac {2M}{r}}+{\frac {Q^{2}}{r^{2}}}\right)^{-1}\mathrm {d} r^{2}-r^{2}\mathrm {d} \Omega ^{2}} One possible way of unifying the electromagnetic and gravitational Lagrangians (by using a fifth dimension) is given by Kaluza–Klein theory. Effectively, one constructs an affine bundle, just as for the Yang–Mills equations given earlier, and then considers the action separately on the 4-dimensional and the 1-dimensional parts. Such constructions, such as the fact that the 7-sphere can be written as a 3-sphere bundle over the 4-sphere, or that an 11-dimensional spacetime can be factored into a 4-dimensional spacetime and the 7-sphere, accounted for much of the early excitement that a theory of everything had been found. Unfortunately, the 7-sphere proved not large enough to enclose all of the Standard Model, dashing these hopes. === Additional examples === The BF model Lagrangian, short for "Background Field", describes a system with trivial dynamics, when written on a flat spacetime manifold. On a topologically non-trivial spacetime, the system will have non-trivial classical solutions, which may be interpreted as solitons or instantons. A variety of extensions exist, forming the foundations for topological field theories.
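As a closing numerical aside for the electromagnetism-in-general-relativity subsection above, the sketch below (an added illustration, not from the article) checks the tracelessness of the electromagnetic stress tensor in flat spacetime, using a randomly generated antisymmetric field tensor and the Minkowski metric to raise indices.

```python
# Check that the electromagnetic stress tensor is traceless (added
# illustration): g_{mu nu} T^{mu nu} = 0 for any antisymmetric F_{mu nu}.
import numpy as np

eta = np.diag([1.0, -1.0, -1.0, -1.0])      # metric, signature (+,-,-,-)
rng = np.random.default_rng(0)
F = rng.normal(size=(4, 4))
F = F - F.T                                  # antisymmetric field tensor F_{mu nu}

F_mixed = eta @ F                            # F^mu_lambda (first index raised)
F_up = eta @ F @ eta                         # F^{nu lambda} (both indices raised)
invariant = np.sum(F * F_up)                 # F_{rho sigma} F^{rho sigma}

T = F_mixed @ F_up.T - 0.25 * eta * invariant   # T^{mu nu}, with mu_0 = 1
print(np.isclose(np.sum(eta * T), 0.0))         # trace g_{mu nu} T^{mu nu}: True
```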
Wikipedia/Lagrangian_field_theory
An operator is a function that maps one space of physical states onto another space of states. The simplest example of the utility of operators is the study of symmetry (which makes the concept of a group useful in this context). Because of this, they are useful tools in classical mechanics. Operators are even more important in quantum mechanics, where they form an intrinsic part of the formulation of the theory. They play a central role in describing observables (measurable quantities like energy, momentum, etc.). == Operators in classical mechanics == In classical mechanics, the movement of a particle (or system of particles) is completely determined by the Lagrangian L ( q , q ˙ , t ) {\displaystyle L(q,{\dot {q}},t)} or equivalently the Hamiltonian H ( q , p , t ) {\displaystyle H(q,p,t)} , a function of the generalized coordinates q, generalized velocities q ˙ = d q / d t {\displaystyle {\dot {q}}=\mathrm {d} q/\mathrm {d} t} and its conjugate momenta: p = ∂ L ∂ q ˙ {\displaystyle p={\frac {\partial L}{\partial {\dot {q}}}}} If either L or H is independent of a generalized coordinate q, so that L and H do not change when q is changed, the dynamics of the particle are unchanged under changes of q, and the momenta conjugate to those coordinates are conserved (this is part of Noether's theorem; the invariance of the motion with respect to the coordinate q is a symmetry). Operators in classical mechanics are related to these symmetries. More technically, H may be invariant under the action of a certain group of transformations G: S ∈ G , H ( S ( q , p ) ) = H ( q , p ) {\displaystyle S\in G,H(S(q,p))=H(q,p)} . The elements of G are physical operators, which map physical states among themselves. === Table of classical mechanics operators === Common examples are: translational symmetry, T a : r → r + a, p → p; time-translation symmetry, U(t0): r(t) → r(t + t0), p(t) → p(t + t0); rotational invariance, R(n̂, θ): r → R(n̂, θ)r, p → R(n̂, θ)p; Galilean transformations, G(v): r → r + vt, p → p + mv; parity, P: r → −r, p → −p; and time reversal, T: r(t) → r(−t), p(t) → −p(−t); where R ( n ^ , θ ) {\displaystyle R({\hat {\boldsymbol {n}}},\theta )} is the rotation matrix about an axis defined by the unit vector n ^ {\displaystyle {\hat {\boldsymbol {n}}}} and angle θ. == Generators == If the transformation is infinitesimal, the operator action should be of the form I + ϵ A , {\displaystyle I+\epsilon A,} where I {\displaystyle I} is the identity operator, ϵ {\displaystyle \epsilon } is a parameter with a small value, and A {\displaystyle A} will depend on the transformation at hand, and is called a generator of the group. Again, as a simple example, we will derive the generator of the space translations on 1D functions. As stated above, T a f ( x ) = f ( x − a ) {\displaystyle T_{a}f(x)=f(x-a)} . If a = ϵ {\displaystyle a=\epsilon } is infinitesimal, then we may write T ϵ f ( x ) = f ( x − ϵ ) ≈ f ( x ) − ϵ f ′ ( x ) . {\displaystyle T_{\epsilon }f(x)=f(x-\epsilon )\approx f(x)-\epsilon f'(x).} This formula may be rewritten as T ϵ f ( x ) = ( I − ϵ D ) f ( x ) {\displaystyle T_{\epsilon }f(x)=(I-\epsilon D)f(x)} where D {\displaystyle D} is the generator of the translation group, which in this case happens to be the derivative operator. Thus, it is said that the generator of translations is the derivative. == The exponential map == The whole group may be recovered, under normal circumstances, from the generators, via the exponential map. In the case of the translations the idea works like this.
The translation for a finite value of a {\displaystyle a} may be obtained by repeated application of the infinitesimal translation: T a f ( x ) = lim N → ∞ T a / N ⋯ T a / N f ( x ) {\displaystyle T_{a}f(x)=\lim _{N\to \infty }T_{a/N}\cdots T_{a/N}f(x)} with the ⋯ {\displaystyle \cdots } standing for the application N {\displaystyle N} times. If N {\displaystyle N} is large, each of the factors may be considered to be infinitesimal: T a f ( x ) = lim N → ∞ ( I − a N D ) N f ( x ) . {\displaystyle T_{a}f(x)=\lim _{N\to \infty }\left(I-{\frac {a}{N}}D\right)^{N}f(x).} But this limit may be rewritten as an exponential: T a f ( x ) = exp ⁡ ( − a D ) f ( x ) . {\displaystyle T_{a}f(x)=\exp(-aD)f(x).} To be convinced of the validity of this formal expression, we may expand the exponential in a power series: T a f ( x ) = ( I − a D + a 2 D 2 2 ! − a 3 D 3 3 ! + ⋯ ) f ( x ) . {\displaystyle T_{a}f(x)=\left(I-aD+{a^{2}D^{2} \over 2!}-{a^{3}D^{3} \over 3!}+\cdots \right)f(x).} The right-hand side may be rewritten as f ( x ) − a f ′ ( x ) + a 2 2 ! f ″ ( x ) − a 3 3 ! f ( 3 ) ( x ) + ⋯ {\displaystyle f(x)-af'(x)+{\frac {a^{2}}{2!}}f''(x)-{\frac {a^{3}}{3!}}f^{(3)}(x)+\cdots } which is just the Taylor expansion of f ( x − a ) {\displaystyle f(x-a)} , which was our original value for T a f ( x ) {\displaystyle T_{a}f(x)} . The mathematical properties of physical operators are a topic of great importance in their own right. For further information, see C*-algebra and Gelfand–Naimark theorem. == Operators in quantum mechanics == The mathematical formulation of quantum mechanics (QM) is built upon the concept of an operator. Physical pure states in quantum mechanics are represented as unit-norm vectors (probabilities are normalized to one) in a special complex Hilbert space. Time evolution in this vector space is given by the application of the evolution operator. Any observable, i.e., any quantity which can be measured in a physical experiment, should be associated with a self-adjoint linear operator. The operators must yield real eigenvalues, since they are values which may come up as the result of the experiment. Mathematically, this means the operators must be Hermitian. The probability of each eigenvalue is related to the projection of the physical state on the subspace related to that eigenvalue. See below for mathematical details about Hermitian operators. In the wave mechanics formulation of QM, the wavefunction varies with space and time, or equivalently momentum and time (see position and momentum space for details), so observables are differential operators. In the matrix mechanics formulation, the norm of the physical state should stay fixed, so the evolution operator should be unitary, and the operators can be represented as matrices. Any other symmetry, mapping a physical state into another, should keep this restriction.
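Before moving on, the translation-by-exponential identity derived above can be checked concretely. The sketch below (an added illustration) applies the truncated exponential of the derivative generator to a cubic polynomial, for which the Taylor series terminates, so exp(−aD)f must reproduce the finite translation f(x − a) exactly.

```python
# Numerical check of T_a f(x) = exp(-a D) f(x) = f(x - a) (added illustration).
import numpy as np
from math import factorial

a = 1.5
f = np.polynomial.Polynomial([2.0, -1.0, 0.5, 3.0])  # f(x) = 2 - x + x^2/2 + 3x^3

# exp(-a D) f = sum_k (-a)^k f^(k) / k!, terminating at k = 3 for a cubic
translated = f + sum((-a)**k / factorial(k) * f.deriv(k) for k in range(1, 4))

x = np.linspace(-2.0, 2.0, 9)
print(np.allclose(translated(x), f(x - a)))          # True
```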
=== Wavefunction === The wavefunction must be square-integrable (see Lp spaces), meaning: ∭ R 3 | ψ ( r ) | 2 d 3 r = ∭ R 3 ψ ( r ) ∗ ψ ( r ) d 3 r < ∞ {\displaystyle \iiint _{\mathbb {R} ^{3}}|\psi (\mathbf {r} )|^{2}\,d^{3}\mathbf {r} =\iiint _{\mathbb {R} ^{3}}\psi (\mathbf {r} )^{*}\psi (\mathbf {r} )\,d^{3}\mathbf {r} <\infty } and normalizable, so that: ∭ R 3 | ψ ( r ) | 2 d 3 r = 1 {\displaystyle \iiint _{\mathbb {R} ^{3}}|\psi (\mathbf {r} )|^{2}\,d^{3}\mathbf {r} =1} Two cases of eigenstates (and eigenvalues) are: for discrete eigenstates | ϕ i ⟩ {\displaystyle |\phi _{i}\rangle } forming a discrete basis, so any state is a sum | ψ ⟩ = ∑ i c i | ϕ i ⟩ {\displaystyle |\psi \rangle =\sum _{i}c_{i}|\phi _{i}\rangle } where ci are complex numbers such that |ci|2 = ci*ci is the probability of measuring the state | ϕ i ⟩ {\displaystyle |\phi _{i}\rangle } , and the corresponding set of eigenvalues ai is also discrete: either finite or countably infinite. In this case, the inner product of two eigenstates is given by ⟨ ϕ i | ϕ j ⟩ = δ i j {\displaystyle \langle \phi _{i}\vert \phi _{j}\rangle =\delta _{ij}} , where δ m n {\displaystyle \delta _{mn}} denotes the Kronecker delta. However, for a continuum of eigenstates forming a continuous basis, any state is an integral | ψ ⟩ = ∫ c ( ϕ ) d ϕ | ϕ ⟩ {\displaystyle |\psi \rangle =\int c(\phi )\,d\phi |\phi \rangle } where c(φ) is a complex function such that |c(φ)|2 = c(φ)*c(φ) is the probability of measuring the state | ϕ ⟩ {\displaystyle |\phi \rangle } , and there is an uncountably infinite set of eigenvalues a. In this case, the inner product of two eigenstates is defined as ⟨ ϕ ′ | ϕ ⟩ = δ ( ϕ − ϕ ′ ) {\displaystyle \langle \phi '\vert \phi \rangle =\delta (\phi -\phi ')} , where here δ ( x − y ) {\displaystyle \delta (x-y)} denotes the Dirac delta. === Linear operators in wave mechanics === Let ψ be the wavefunction for a quantum system, and A ^ {\displaystyle {\hat {A}}} be any linear operator for some observable A (such as position, momentum, energy, angular momentum, etc.). If ψ is an eigenfunction of the operator A ^ {\displaystyle {\hat {A}}} , then A ^ ψ = a ψ , {\displaystyle {\hat {A}}\psi =a\psi ,} where a is the eigenvalue of the operator, corresponding to the measured value of the observable, i.e. observable A has a measured value a. If ψ is an eigenfunction of a given operator A ^ {\displaystyle {\hat {A}}} , then a definite quantity (the eigenvalue a) will be observed if a measurement of the observable A is made on the state ψ. Conversely, if ψ is not an eigenfunction of A ^ {\displaystyle {\hat {A}}} , then it has no eigenvalue for A ^ {\displaystyle {\hat {A}}} , and the observable does not have a single definite value in that case. Instead, measurements of the observable A will yield each eigenvalue with a certain probability (related to the decomposition of ψ relative to the orthonormal eigenbasis of A ^ {\displaystyle {\hat {A}}} ).
In bra–ket notation the above can be written: A ^ ψ = A ^ ψ ( r ) = A ^ ⟨ r ∣ ψ ⟩ = ⟨ r | A ^ | ψ ⟩ a ψ = a ψ ( r ) = a ⟨ r ∣ ψ ⟩ = ⟨ r ∣ a ∣ ψ ⟩ {\displaystyle {\begin{aligned}{\hat {A}}\psi &={\hat {A}}\psi (\mathbf {r} )={\hat {A}}\left\langle \mathbf {r} \mid \psi \right\rangle =\left\langle \mathbf {r} \left\vert {\hat {A}}\right\vert \psi \right\rangle \\a\psi &=a\psi (\mathbf {r} )=a\left\langle \mathbf {r} \mid \psi \right\rangle =\left\langle \mathbf {r} \mid a\mid \psi \right\rangle \\\end{aligned}}} that are equal if | ψ ⟩ {\displaystyle \left|\psi \right\rangle } is an eigenvector, or eigenket of the observable A. Due to linearity, vector operators can be defined in any number of dimensions, with each component of the vector acting on the function separately. One mathematical example is the del operator, which is itself a vector (useful in momentum-related quantum operators, in the table below). An operator in n-dimensional space can be written: A ^ = ∑ j = 1 n e j A ^ j {\displaystyle \mathbf {\hat {A}} =\sum _{j=1}^{n}\mathbf {e} _{j}{\hat {A}}_{j}} where ej are basis vectors corresponding to each component operator Aj. Each component will yield a corresponding eigenvalue a j {\displaystyle a_{j}} . Acting with this on the wave function ψ: A ^ ψ = ( ∑ j = 1 n e j A ^ j ) ψ = ∑ j = 1 n ( e j A ^ j ψ ) = ∑ j = 1 n ( e j a j ψ ) {\displaystyle \mathbf {\hat {A}} \psi =\left(\sum _{j=1}^{n}\mathbf {e} _{j}{\hat {A}}_{j}\right)\psi =\sum _{j=1}^{n}\left(\mathbf {e} _{j}{\hat {A}}_{j}\psi \right)=\sum _{j=1}^{n}\left(\mathbf {e} _{j}a_{j}\psi \right)} in which we have used A ^ j ψ = a j ψ . {\displaystyle {\hat {A}}_{j}\psi =a_{j}\psi .} In bra–ket notation: A ^ ψ = A ^ ψ ( r ) = A ^ ⟨ r ∣ ψ ⟩ = ⟨ r | A ^ | ψ ⟩ ( ∑ j = 1 n e j A ^ j ) ψ = ( ∑ j = 1 n e j A ^ j ) ψ ( r ) = ( ∑ j = 1 n e j A ^ j ) ⟨ r ∣ ψ ⟩ = ⟨ r | ∑ j = 1 n e j A ^ j | ψ ⟩ {\displaystyle {\begin{aligned}\mathbf {\hat {A}} \psi =\mathbf {\hat {A}} \psi (\mathbf {r} )=\mathbf {\hat {A}} \left\langle \mathbf {r} \mid \psi \right\rangle &=\left\langle \mathbf {r} \left\vert \mathbf {\hat {A}} \right\vert \psi \right\rangle \\\left(\sum _{j=1}^{n}\mathbf {e} _{j}{\hat {A}}_{j}\right)\psi =\left(\sum _{j=1}^{n}\mathbf {e} _{j}{\hat {A}}_{j}\right)\psi (\mathbf {r} )=\left(\sum _{j=1}^{n}\mathbf {e} _{j}{\hat {A}}_{j}\right)\left\langle \mathbf {r} \mid \psi \right\rangle &=\left\langle \mathbf {r} \left\vert \sum _{j=1}^{n}\mathbf {e} _{j}{\hat {A}}_{j}\right\vert \psi \right\rangle \end{aligned}}} === Commutation of operators on Ψ === If two observables A and B have linear operators A ^ {\displaystyle {\hat {A}}} and B ^ {\displaystyle {\hat {B}}} , the commutator is defined by, [ A ^ , B ^ ] = A ^ B ^ − B ^ A ^ {\displaystyle \left[{\hat {A}},{\hat {B}}\right]={\hat {A}}{\hat {B}}-{\hat {B}}{\hat {A}}} The commutator is itself a (composite) operator. Acting the commutator on ψ gives: [ A ^ , B ^ ] ψ = A ^ B ^ ψ − B ^ A ^ ψ . {\displaystyle \left[{\hat {A}},{\hat {B}}\right]\psi ={\hat {A}}{\hat {B}}\psi -{\hat {B}}{\hat {A}}\psi .} If ψ is an eigenfunction with eigenvalues a and b for observables A and B respectively, and if the operators commute: [ A ^ , B ^ ] ψ = 0 , {\displaystyle \left[{\hat {A}},{\hat {B}}\right]\psi =0,} then the observables A and B can be measured simultaneously with infinite precision, i.e., uncertainties Δ A = 0 {\displaystyle \Delta A=0} , Δ B = 0 {\displaystyle \Delta B=0} simultaneously. ψ is then said to be the simultaneous eigenfunction of A and B.
To illustrate this: [ A ^ , B ^ ] ψ = A ^ B ^ ψ − B ^ A ^ ψ = a ( b ψ ) − b ( a ψ ) = 0. {\displaystyle {\begin{aligned}\left[{\hat {A}},{\hat {B}}\right]\psi &={\hat {A}}{\hat {B}}\psi -{\hat {B}}{\hat {A}}\psi \\&=a(b\psi )-b(a\psi )\\&=0.\\\end{aligned}}} It shows that measurement of A and B does not cause any shift of state, i.e., initial and final states are the same (no disturbance due to measurement). Suppose we measure A to get value a. We then measure B to get the value b. We measure A again. We still get the same value a. Clearly the state (ψ) of the system is not destroyed and so we are able to measure A and B simultaneously with infinite precision. If the operators do not commute: [ A ^ , B ^ ] ψ ≠ 0 , {\displaystyle \left[{\hat {A}},{\hat {B}}\right]\psi \neq 0,} they cannot be prepared simultaneously to arbitrary precision, and there is an uncertainty relation between the observables Δ A Δ B ≥ | 1 2 ⟨ [ A , B ] ⟩ | {\displaystyle \Delta A\Delta B\geq \left|{\frac {1}{2}}\langle [A,B]\rangle \right|} even if ψ is an eigenfunction, the above relation holds. Notable pairs are position-and-momentum and energy-and-time uncertainty relations, and the angular momenta (spin, orbital and total) about any two orthogonal axes (such as Lx and Ly, or sy and sz, etc.). === Expectation values of operators on Ψ === The expectation value (equivalently the average or mean value) is the average measurement of an observable, for a particle in region R. The expectation value ⟨ A ^ ⟩ {\displaystyle \left\langle {\hat {A}}\right\rangle } of the operator A ^ {\displaystyle {\hat {A}}} is calculated from: ⟨ A ^ ⟩ = ∫ R ψ ∗ ( r ) A ^ ψ ( r ) d 3 r = ⟨ ψ | A ^ | ψ ⟩ . {\displaystyle \left\langle {\hat {A}}\right\rangle =\int _{R}\psi ^{*}\left(\mathbf {r} \right){\hat {A}}\psi \left(\mathbf {r} \right)\mathrm {d} ^{3}\mathbf {r} =\left\langle \psi \left|{\hat {A}}\right|\psi \right\rangle .} This can be generalized to any function F of an operator: ⟨ F ( A ^ ) ⟩ = ∫ R ψ ( r ) ∗ [ F ( A ^ ) ψ ( r ) ] d 3 r = ⟨ ψ | F ( A ^ ) | ψ ⟩ , {\displaystyle \left\langle F\left({\hat {A}}\right)\right\rangle =\int _{R}\psi (\mathbf {r} )^{*}\left[F\left({\hat {A}}\right)\psi (\mathbf {r} )\right]\mathrm {d} ^{3}\mathbf {r} =\left\langle \psi \left|F\left({\hat {A}}\right)\right|\psi \right\rangle ,} An example of F is the 2-fold action of A on ψ, i.e. squaring an operator or doing it twice: F ( A ^ ) = A ^ 2 ⇒ ⟨ A ^ 2 ⟩ = ∫ R ψ ∗ ( r ) A ^ 2 ψ ( r ) d 3 r = ⟨ ψ | A ^ 2 | ψ ⟩ {\displaystyle {\begin{aligned}F\left({\hat {A}}\right)&={\hat {A}}^{2}\\\Rightarrow \left\langle {\hat {A}}^{2}\right\rangle &=\int _{R}\psi ^{*}\left(\mathbf {r} \right){\hat {A}}^{2}\psi \left(\mathbf {r} \right)\mathrm {d} ^{3}\mathbf {r} =\left\langle \psi \left\vert {\hat {A}}^{2}\right\vert \psi \right\rangle \\\end{aligned}}\,\!} === Hermitian operators === The definition of a Hermitian operator is: A ^ = A ^ † {\displaystyle {\hat {A}}={\hat {A}}^{\dagger }} Following from this, in bra–ket notation: ⟨ ϕ i | A ^ | ϕ j ⟩ = ⟨ ϕ j | A ^ | ϕ i ⟩ ∗ . {\displaystyle \left\langle \phi _{i}\left|{\hat {A}}\right|\phi _{j}\right\rangle =\left\langle \phi _{j}\left|{\hat {A}}\right|\phi _{i}\right\rangle ^{*}.} Important properties of Hermitian operators include: real eigenvalues; eigenvectors with different eigenvalues that are orthogonal; and eigenvectors that can be chosen to form a complete orthonormal basis. === Operators in matrix mechanics === An operator can be written in matrix form to map one basis vector to another.
Since the operators are linear, the matrix is a linear transformation (aka transition matrix) between bases. Each basis element ϕ j {\displaystyle \phi _{j}} can be connected to another by the expression: A i j = ⟨ ϕ i | A ^ | ϕ j ⟩ , {\displaystyle A_{ij}=\left\langle \phi _{i}\left|{\hat {A}}\right|\phi _{j}\right\rangle ,} which is a matrix element: A ^ = ( A 11 A 12 ⋯ A 1 n A 21 A 22 ⋯ A 2 n ⋮ ⋮ ⋱ ⋮ A n 1 A n 2 ⋯ A n n ) {\displaystyle {\hat {A}}={\begin{pmatrix}A_{11}&A_{12}&\cdots &A_{1n}\\A_{21}&A_{22}&\cdots &A_{2n}\\\vdots &\vdots &\ddots &\vdots \\A_{n1}&A_{n2}&\cdots &A_{nn}\\\end{pmatrix}}} A further property of a Hermitian operator is that eigenfunctions corresponding to different eigenvalues are orthogonal. In matrix form, operators allow real eigenvalues to be found, corresponding to measurements. Orthogonality allows a suitable basis set of vectors to represent the state of the quantum system. The eigenvalues of the operator are also evaluated in the same way as for the square matrix, by solving the characteristic polynomial: det ( A ^ − a I ^ ) = 0 , {\displaystyle \det \left({\hat {A}}-a{\hat {I}}\right)=0,} where I is the n × n identity matrix; as an operator it corresponds to the identity operator. For a discrete basis: I ^ = ∑ i | ϕ i ⟩ ⟨ ϕ i | {\displaystyle {\hat {I}}=\sum _{i}|\phi _{i}\rangle \langle \phi _{i}|} while for a continuous basis: I ^ = ∫ | ϕ ⟩ ⟨ ϕ | d ϕ {\displaystyle {\hat {I}}=\int |\phi \rangle \langle \phi |\mathrm {d} \phi } === Inverse of an operator === A non-singular operator A ^ {\displaystyle {\hat {A}}} has an inverse A ^ − 1 {\displaystyle {\hat {A}}^{-1}} defined by: A ^ A ^ − 1 = A ^ − 1 A ^ = I ^ {\displaystyle {\hat {A}}{\hat {A}}^{-1}={\hat {A}}^{-1}{\hat {A}}={\hat {I}}} If an operator has no inverse, it is a singular operator. In a finite-dimensional space, an operator is non-singular if and only if its determinant is nonzero: det ( A ^ ) ≠ 0 {\displaystyle \det \left({\hat {A}}\right)\neq 0} and hence the determinant is zero for a singular operator. === Table of Quantum Mechanics operators === The operators used in quantum mechanics are collected in the table below. The bold-face vectors with circumflexes are not unit vectors; they are 3-vector operators, all three spatial components taken together. === Examples of applying quantum operators === The procedure for extracting information from a wave function is as follows. Consider the momentum p of a particle as an example. The momentum operator in position basis in one dimension is: p ^ = − i ℏ ∂ ∂ x {\displaystyle {\hat {p}}=-i\hbar {\frac {\partial }{\partial x}}} Letting this act on ψ we obtain: p ^ ψ = − i ℏ ∂ ∂ x ψ , {\displaystyle {\hat {p}}\psi =-i\hbar {\frac {\partial }{\partial x}}\psi ,} if ψ is an eigenfunction of p ^ {\displaystyle {\hat {p}}} , then the momentum eigenvalue p is the value of the particle's momentum, found by: − i ℏ ∂ ∂ x ψ = p ψ . {\displaystyle -i\hbar {\frac {\partial }{\partial x}}\psi =p\psi .} For three dimensions the momentum operator uses the nabla operator to become: p ^ = − i ℏ ∇ .
{\displaystyle \mathbf {\hat {p}} =-i\hbar \nabla .} In Cartesian coordinates (using the standard Cartesian basis vectors ex, ey, ez) this can be written: e x p ^ x + e y p ^ y + e z p ^ z = − i ℏ ( e x ∂ ∂ x + e y ∂ ∂ y + e z ∂ ∂ z ) , {\displaystyle \mathbf {e} _{\mathrm {x} }{\hat {p}}_{x}+\mathbf {e} _{\mathrm {y} }{\hat {p}}_{y}+\mathbf {e} _{\mathrm {z} }{\hat {p}}_{z}=-i\hbar \left(\mathbf {e} _{\mathrm {x} }{\frac {\partial }{\partial x}}+\mathbf {e} _{\mathrm {y} }{\frac {\partial }{\partial y}}+\mathbf {e} _{\mathrm {z} }{\frac {\partial }{\partial z}}\right),} that is: p ^ x = − i ℏ ∂ ∂ x , p ^ y = − i ℏ ∂ ∂ y , p ^ z = − i ℏ ∂ ∂ z {\displaystyle {\hat {p}}_{x}=-i\hbar {\frac {\partial }{\partial x}},\quad {\hat {p}}_{y}=-i\hbar {\frac {\partial }{\partial y}},\quad {\hat {p}}_{z}=-i\hbar {\frac {\partial }{\partial z}}\,\!} The process of finding eigenvalues is the same. Since this is a vector and operator equation, if ψ is an eigenfunction, then each component of the momentum operator will have an eigenvalue corresponding to that component of momentum. Acting with p ^ {\displaystyle \mathbf {\hat {p}} } on ψ gives: p ^ x ψ = − i ℏ ∂ ∂ x ψ = p x ψ p ^ y ψ = − i ℏ ∂ ∂ y ψ = p y ψ p ^ z ψ = − i ℏ ∂ ∂ z ψ = p z ψ {\displaystyle {\begin{aligned}{\hat {p}}_{x}\psi &=-i\hbar {\frac {\partial }{\partial x}}\psi =p_{x}\psi \\{\hat {p}}_{y}\psi &=-i\hbar {\frac {\partial }{\partial y}}\psi =p_{y}\psi \\{\hat {p}}_{z}\psi &=-i\hbar {\frac {\partial }{\partial z}}\psi =p_{z}\psi \\\end{aligned}}\,\!}
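To tie the differential-operator and matrix-mechanics views together, here is a small numpy sketch (an added illustration; the grid size and the harmonic potential are choices made here, not taken from the article). It represents the Hamiltonian Ĥ = p̂²/2m + x̂²/2 on a position grid as a Hermitian matrix and diagonalizes it; with ħ = m = ω = 1 the lowest eigenvalues should approach n + 1/2.

```python
# Matrix representation of a Hermitian operator on a position grid
# (added illustration): harmonic oscillator with hbar = m = omega = 1.
import numpy as np

N, L = 1000, 20.0
x = np.linspace(-L / 2, L / 2, N)
dx = x[1] - x[0]

# -(1/2) d^2/dx^2 by central finite differences, plus the potential x^2/2
diag = np.full(N, 1.0 / dx**2) + 0.5 * x**2
off = np.full(N - 1, -0.5 / dx**2)
H = np.diag(diag) + np.diag(off, 1) + np.diag(off, -1)

evals = np.linalg.eigvalsh(H)    # Hermitian matrix, so eigenvalues are real
print(evals[:4])                 # approximately [0.5, 1.5, 2.5, 3.5]
```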
Wikipedia/Operators_(physics)
In theoretical physics, Hamiltonian field theory is the field-theoretic analogue to classical Hamiltonian mechanics. It is a formalism in classical field theory alongside Lagrangian field theory. It also has applications in quantum field theory. == Definition == The Hamiltonian for a system of discrete particles is a function of their generalized coordinates and conjugate momenta, and possibly, time. For continua and fields, Hamiltonian mechanics is unsuitable but can be extended by considering a large number of point masses, and taking the continuous limit, that is, infinitely many particles forming a continuum or field. Since each point mass has one or more degrees of freedom, the field formulation has infinitely many degrees of freedom. === One scalar field === The Hamiltonian density is the continuous analogue for fields; it is a function of the fields, the conjugate "momentum" fields, and possibly the space and time coordinates themselves. For one scalar field φ(x, t), the Hamiltonian density is defined from the Lagrangian density by H ( ϕ , π , x , t ) = ϕ ˙ π − L ( ϕ , ∇ ϕ , ∂ ϕ / ∂ t , x , t ) . {\displaystyle {\mathcal {H}}(\phi ,\pi ,\mathbf {x} ,t)={\dot {\phi }}\pi -{\mathcal {L}}(\phi ,\nabla \phi ,\partial \phi /\partial t,\mathbf {x} ,t)\,.} where ∇ is the "del" or "nabla" operator, x is the position vector of a point in space, and t is time. The Lagrangian density is a function of the fields in the system, their space and time derivatives, and possibly the space and time coordinates themselves. It is the field analogue to the Lagrangian function for a system of discrete particles described by generalized coordinates. As in Hamiltonian mechanics where every generalized coordinate has a corresponding generalized momentum, the field φ(x, t) has a conjugate momentum field π(x, t), defined as the partial derivative of the Lagrangian density with respect to the time derivative of the field, π = ∂ L ∂ ϕ ˙ , ϕ ˙ ≡ ∂ ϕ ∂ t , {\displaystyle \pi ={\frac {\partial {\mathcal {L}}}{\partial {\dot {\phi }}}}\,,\quad {\dot {\phi }}\equiv {\frac {\partial \phi }{\partial t}}\,,} in which the overdot denotes a partial time derivative ∂/∂t, not a total time derivative d/dt. === Many scalar fields === For many fields φi(x, t) and their conjugates πi(x, t) the Hamiltonian density is a function of them all: H ( ϕ 1 , ϕ 2 , … , π 1 , π 2 , … , x , t ) = ∑ i ϕ i ˙ π i − L ( ϕ 1 , ϕ 2 , … ∇ ϕ 1 , ∇ ϕ 2 , … , ∂ ϕ 1 / ∂ t , ∂ ϕ 2 / ∂ t , … , x , t ) . {\displaystyle {\mathcal {H}}(\phi _{1},\phi _{2},\ldots ,\pi _{1},\pi _{2},\ldots ,\mathbf {x} ,t)=\sum _{i}{\dot {\phi _{i}}}\pi _{i}-{\mathcal {L}}(\phi _{1},\phi _{2},\ldots \nabla \phi _{1},\nabla \phi _{2},\ldots ,\partial \phi _{1}/\partial t,\partial \phi _{2}/\partial t,\ldots ,\mathbf {x} ,t)\,.} where each conjugate field is defined with respect to its field, π i ( x , t ) = ∂ L ∂ ϕ ˙ i . {\displaystyle \pi _{i}(\mathbf {x} ,t)={\frac {\partial {\mathcal {L}}}{\partial {\dot {\phi }}_{i}}}\,.} In general, for any number of fields, the volume integral of the Hamiltonian density gives the Hamiltonian, in three spatial dimensions: H = ∫ H d 3 x . {\displaystyle H=\int {\mathcal {H}}\ d^{3}x\,.} The Hamiltonian density is the Hamiltonian per unit spatial volume. The corresponding dimension is [energy][length]−3, in SI units joules per cubic metre, J·m−3. === Tensor and spinor fields === The above equations and definitions can be extended to vector fields and more generally tensor fields and spinor fields.
In physics, tensor fields describe bosons and spinor fields describe fermions. == Equations of motion == The equations of motion for the fields are similar to the Hamiltonian equations for discrete particles. For any number of fields, they read ϕ ˙ i = + δ H δ π i , π ˙ i = − δ H δ ϕ i , {\displaystyle {\dot {\phi }}_{i}=+{\frac {\delta H}{\delta \pi _{i}}}\,,\quad {\dot {\pi }}_{i}=-{\frac {\delta H}{\delta \phi _{i}}}\,,} where again the overdots are partial time derivatives, and the variational derivative with respect to the fields δ H δ ϕ i = ∂ H ∂ ϕ i − ∇ ⋅ ∂ H ∂ ( ∇ ϕ i ) , {\displaystyle {\frac {\delta H}{\delta \phi _{i}}}={\frac {\partial {\mathcal {H}}}{\partial \phi _{i}}}-\nabla \cdot {\frac {\partial {\mathcal {H}}}{\partial (\nabla \phi _{i})}}\,,} with · the dot product, must be used instead of simple partial derivatives. == Phase space == The fields φi and conjugates πi form an infinite dimensional phase space, because fields have an infinite number of degrees of freedom. == Poisson bracket == For two functions which depend on the fields φi and πi, their spatial derivatives, and the space and time coordinates, A = ∫ d 3 x A ( ϕ 1 , ϕ 2 , … , π 1 , π 2 , … , ∇ ϕ 1 , ∇ ϕ 2 , … , ∇ π 1 , ∇ π 2 , … , x , t ) , {\displaystyle A=\int d^{3}x{\mathcal {A}}\left(\phi _{1},\phi _{2},\ldots ,\pi _{1},\pi _{2},\ldots ,\nabla \phi _{1},\nabla \phi _{2},\ldots ,\nabla \pi _{1},\nabla \pi _{2},\ldots ,\mathbf {x} ,t\right)\,,} B = ∫ d 3 x B ( ϕ 1 , ϕ 2 , … , π 1 , π 2 , … , ∇ ϕ 1 , ∇ ϕ 2 , … , ∇ π 1 , ∇ π 2 , … , x , t ) , {\displaystyle B=\int d^{3}x{\mathcal {B}}\left(\phi _{1},\phi _{2},\ldots ,\pi _{1},\pi _{2},\ldots ,\nabla \phi _{1},\nabla \phi _{2},\ldots ,\nabla \pi _{1},\nabla \pi _{2},\ldots ,\mathbf {x} ,t\right)\,,} and the fields are zero on the boundary of the volume the integrals are taken over, the field theoretic Poisson bracket (not to be confused with the anticommutator from quantum mechanics) is defined as { A , B } ϕ , π = ∫ d 3 x ∑ i ( δ A δ ϕ i δ B δ π i − δ B δ ϕ i δ A δ π i ) , {\displaystyle \{A,B\}_{\phi ,\pi }=\int d^{3}x\sum _{i}\left({\frac {\delta {\mathcal {A}}}{\delta \phi _{i}}}{\frac {\delta {\mathcal {B}}}{\delta \pi _{i}}}-{\frac {\delta {\mathcal {B}}}{\delta \phi _{i}}}{\frac {\delta {\mathcal {A}}}{\delta \pi _{i}}}\right)\,,} where δ F / δ f {\displaystyle \delta {\mathcal {F}}/\delta f} is the variational derivative δ F δ f = ∂ F ∂ f − ∑ i ∇ i ∂ F ∂ ( ∇ i f ) . {\displaystyle {\frac {\delta {\mathcal {F}}}{\delta f}}={\frac {\partial {\mathcal {F}}}{\partial f}}-\sum _{i}\nabla _{i}{\frac {\partial {\mathcal {F}}}{\partial (\nabla _{i}f)}}\,.} Under the same conditions of vanishing fields on the surface, the following result holds for the time evolution of A (similarly for B): d A d t = { A , H } + ∂ A ∂ t {\displaystyle {\frac {dA}{dt}}=\{A,H\}+{\frac {\partial A}{\partial t}}} which can be found from the total time derivative of A, integration by parts, and using the above Poisson bracket. == Explicit time-independence == The following results are true if the Lagrangian and Hamiltonian densities are explicitly time-independent (they can still have implicit time-dependence via the fields and their derivatives): === Kinetic and potential energy densities === The Hamiltonian density is the total energy density, the sum of the kinetic energy density ( T {\displaystyle {\mathcal {T}}} ) and the potential energy density ( V {\displaystyle {\mathcal {V}}} ), H = T + V .
{\displaystyle {\mathcal {H}}={\mathcal {T}}+{\mathcal {V}}\,.} === Continuity equation === Taking the partial time derivative of the definition of the Hamiltonian density above, and using the chain rule for implicit differentiation and the definition of the conjugate momentum field, gives the continuity equation: ∂ H ∂ t + ∇ ⋅ S = 0 {\displaystyle {\frac {\partial {\mathcal {H}}}{\partial t}}+\nabla \cdot \mathbf {S} =0} in which the Hamiltonian density can be interpreted as the energy density, and S = ∂ L ∂ ( ∇ ϕ ) ∂ ϕ ∂ t {\displaystyle \mathbf {S} ={\frac {\partial {\mathcal {L}}}{\partial (\nabla \phi )}}{\frac {\partial \phi }{\partial t}}} the energy flux, or flow of energy per unit time per unit surface area. == Relativistic field theory == Covariant Hamiltonian field theory is the relativistic formulation of Hamiltonian field theory. Hamiltonian field theory usually means the symplectic Hamiltonian formalism when applied to classical field theory, which takes the form of the instantaneous Hamiltonian formalism on an infinite-dimensional phase space, where canonical coordinates are field functions at some instant of time. This Hamiltonian formalism is applied to quantization of fields, e.g., in quantum gauge theory. In covariant Hamiltonian field theory, canonical momenta pμi correspond to derivatives of fields with respect to all world coordinates xμ. Covariant Hamilton equations are equivalent to the Euler–Lagrange equations in the case of hyperregular Lagrangians. Covariant Hamiltonian field theory is developed in the Hamilton–De Donder, polysymplectic, multisymplectic and k-symplectic variants. A phase space of covariant Hamiltonian field theory is a finite-dimensional polysymplectic or multisymplectic manifold. Hamiltonian non-autonomous mechanics is formulated as covariant Hamiltonian field theory on fiber bundles over the time axis, i.e. the real line R {\displaystyle \mathbb {R} } . == See also == Analytical mechanics; De Donder–Weyl theory; Four-vector; Canonical quantization; Hamiltonian fluid mechanics; Covariant classical field theory; Polysymplectic manifold; Non-autonomous mechanics.
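The equations of motion above can also be integrated directly. The following numpy sketch (an added illustration; the grid, mass, and initial data are arbitrary choices) evolves a free scalar field in one spatial dimension via Hamilton's field equations, φ̇ = π and π̇ = ∇²φ − m²φ, with a symplectic leapfrog step, and monitors the conserved Hamiltonian H = ∫ℋ dx.

```python
# Hamilton's field equations for a free 1D scalar field (added illustration).
import numpy as np

N, L, m, dt = 512, 2 * np.pi, 1.0, 1e-3
x = np.linspace(0.0, L, N, endpoint=False)
dx = x[1] - x[0]

phi = np.exp(-(x - L / 2)**2)     # initial field: a Gaussian bump
pi = np.zeros(N)                  # conjugate momentum field, initially zero

def laplacian(f):
    return (np.roll(f, -1) - 2 * f + np.roll(f, 1)) / dx**2

def hamiltonian(phi, pi):
    grad = (np.roll(phi, -1) - phi) / dx
    density = 0.5 * pi**2 + 0.5 * grad**2 + 0.5 * m**2 * phi**2
    return np.sum(density) * dx   # H = integral of the Hamiltonian density

H0 = hamiltonian(phi, pi)
for _ in range(5000):             # leapfrog: kick - drift - kick
    pi += 0.5 * dt * (laplacian(phi) - m**2 * phi)
    phi += dt * pi
    pi += 0.5 * dt * (laplacian(phi) - m**2 * phi)

print(abs(hamiltonian(phi, pi) - H0) / H0)   # small relative drift (~dt^2)
```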
Wikipedia/Hamiltonian_field_theory
In classical mechanics, the Udwadia–Kalaba formulation is a method for deriving the equations of motion of a constrained mechanical system. The method was first described by Anatolii Fedorovich Vereshchagin for the particular case of robotic arms, and later generalized to all mechanical systems by Firdaus E. Udwadia and Robert E. Kalaba in 1992. The approach is based on Gauss's principle of least constraint. The Udwadia–Kalaba method applies to both holonomic constraints and nonholonomic constraints, as long as they are linear with respect to the accelerations. The method generalizes to constraint forces that do not obey D'Alembert's principle. == Background == The Udwadia–Kalaba equation was developed in 1992 and describes the motion of a constrained mechanical system that is subjected to equality constraints. This differs from the Lagrangian formalism, which uses the Lagrange multipliers to describe the motion of constrained mechanical systems, and other similar approaches such as the Gibbs–Appell approach. The physical interpretation of the equation has applications in areas beyond theoretical physics, such as the control of highly nonlinear general dynamical systems. == The central problem of constrained motion == In the study of the dynamics of mechanical systems, the configuration of a given system S is, in general, completely described by n generalized coordinates so that its generalized coordinate n-vector is given by q := [ q 1 , q 2 , … , q n ] T . {\displaystyle \mathbf {q} :=[q_{1},q_{2},\ldots ,q_{n}]^{\mathrm {T} }.} where T denotes matrix transpose. Using Newtonian or Lagrangian dynamics, the unconstrained equations of motion of the system S under study can be derived as a matrix equation (see matrix multiplication): M ( q , t ) q ¨ = Q ( q , q ˙ , t ) , {\displaystyle \mathbf {M} (q,t){\ddot {\mathbf {q} }}=\mathbf {Q} (q,{\dot {q}},t),} where the dots represent derivatives with respect to time: q ˙ i = d q i d t . {\displaystyle {\dot {q}}_{i}={\frac {dq_{i}}{dt}}\,.} It is assumed that the initial conditions q(0) and q ˙ ( 0 ) {\displaystyle {\dot {\mathbf {q} }}(0)} are known. We call the system S unconstrained because q ˙ ( 0 ) {\displaystyle {\dot {\mathbf {q} }}(0)} may be arbitrarily assigned. The n-vector Q denotes the total generalized force acting on the system by some external influence; it can be expressed as the sum of all the conservative forces as well as non-conservative forces. The n-by-n matrix M is symmetric, and it can be positive definite ( M > 0 ) {\displaystyle (\mathbf {M} >0)} or semi-positive definite ( M ≥ 0 ) {\displaystyle (\mathbf {M} \geq 0)} . Typically, it is assumed that M is positive definite; however, it is not uncommon to derive the unconstrained equations of motion of the system S such that M is only semi-positive definite; i.e., the mass matrix may be singular (it has no inverse matrix). === Constraints === We now assume that the unconstrained system S is subjected to a set of m consistent equality constraints given by A ( q , q ˙ , t ) q ¨ = b ( q , q ˙ , t ) , {\displaystyle \mathbf {A} (q,{\dot {q}},t){\ddot {\mathbf {q} }}=\mathbf {b} (q,{\dot {q}},t),} where A is a known m-by-n matrix of rank r and b is a known m-vector. We note that this set of constraint equations encompasses a very general variety of holonomic and non-holonomic equality constraints.
For example, holonomic constraints of the form φ ( q , t ) = 0 {\displaystyle \varphi (q,t)=0} can be differentiated twice with respect to time while non-holonomic constraints of the form ψ ( q , q ˙ , t ) = 0 {\displaystyle \psi (q,{\dot {q}},t)=0} can be differentiated once with respect to time to obtain the m-by-n matrix A and the m-vector b. In short, constraints may be specified that are nonlinear functions of displacement and velocity, explicitly dependent on time, and functionally dependent. As a consequence of subjecting these constraints to the unconstrained system S, an additional force is conceptualized to arise, namely, the force of constraint. Therefore, the constrained system Sc becomes M ( q , t ) q ¨ = Q ( q , q ˙ , t ) + Q c ( q , q ˙ , t ) , {\displaystyle \mathbf {M} (q,t){\ddot {\mathbf {q} }}=\mathbf {Q} (q,{\dot {q}},t)+\mathbf {Q} _{c}(q,{\dot {q}},t),} where Qc—the constraint force—is the additional force needed to satisfy the imposed constraints. The central problem of constrained motion is now stated as follows: given the unconstrained equations of motion of the system S, given the generalized displacement q(t) and the generalized velocity q ˙ ( t ) {\displaystyle {\dot {q}}(t)} of the constrained system Sc at time t, and given the constraints in the form A q ¨ = b {\displaystyle \mathbf {A} {\ddot {q}}=\mathbf {b} } as stated above, find the equations of motion for the constrained system—the acceleration—at time t, which is in accordance with the agreed-upon principles of analytical dynamics. == Notation == Below, for positive definite ⁠ M {\displaystyle \mathbf {M} } ⁠, ⁠ M − 1 / 2 {\displaystyle \mathbf {M} ^{-1/2}} ⁠ denotes the inverse of its square root, defined as M − 1 / 2 = W Λ − 1 / 2 W T {\displaystyle \mathbf {M} ^{-1/2}=\mathbf {W} \mathbf {\Lambda } ^{-1/2}\mathbf {W} ^{T}} , where ⁠ W {\displaystyle \mathbf {W} } ⁠ is the orthogonal matrix arising from eigendecomposition (whose rows consist of suitably selected eigenvectors of ⁠ M {\displaystyle \mathbf {M} } ⁠), and ⁠ Λ − 1 / 2 {\displaystyle \mathbf {\Lambda } ^{-1/2}} ⁠ is the diagonal matrix whose diagonal elements are the inverse square roots of the eigenvalues corresponding to the eigenvectors in ⁠ W {\displaystyle \mathbf {W} } ⁠. == Equation of motion == The solution to this central problem is given by the Udwadia–Kalaba equation. When the matrix M is positive definite, the equation of motion of the constrained system Sc, at each instant of time, is M q ¨ = Q + M 1 / 2 ( A M − 1 / 2 ) + ( b − A M − 1 Q ) , {\displaystyle \mathbf {M} {\ddot {\mathbf {q} }}=\mathbf {Q} +\mathbf {M} ^{1/2}\left(\mathbf {A} \mathbf {M} ^{-1/2}\right)^{+}(\mathbf {b} -\mathbf {A} \mathbf {M} ^{-1}\mathbf {Q} ),} where the '+' symbol denotes the pseudoinverse of the matrix A M − 1 / 2 {\displaystyle \mathbf {A} \mathbf {M} ^{-1/2}} . The force of constraint is thus given explicitly as Q c = M 1 / 2 ( A M − 1 / 2 ) + ( b − A M − 1 Q ) , {\displaystyle \mathbf {Q} _{c}=\mathbf {M} ^{1/2}\left(\mathbf {A} \mathbf {M} ^{-1/2}\right)^{+}(\mathbf {b} -\mathbf {A} \mathbf {M} ^{-1}\mathbf {Q} ),} and since the matrix M is positive definite the generalized acceleration of the constrained system Sc is determined explicitly by q ¨ = M − 1 Q + M − 1 / 2 ( A M − 1 / 2 ) + ( b − A M − 1 Q ) . {\displaystyle {\ddot {\mathbf {q} }}=\mathbf {M} ^{-1}\mathbf {Q} +\mathbf {M} ^{-1/2}\left(\mathbf {A} \mathbf {M} ^{-1/2}\right)^{+}(\mathbf {b} -\mathbf {A} \mathbf {M} ^{-1}\mathbf {Q} ).} In the case that the matrix M is semi-positive definite ( M ≥ 0 ) {\displaystyle (\mathbf {M} \geq 0)} , the above equation cannot be used directly because M may be singular.
Furthermore, the generalized accelerations may not be unique unless the (n + m)-by-n matrix M ^ = [ M A ] {\displaystyle {\hat {\mathbf {M} }}=\left[{\begin{array}{c}\mathbf {M} \\\mathbf {A} \end{array}}\right]} has full rank (rank = n). But since the observed accelerations of mechanical systems in nature are always unique, this rank condition is a necessary and sufficient condition for obtaining the uniquely defined generalized accelerations of the constrained system Sc at each instant of time. Thus, when M ^ {\displaystyle {\hat {\mathbf {M} }}} has full rank, the equations of motion of the constrained system Sc at each instant of time are uniquely determined by (1) creating the auxiliary unconstrained system M A q ¨ := ( M + A + A ) q ¨ = Q + A + b := Q b , {\displaystyle \mathbf {M} _{\mathbf {A} }{\ddot {\mathbf {q} }}:=(\mathbf {M} +\mathbf {A} ^{+}\mathbf {A} ){\ddot {\mathbf {q} }}=\mathbf {Q} +\mathbf {A} ^{+}\mathbf {b} :=\mathbf {Q} _{\mathbf {b} },} and by (2) applying the fundamental equation of constrained motion to this auxiliary unconstrained system so that the auxiliary constrained equations of motion are explicitly given by M A q ¨ = Q b + M A 1 / 2 ( A M A − 1 / 2 ) + ( b − A M A − 1 Q b ) . {\displaystyle \mathbf {M} _{\mathbf {A} }{\ddot {\mathbf {q} }}=\mathbf {Q} _{\mathbf {b} }+\mathbf {M} _{\mathbf {A} }^{1/2}(\mathbf {A} \mathbf {M} _{\mathbf {A} }^{-1/2})^{+}(\mathbf {b} -\mathbf {A} \mathbf {M} _{\mathbf {A} }^{-1}\mathbf {Q} _{\mathbf {b} }).} Moreover, when the matrix M ^ {\displaystyle {\hat {\mathbf {M} }}} has full rank, the matrix M A {\displaystyle \mathbf {M} _{\mathbf {A} }} is always positive definite. This yields, explicitly, the generalized accelerations of the constrained system Sc as q ¨ = M A − 1 Q b + M A − 1 / 2 ( A M A − 1 / 2 ) + ( b − A M A − 1 Q b ) . {\displaystyle {\ddot {\mathbf {q} }}=\mathbf {M} _{\mathbf {A} }^{-1}\mathbf {Q} _{\mathbf {b} }+\mathbf {M} _{\mathbf {A} }^{-1/2}(\mathbf {A} \mathbf {M} _{\mathbf {A} }^{-1/2})^{+}(\mathbf {b} -\mathbf {A} \mathbf {M} _{\mathbf {A} }^{-1}\mathbf {Q} _{\mathbf {b} }).} This equation is valid when the matrix M is either positive definite or positive semi-definite. Additionally, the force of constraint that causes the constrained system Sc—a system that may have a singular mass matrix M—to satisfy the imposed constraints is explicitly given by Q c = M A 1 / 2 ( A M A − 1 / 2 ) + ( b − A M A − 1 Q b ) . {\displaystyle \mathbf {Q} _{c}=\mathbf {M} _{\mathbf {A} }^{1/2}(\mathbf {A} \mathbf {M} _{\mathbf {A} }^{-1/2})^{+}(\mathbf {b} -\mathbf {A} \mathbf {M} _{\mathbf {A} }^{-1}\mathbf {Q} _{\mathbf {b} }).} == Non-ideal constraints == At any time during the motion we may consider perturbing the system by a virtual displacement δr consistent with the constraints of the system. The displacement is allowed to be either reversible or irreversible. If the displacement is irreversible, then it performs virtual work. We may write the virtual work of the displacement as W c ( t ) = C T ( q , q ˙ , t ) δ r ( t ) {\displaystyle W_{c}(t)=\mathbf {C} ^{\mathrm {T} }(q,{\dot {q}},t)\delta \mathbf {r} (t)} The vector C ( q , q ˙ , t ) {\displaystyle \mathbf {C} (q,{\dot {q}},t)} describes the non-ideality of the virtual work and may be related, for example, to friction or drag forces (such forces have velocity dependence). This is a generalized D'Alembert's principle, where the usual form of the principle has vanishing virtual work with C ( q , q ˙ , t ) = 0 {\displaystyle \mathbf {C} (q,{\dot {q}},t)=0} . 
The Udwadia–Kalaba equation is modified by an additional non-ideal constraint term to M q ¨ = Q + M 1 / 2 ( A M − 1 / 2 ) + ( b − A M − 1 Q ) + M 1 / 2 [ I − ( A M − 1 / 2 ) + A M − 1 / 2 ] M − 1 / 2 C {\displaystyle \mathbf {M} {\ddot {\mathbf {q} }}=\mathbf {Q} +\mathbf {M} ^{1/2}\left(\mathbf {A} \mathbf {M} ^{-1/2}\right)^{+}(\mathbf {b} -\mathbf {A} \mathbf {M} ^{-1}\mathbf {Q} )+\mathbf {M} ^{1/2}\left[\mathbf {I} -\left(\mathbf {A} \mathbf {M} ^{-1/2}\right)^{+}\mathbf {A} \mathbf {M} ^{-1/2}\right]\mathbf {M} ^{-1/2}\mathbf {C} } == Examples == === Inverse Kepler problem === The method can solve the inverse Kepler problem of determining the force law that corresponds to orbits that are conic sections. We take there to be no external forces (not even gravity) and instead constrain the particle motion to follow orbits of the form r = ε x + ℓ {\displaystyle r=\varepsilon x+\ell } , where r = x 2 + y 2 {\displaystyle r={\sqrt {x^{2}+y^{2}}}} , ε {\displaystyle \varepsilon } is the eccentricity, and ℓ {\textstyle \ell } is the semi-latus rectum. Differentiating twice with respect to time and rearranging slightly gives the constraint ( x − r ε ) x ¨ + y y ¨ = − ( x y ˙ − y x ˙ ) 2 r 2 {\displaystyle (x-r\varepsilon ){\ddot {x}}+y{\ddot {y}}=-{\frac {(x{\dot {y}}-y{\dot {x}})^{2}}{r^{2}}}} We assume the body has constant mass. We also assume that angular momentum about the focus is conserved, as m ( x y ˙ − y x ˙ ) = L {\displaystyle m(x{\dot {y}}-y{\dot {x}})=L} with time derivative x y ¨ − y x ¨ = 0 {\displaystyle x{\ddot {y}}-y{\ddot {x}}=0} We can combine these two constraints into the matrix equation ( x − r ε y y − x ) ( x ¨ y ¨ ) = ( − L 2 m 2 r 2 0 ) {\displaystyle {\begin{pmatrix}x-r\varepsilon &y\\y&-x\end{pmatrix}}{\begin{pmatrix}{\ddot {x}}\\{\ddot {y}}\end{pmatrix}}={\begin{pmatrix}-{\frac {L^{2}}{m^{2}r^{2}}}\\0\end{pmatrix}}} The constraint matrix has inverse ( x − r ε y y − x ) − 1 = 1 ℓ r ( x y y − ( x − r ε ) ) {\displaystyle {\begin{pmatrix}x-r\varepsilon &y\\y&-x\end{pmatrix}}^{-1}={\frac {1}{\ell r}}{\begin{pmatrix}x&y\\y&-(x-r\varepsilon )\end{pmatrix}}} The force of constraint is therefore the expected central inverse-square law F c = m A − 1 b = m ℓ r ( x y y − ( x − r ε ) ) ( − L 2 m 2 r 2 0 ) = − L 2 m ℓ r 2 ( cos ⁡ θ sin ⁡ θ ) {\displaystyle \mathbf {F} _{c}=m\mathbf {A} ^{-1}\mathbf {b} ={\frac {m}{\ell r}}{\begin{pmatrix}x&y\\y&-(x-r\varepsilon )\end{pmatrix}}{\begin{pmatrix}-{\frac {L^{2}}{m^{2}r^{2}}}\\0\end{pmatrix}}=-{\frac {L^{2}}{m\ell r^{2}}}{\begin{pmatrix}\cos \theta \\\sin \theta \end{pmatrix}}} === Inclined plane with friction === Consider a small block of constant mass on a plane inclined at an angle α {\displaystyle \alpha } above the horizontal. The constraint that the block lie on the plane can be written as y = x tan ⁡ α {\displaystyle y=x\tan \alpha } After taking two time derivatives, we can put this into the standard constraint matrix equation form ( − tan ⁡ α 1 ) ( x ¨ y ¨ ) = 0 {\displaystyle {\begin{pmatrix}-\tan \alpha &1\end{pmatrix}}{\begin{pmatrix}{\ddot {x}}\\{\ddot {y}}\end{pmatrix}}=0} The constraint matrix has pseudoinverse ( − tan ⁡ α 1 ) + = cos 2 ⁡ α ( − tan ⁡ α 1 ) {\displaystyle {\begin{pmatrix}-\tan \alpha &1\end{pmatrix}}^{+}=\cos ^{2}\alpha {\begin{pmatrix}-\tan \alpha \\1\end{pmatrix}}} We allow for sliding friction between the block and the inclined plane.
We parameterize this force by a standard coefficient of friction multiplied by the normal force, C = − μ m g cos ⁡ α sgn ⁡ y ˙ ( cos ⁡ α sin ⁡ α ) {\displaystyle \mathbf {C} =-\mu mg\cos \alpha \operatorname {sgn} {\dot {y}}{\begin{pmatrix}\cos \alpha \\\sin \alpha \end{pmatrix}}} Whereas the force of gravity is reversible, the force of friction is not. Therefore, the virtual work associated with a virtual displacement will depend on C. We may summarize the three forces (external, ideal constraint, and non-ideal constraint) as follows: F ext = Q = − m g ( 0 1 ) {\displaystyle \mathbf {F} _{\text{ext}}=\mathbf {Q} =-mg{\begin{pmatrix}0\\1\end{pmatrix}}} F c , i = − A + A Q = m g cos 2 ⁡ α ( − tan ⁡ α 1 ) ( − tan ⁡ α 1 ) ( 0 1 ) = m g ( − sin ⁡ α cos ⁡ α cos 2 ⁡ α ) {\displaystyle \mathbf {F} _{c,i}=-\mathbf {A} ^{+}\mathbf {A} \mathbf {Q} =mg\cos ^{2}\alpha {\begin{pmatrix}-\tan \alpha \\1\end{pmatrix}}{\begin{pmatrix}-\tan \alpha &1\end{pmatrix}}{\begin{pmatrix}0\\1\end{pmatrix}}=mg{\begin{pmatrix}-\sin \alpha \cos \alpha \\\cos ^{2}\alpha \end{pmatrix}}} F c , n i = ( I − A + A ) C = − μ m g cos ⁡ α sgn ⁡ y ˙ [ ( 1 0 0 1 ) − cos 2 ⁡ α ( − tan ⁡ α 1 ) ( − tan ⁡ α 1 ) ] ( cos ⁡ α sin ⁡ α ) = − μ m g cos ⁡ α sgn ⁡ y ˙ ( cos ⁡ α sin ⁡ α ) {\displaystyle \mathbf {F} _{c,ni}=(\mathbf {I} -\mathbf {A} ^{+}\mathbf {A} )\mathbf {C} =-\mu mg\cos \alpha \operatorname {sgn} {\dot {y}}\left[{\begin{pmatrix}1&0\\0&1\end{pmatrix}}-\cos ^{2}\alpha {\begin{pmatrix}-\tan \alpha \\1\end{pmatrix}}{\begin{pmatrix}-\tan \alpha &1\end{pmatrix}}\right]{\begin{pmatrix}\cos \alpha \\\sin \alpha \end{pmatrix}}=-\mu mg\cos \alpha \operatorname {sgn} {\dot {y}}{\begin{pmatrix}\cos \alpha \\\sin \alpha \end{pmatrix}}} (Since the friction force already acts along the direction permitted by the constraint, the projection leaves it unchanged.) Combining the above, we find that the equations of motion are ( x ¨ y ¨ ) = 1 m ( F ext + F c , i + F c , n i ) = − g ( sin ⁡ α + μ cos ⁡ α sgn ⁡ y ˙ ) ( cos ⁡ α sin ⁡ α ) {\displaystyle {\begin{pmatrix}{\ddot {x}}\\{\ddot {y}}\end{pmatrix}}={\frac {1}{m}}\left(\mathbf {F} _{\text{ext}}+\mathbf {F} _{c,i}+\mathbf {F} _{c,ni}\right)=-g\left(\sin \alpha +\mu \cos \alpha \operatorname {sgn} {\dot {y}}\right){\begin{pmatrix}\cos \alpha \\\sin \alpha \end{pmatrix}}} This is a constant acceleration directed along the slope, analogous to free fall under gravity, with a modification from friction. If the block is moving up the inclined plane, friction adds to the magnitude of the down-slope acceleration; if the block is moving down the inclined plane, friction reduces it. == References ==
Wikipedia/Udwadia–Kalaba_equation
Mechanics (from Ancient Greek μηχανική (mēkhanikḗ) 'of machines') is the area of physics concerned with the relationships between force, matter, and motion among physical objects. Forces applied to objects may result in displacements, which are changes of an object's position relative to its environment. Theoretical expositions of this branch of physics have their origins in Ancient Greece, for instance in the writings of Aristotle and Archimedes (see History of classical mechanics and Timeline of classical mechanics). During the early modern period, scientists such as Galileo Galilei, Johannes Kepler, Christiaan Huygens, and Isaac Newton laid the foundation for what is now known as classical mechanics. As a branch of classical physics, mechanics deals with bodies that are either at rest or are moving with velocities significantly less than the speed of light. It can also be defined as the physical science that deals with the motion of and forces on bodies not in the quantum realm. == History == === Antiquity === The ancient Greek philosophers were among the first to propose that abstract principles govern nature. The main theory of mechanics in antiquity was Aristotelian mechanics, though an alternative theory is expounded in the pseudo-Aristotelian Mechanical Problems, often attributed to one of his successors. There is another tradition that goes back to the ancient Greeks in which mathematics is used more extensively to analyze bodies statically or dynamically, an approach that may have been stimulated by prior work of the Pythagorean Archytas. Examples of this tradition include pseudo-Euclid (On the Balance), Archimedes (On the Equilibrium of Planes, On Floating Bodies), Hero (Mechanica), and Pappus (Collection, Book VIII). === Medieval age === In the Middle Ages, Aristotle's theories were criticized and modified by a number of figures, beginning with John Philoponus in the 6th century. A central problem was that of projectile motion, which was discussed by Hipparchus and Philoponus. The Persian Islamic polymath Ibn Sīnā published his theory of motion in The Book of Healing (1020). He said that an impetus is imparted to a projectile by the thrower, and viewed it as persistent, requiring external forces such as air resistance to dissipate it. Ibn Sina made a distinction between 'force' and 'inclination' (called "mayl"), and argued that an object gains mayl when it is moved in opposition to its natural motion. He concluded that continuation of motion is attributable to the inclination transferred to the object, and that the object remains in motion until the mayl is spent. He also claimed that a projectile in a vacuum would not stop unless acted upon, consistent with Newton's first law of motion. On the question of a body subject to a constant (uniform) force, the 12th-century Jewish-Arab scholar Hibat Allah Abu'l-Barakat al-Baghdaadi (born Nathanel, Iraqi, of Baghdad) stated that constant force imparts constant acceleration. According to Shlomo Pines, al-Baghdaadi's theory of motion was "the oldest negation of Aristotle's fundamental dynamic law [namely, that a constant force produces a uniform motion], [and is thus an] anticipation in a vague fashion of the fundamental law of classical mechanics [namely, that a force applied continuously produces acceleration]."
Influenced by earlier writers such as Ibn Sina and al-Baghdaadi, the 14th-century French priest Jean Buridan developed the theory of impetus, which later developed into the modern theories of inertia, velocity, acceleration and momentum. This work and others were developed in 14th-century England by the Oxford Calculators such as Thomas Bradwardine, who studied and formulated various laws regarding falling bodies. The description of uniformly accelerated motion (as of falling bodies) was also worked out by the 14th-century Oxford Calculators. === Early modern age === Two central figures in the early modern age are Galileo Galilei and Isaac Newton. Galileo's final statement of his mechanics, particularly of falling bodies, is his Two New Sciences (1638). Newton's 1687 Philosophiæ Naturalis Principia Mathematica provided a detailed mathematical account of mechanics, using the newly developed mathematics of calculus and providing the basis of Newtonian mechanics. There is some dispute over priority of various ideas: Newton's Principia is certainly the seminal work and has been tremendously influential, and many of the mathematical results therein could not have been stated earlier without the development of the calculus. However, many of the ideas, particularly as they pertain to inertia and falling bodies, had been developed by earlier scholars such as Christiaan Huygens and less well-known medieval predecessors. Precise credit is at times difficult or contentious because scientific language and standards of proof changed; whether medieval statements are equivalent to modern statements and proofs, or instead merely similar to modern statements and hypotheses, is often debatable. === Modern age === Two main modern developments in mechanics are the general relativity of Einstein and quantum mechanics, both developed in the 20th century based in part on earlier 19th-century ideas. The development of modern continuum mechanics, particularly in the areas of elasticity, plasticity, fluid dynamics, electrodynamics, and thermodynamics of deformable media, started in the second half of the 20th century. == Types of mechanical bodies == The often-used term body stands for a wide assortment of objects, including particles, projectiles, spacecraft, stars, parts of machinery, parts of solids, parts of fluids (gases and liquids), etc. Other distinctions between the various sub-disciplines of mechanics concern the nature of the bodies being described. Particles are bodies with little (known) internal structure, treated as mathematical points in classical mechanics. Rigid bodies have size and shape, but retain a simplicity close to that of the particle, adding just a few so-called degrees of freedom, such as orientation in space. Otherwise, bodies may be semi-rigid, i.e. elastic, or non-rigid, i.e. fluid. These subjects have both classical and quantum divisions of study. For instance, the motion of a spacecraft, regarding its orbit and attitude (rotation), is described by the relativistic theory of classical mechanics, while the analogous movements of an atomic nucleus are described by quantum mechanics. == Sub-disciplines == The following are the three main designations consisting of various subjects that are studied in mechanics. Note that there is also the "theory of fields", which constitutes a separate discipline in physics, formally treated as distinct from mechanics, whether classical fields or quantum fields.
But in actual practice, subjects belonging to mechanics and fields are closely interwoven. Thus, for instance, forces that act on particles are frequently derived from fields (electromagnetic or gravitational), and particles generate fields by acting as sources. In fact, in quantum mechanics, particles themselves are fields, as described theoretically by the wave function. === Classical === The following are described as forming classical mechanics:
Newtonian mechanics, the original theory of motion (kinematics) and forces (dynamics)
Analytical mechanics, a reformulation of Newtonian mechanics with an emphasis on system energy rather than on forces; its two main branches are:
Hamiltonian mechanics, a theoretical formalism based on the principle of conservation of energy
Lagrangian mechanics, another theoretical formalism, based on the principle of least action
Classical statistical mechanics, which generalizes ordinary classical mechanics to consider systems in an unknown state; often used to derive thermodynamic properties
Celestial mechanics, the motion of bodies in space: planets, comets, stars, galaxies, etc.
Astrodynamics, spacecraft navigation, etc.
Solid mechanics, elasticity, plasticity, or viscoelasticity exhibited by deformable solids
Fracture mechanics
Acoustics, sound (density variation, propagation) in solids, fluids and gases
Statics, semi-rigid bodies in mechanical equilibrium
Fluid mechanics, the motion of fluids
Soil mechanics, mechanical behavior of soils
Continuum mechanics, mechanics of continua (both solid and fluid)
Hydraulics, mechanical properties of liquids
Fluid statics, liquids in equilibrium
Applied mechanics (also known as engineering mechanics)
Biomechanics, solids, fluids, etc. in biology
Biophysics, physical processes in living organisms
Relativistic or Einsteinian mechanics
=== Quantum === The following are categorized as being part of quantum mechanics:
Schrödinger wave mechanics, used to describe the movements of the wavefunction of a single particle
Matrix mechanics, an alternative formulation that allows considering systems with a finite-dimensional state space
Quantum statistical mechanics, which generalizes ordinary quantum mechanics to consider systems in an unknown state; often used to derive thermodynamic properties
Particle physics, the motion, structure, and behavior of fundamental particles
Nuclear physics, the motion, structure, and reactions of nuclei
Condensed matter physics, quantum gases, solids, liquids, etc.
Historically, classical mechanics had been around for nearly a quarter millennium before quantum mechanics developed. Classical mechanics originated with Isaac Newton's laws of motion in Philosophiæ Naturalis Principia Mathematica, developed over the seventeenth century. Quantum mechanics developed later, in the early twentieth century, precipitated by Planck's postulate (1900) and Albert Einstein's explanation of the photoelectric effect (1905). Both fields are commonly held to constitute the most certain knowledge that exists about physical nature. Classical mechanics has especially often been viewed as a model for other so-called exact sciences. Essential in this respect is the extensive use of mathematics in theories, as well as the decisive role played by experiment in generating and testing them. Quantum mechanics is of broader scope, as it encompasses classical mechanics as a sub-discipline which applies under certain restricted circumstances.
According to the correspondence principle, there is no contradiction or conflict between the two subjects; each simply pertains to specific situations. The correspondence principle states that the behavior of systems described by quantum theories reproduces classical physics in the limit of large quantum numbers, i.e. if quantum mechanics is applied to large systems (e.g. a baseball), the result would be almost the same as if classical mechanics had been applied. Quantum mechanics has superseded classical mechanics at the foundation level and is indispensable for the explanation and prediction of processes at the molecular, atomic, and sub-atomic level. However, for macroscopic processes classical mechanics is able to solve problems which are unmanageably difficult (mainly due to computational limits) in quantum mechanics and hence remains useful and widely used. Modern descriptions of such behavior begin with a careful definition of such quantities as displacement (distance moved), time, velocity, acceleration, mass, and force. Until about 400 years ago, however, motion was explained from a very different point of view. For example, following the ideas of Greek philosopher and scientist Aristotle, scientists reasoned that a cannonball falls down because its natural place is in the Earth; the Sun, the Moon, and the stars travel in circles around the Earth because it is the nature of heavenly objects to travel in perfect circles. Often cited as the father of modern science, Galileo brought together the ideas of other great thinkers of his time and began to calculate motion in terms of distance travelled from some starting position and the time that it took. He showed that the speed of falling objects increases steadily during the time of their fall. This acceleration is the same for heavy objects as for light ones, provided air friction (air resistance) is discounted. The English mathematician and physicist Isaac Newton improved this analysis by defining force and mass and relating these to acceleration. For objects traveling at speeds close to the speed of light, Newton's laws were superseded by Albert Einstein's theory of relativity. For atomic and subatomic particles, Newton's laws were superseded by quantum theory. For everyday phenomena, however, Newton's three laws of motion remain the cornerstone of dynamics, which is the study of what causes motion. === Relativistic === Akin to the distinction between quantum and classical mechanics, Albert Einstein's general and special theories of relativity have expanded the scope of Newton and Galileo's formulation of mechanics. The differences between relativistic and Newtonian mechanics become significant and even dominant as the velocity of a body approaches the speed of light. For instance, in Newtonian mechanics, the kinetic energy of a free particle is E = (1/2)mv^2, whereas in relativistic mechanics, it is E = (γ − 1)mc^2 (where γ is the Lorentz factor; this formula reduces to the Newtonian expression in the low-energy limit). For high-energy processes, quantum mechanics must be adjusted to account for special relativity; this has led to the development of quantum field theory.
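The comparison between the two kinetic-energy expressions is easy to check numerically. The following minimal Python sketch (an illustration only; the function names are ours) evaluates both formulas and shows that they agree at everyday speeds and diverge as v approaches the speed of light.

```python
import math

C = 299_792_458.0  # speed of light in m/s

def ke_newtonian(m, v):
    return 0.5 * m * v ** 2

def ke_relativistic(m, v):
    gamma = 1.0 / math.sqrt(1.0 - (v / C) ** 2)  # Lorentz factor
    return (gamma - 1.0) * m * C ** 2

# For a 1 kg body the two expressions agree at everyday speeds
# and diverge near c:
for v in (300.0, 0.1 * C, 0.9 * C):
    print(f"v = {v:.3e} m/s  Newtonian: {ke_newtonian(1.0, v):.3e} J"
          f"  relativistic: {ke_relativistic(1.0, v):.3e} J")
```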
== Professional organizations ==
Applied Mechanics Division, American Society of Mechanical Engineers
Fluid Dynamics Division, American Physical Society
Society for Experimental Mechanics
International Union of Theoretical and Applied Mechanics
== See also ==
Action principles
Applied mechanics
Computational mechanics
Dynamics
Engineering
Index of engineering science and mechanics articles
Kinematics
Kinetics
Non-autonomous mechanics
Statics
Wiesen Test of Mechanical Aptitude (WTMA)
== References ==
== Further reading ==
Alrasheed, Salma (2019). Principles of Mechanics. Springer Nature. ISBN 978-3-030-15195-9.
Landau, L. D.; Lifshitz, E. M. (1972). Mechanics and Electrodynamics, Vol. 1. Franklin Book Company, Inc. ISBN 978-0-08-016739-8.
Zerbe, James Slough (1914). Practical Mechanics for Boys.
== External links ==
Physclips: Mechanics with animations and video clips from the University of New South Wales
The Archimedes Project
Wikipedia/Theoretical_mechanics
The Solar Physics Division of the American Astronomical Society (AAS/SPD or AAS-SPD), often referred to as simply the "Solar Physics Division" (SPD), is the primary professional organization of solar physicists in the U.S. It exists for the advancement of the study of the Sun and to coordinate such research with other branches of science. SPD organizes meetings and awards several solar-physics-specific prizes, occasionally advocates for solar physics in the political arena, and promotes outreach via formal and informal educational projects. The SPD awards the George Ellery Hale Prize for outstanding contributions to solar astronomy over an extended period of time, and the Karen Harvey Prize for a significant contribution to the study of the Sun early in a scientist's professional career. The SPD also gives popular writing awards and holds a student poster contest at its meetings. Contestants are judged on readability, flow, quality of appearance, and the proportion of independent work done by the student. Judges also take into account the student's oral presentation, how well the conclusions line up with the stated aim or purpose, and the overall quality of the work. == References == == External links == AAS/SPD official website
Wikipedia/Solar_Physics_Division
Solar phenomena are natural phenomena which occur within the atmosphere of the Sun. They take many forms, including solar wind, radio wave flux, solar flares, coronal mass ejections, coronal heating and sunspots. These phenomena are believed to be generated by a helical dynamo, located near the center of the Sun's mass, which generates strong magnetic fields, as well as a chaotic dynamo, located near the surface, which generates smaller magnetic field fluctuations. All solar fluctuations together are referred to as solar variation, producing space weather within the Sun's gravitational field. Solar activity and related events have been recorded since the eighth century BCE. Throughout history, observation technology and methodology advanced, and in the 20th century, interest in astrophysics surged and many solar telescopes were constructed. The 1931 invention of the coronagraph allowed the corona to be studied in full daylight. == Sun == The Sun is a star located at the center of the Solar System. It is almost perfectly spherical and consists of hot plasma and magnetic fields. It has a diameter of about 1,392,684 kilometres (865,374 mi), around 109 times that of Earth, and its mass (1.989×10^30 kilograms, approximately 330,000 times that of Earth) accounts for some 99.86% of the total mass of the Solar System. Chemically, about three quarters of the Sun's mass consists of hydrogen, while the rest is mostly helium. The remaining 1.69% (equal to 5,600 times the mass of Earth) consists of heavier elements, including oxygen, carbon, neon and iron. The Sun formed about 4.567 billion years ago from the gravitational collapse of a region within a large molecular cloud. Most of the matter gathered in the center, while the remainder flattened into an orbiting disk that became the rest of the Solar System. The central mass became increasingly hot and dense, eventually initiating thermonuclear fusion in its core. The Sun is a G-type main-sequence star (G2V) based on spectral class, and it is informally designated as a yellow dwarf because its visible radiation is most intense in the yellow-green portion of the spectrum. It is actually white, but from the Earth's surface it appears yellow because of atmospheric scattering of blue light. In the spectral class label, G2 indicates its surface temperature of approximately 5770 K [3] (the IAU later adopted a nominal value of 5772 K), and V indicates that the Sun, like most stars, is a main-sequence star, and thus generates its energy via fusing hydrogen into helium. In its core, the Sun fuses about 620 million metric tons of hydrogen each second. The Earth's mean distance from the Sun is approximately 1 astronomical unit (about 150,000,000 km; 93,000,000 mi), though the distance varies as the Earth moves from perihelion in January to aphelion in July. At this average distance, light travels from the Sun to Earth in about 8 minutes, 19 seconds. The energy of this sunlight supports almost all life on Earth by photosynthesis, and drives Earth's climate and weather. As recently as the 19th century, scientists had little knowledge of the Sun's physical composition and source of energy. This understanding is still developing; a number of present-day anomalies in the Sun's behavior remain unexplained. == Solar cycle == Many solar phenomena change periodically over an average interval of about 11 years. This solar cycle affects solar irradiation and influences space weather, terrestrial weather, and climate.
The solar cycle also modulates the flux of short-wavelength solar radiation, from ultraviolet to X-ray, and influences the frequency of solar flares, coronal mass ejections and other solar eruptive phenomena. == Types == === Coronal mass ejections === A coronal mass ejection (CME) is a massive burst of solar wind and magnetic fields rising above the solar corona. Near solar maxima, the Sun produces about three CMEs every day, whereas near solar minima there is about one every five days. CMEs, along with solar flares, can disrupt radio transmissions and damage satellites and electrical transmission line facilities, resulting in potentially massive and long-lasting power outages. Coronal mass ejections often appear with other forms of solar activity, most notably solar flares, but no causal relationship has been established. Most weak flares do not have associated CMEs; most powerful flares do. Most ejections originate from active regions on the Sun's surface, such as sunspot groupings associated with frequent flares. Other forms of solar activity frequently associated with coronal mass ejections are eruptive prominences, coronal dimming, coronal waves and Moreton waves, also called solar tsunamis. Magnetic reconnection is responsible for CMEs and solar flares. Magnetic reconnection is the name given to the rearrangement of magnetic field lines when two oppositely directed magnetic fields are brought together. This rearrangement is accompanied by a sudden release of energy stored in the original oppositely directed fields. When a CME impacts the Earth's magnetosphere, it temporarily deforms the Earth's magnetic field, changing the direction of compass needles and inducing large electrical ground currents in Earth itself; this is called a geomagnetic storm, and it is a global phenomenon. CME impacts can induce magnetic reconnection in Earth's magnetotail (the midnight side of the magnetosphere); this launches protons and electrons downward toward Earth's atmosphere, where they form the aurora. === Flares === A solar flare is a sudden flash of brightness observed over the Sun's surface or the solar limb, which is interpreted as an energy release of up to 6 × 10^25 joules (about a sixth of the Sun's total energy output each second, or 160 billion megatons of TNT equivalent, over 25,000 times more energy than released from the impact of Comet Shoemaker–Levy 9 with Jupiter). It may be followed by a coronal mass ejection. The flare ejects clouds of electrons, ions and atoms through the corona into space. These clouds typically reach Earth a day or two after the event. Similar phenomena in other stars are known as stellar flares. Solar flares strongly influence space weather near the Earth. They can produce streams of highly energetic particles in the solar wind, known as a solar proton event. These particles can impact the Earth's magnetosphere in the form of a geomagnetic storm and present radiation hazards to spacecraft and astronauts. === Solar proton events === A solar proton event (SPE), or "proton storm", occurs when particles (mostly protons) emitted by the Sun become accelerated either close to the Sun during a flare or in interplanetary space by CME shocks. The events can include other nuclei such as helium ions and HZE ions. These particles cause multiple effects. They can penetrate the Earth's magnetic field and cause ionization in the ionosphere. The effect is similar to auroral events, except that protons rather than electrons are involved.
Energetic protons are a significant radiation hazard to spacecraft and astronauts. Energetic protons can reach Earth within 30 minutes of a major flare's peak. === Prominences === A prominence is a large, bright, gaseous feature extending outward from the Sun's surface, often in the shape of a loop. Prominences are anchored to the Sun's surface in the photosphere and extend outwards into the corona. While the corona consists of high-temperature plasma, which does not emit much visible light, prominences contain much cooler plasma, similar in composition to that of the chromosphere. Prominence plasma is typically a hundred times cooler and denser than coronal plasma. A prominence forms over timescales of about a day and may persist for weeks or months. Some prominences break apart and form CMEs. A typical prominence extends over many thousands of kilometers; the largest on record was estimated at over 800,000 kilometres (500,000 mi) long, roughly the solar radius. When a prominence is viewed against the Sun instead of against space, it appears darker than the background. This formation is called a solar filament. It is possible for a single structure to be both a filament and a prominence. Some prominences are so powerful that they eject matter at speeds ranging from 600 km/s to more than 1000 km/s. Other prominences form huge loops or arching columns of glowing gases over sunspots that can reach heights of hundreds of thousands of kilometers. === Sunspots === Sunspots are relatively dark areas on the Sun's radiating 'surface' (photosphere) where intense magnetic activity inhibits convection and cools the photosphere. Faculae are slightly brighter areas that form around sunspot groups as the flow of energy to the photosphere is re-established and both the normal flow and the sunspot-blocked energy elevate the radiating 'surface' temperature. Scientists began speculating on possible relationships between sunspots and solar luminosity in the 17th century. Luminosity decreases caused by sunspots (generally < −0.3%) are correlated with increases (generally < +0.05%) caused both by faculae that are associated with active regions and by the magnetically active 'bright network'. The net effect during periods of enhanced solar magnetic activity is increased radiant solar output because faculae are larger and persist longer than sunspots. Conversely, periods of lower solar magnetic activity and fewer sunspots (such as the Maunder Minimum) may correlate with times of lower irradiance. Sunspot activity has been measured using the Wolf number for about 300 years. This index (also known as the Zürich number) uses both the number of sunspots and the number of sunspot groups to compensate for measurement variations, as sketched below. A 2003 study found that sunspots had been more frequent since the 1940s than in the previous 1150 years. Sunspots usually appear as pairs with opposite magnetic polarity. Detailed observations reveal patterns, in yearly minima and maxima and in relative location. As each cycle proceeds, the latitude of spots declines from 30–45° to around 7° after the solar maximum. This latitudinal change follows Spörer's law. For a sunspot to be visible to the human eye it must be about 50,000 km in diameter, covering 2,000,000,000 square kilometres (770,000,000 sq mi) or 700 millionths of the visible area. Over recent cycles, approximately 100 sunspots or compact sunspot groups are visible from Earth.
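The Wolf number mentioned above is conventionally computed as R = k(10g + s), where g is the number of sunspot groups, s is the number of individual spots, and k is an observer-dependent correction factor. Assuming that standard form (the formula itself is not given in the text above), a minimal sketch is:

```python
def wolf_number(groups, spots, k=1.0):
    """Relative (Wolf/Zurich) sunspot number R = k * (10 * g + s);
    k is an observer/instrument correction factor (illustratively 1.0)."""
    return k * (10 * groups + spots)

# Three groups containing eleven individual spots in total:
print(wolf_number(groups=3, spots=11))  # 41.0
```

Weighting each group as heavily as ten spots is what compensates for measurement variations: a distant or poorly resolved observer may miss small spots, but rarely misses whole groups.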
Sunspots expand and contract as they move about and can travel at a few hundred meters per second when they first appear. === Wind === The solar wind is a stream of plasma released from the Sun's upper atmosphere. It consists mostly of electrons and protons with energies usually between 1.5 and 10 keV. The stream of particles varies in density, temperature and speed over time and over solar longitude. These particles can escape the Sun's gravity because of their high energy. The solar wind is divided into the slow solar wind and the fast solar wind. The slow solar wind has a velocity of about 400 kilometres per second (250 mi/s), a temperature of 2×10^5 K and a composition that is a close match to the corona's. The fast solar wind has a typical velocity of 750 km/s, a temperature of 8×10^5 K and a composition that nearly matches the photosphere's. The slow solar wind is twice as dense and more variable in intensity than the fast solar wind. The slow wind also has a more complex structure, with turbulent regions and large-scale organization. Both the fast and slow solar winds can be interrupted by large, fast-moving bursts of plasma called interplanetary CMEs, or ICMEs. They cause shock waves in the thin plasma of the heliosphere, generating electromagnetic waves and accelerating particles (mostly protons and electrons) to form showers of ionizing radiation that precede the CME. == Effects == === Space weather === Space weather is the environmental condition within the Solar System, including the solar wind. It is studied especially surrounding the Earth, including conditions from the magnetosphere to the ionosphere and thermosphere. Space weather is distinct from the terrestrial weather of the troposphere and stratosphere. The term was not used until the 1990s; prior to that time, such phenomena were considered to be part of physics or aeronomy. ==== Solar storms ==== Solar storms are caused by disturbances on the Sun, most often coronal clouds associated with CMEs and solar flares emanating from active sunspot regions, or less often from coronal holes. The Sun can produce intense geomagnetic and proton storms capable of causing power outages, disruption or blackout of communications (including GPS systems) and temporary or permanent disabling of satellites and other spaceborne technology. Solar storms may be hazardous to high-latitude, high-altitude aviation and to human spaceflight. Geomagnetic storms cause aurorae. The most significant known solar storm occurred in September 1859 and is known as the Carrington event. ==== Aurora ==== An aurora is a natural light display in the sky, especially in the high-latitude (Arctic and Antarctic) regions, in the form of a large circle around the pole. It is caused by the collision of solar wind and charged magnetospheric particles with the high-altitude atmosphere (thermosphere). Most auroras occur in a band known as the auroral zone, which is typically 3° to 6° wide in latitude and observed at 10° to 20° from the geomagnetic poles at all longitudes, but often most vividly around the spring and autumn equinoxes. The charged particles and solar wind are directed into the atmosphere by the Earth's magnetosphere. A geomagnetic storm expands the auroral zone to lower latitudes. Auroras are associated with the solar wind. The Earth's magnetic field traps its particles, many of which travel toward the poles, where they are accelerated toward Earth. Collisions between these ions and the atmosphere release energy in the form of auroras appearing in large circles around the poles.
Auroras are more frequent and brighter during the intense phase of the solar cycle, when CMEs increase the intensity of the solar wind. ==== Geomagnetic storm ==== A geomagnetic storm is a temporary disturbance of the Earth's magnetosphere caused by a solar wind shock wave and/or a cloud of magnetic field that interacts with the Earth's magnetic field. The increase in solar wind pressure compresses the magnetosphere, and the solar wind's magnetic field interacts with the Earth's magnetic field to transfer increased energy into the magnetosphere. Both interactions increase plasma movement through the magnetosphere (driven by increased electric fields) and increase the electric current in the magnetosphere and ionosphere. The disturbance in the interplanetary medium that drives a storm may be due to a CME or a high-speed stream (co-rotating interaction region, or CIR) of the solar wind originating from a region of weak magnetic field on the solar surface. The frequency of geomagnetic storms increases and decreases with the sunspot cycle. CME-driven storms are more common during the solar maximum of the solar cycle, while CIR-driven storms are more common during the solar minimum. Several space weather phenomena are associated with geomagnetic storms. These include solar energetic particle (SEP) events, geomagnetically induced currents (GIC), ionospheric disturbances that cause radio and radar scintillation, disruption of compass navigation and auroral displays at much lower latitudes than normal. A 1989 geomagnetic storm drove geomagnetically induced currents that disrupted electric power distribution throughout most of the province of Quebec and caused aurorae as far south as Texas. ==== Sudden ionospheric disturbance ==== A sudden ionospheric disturbance (SID) is an abnormally high ionization/plasma density in the D region of the ionosphere caused by a solar flare. The SID results in a sudden increase in radio-wave absorption that is most severe in the upper medium frequency (MF) and lower high frequency (HF) ranges, and as a result often interrupts or interferes with telecommunications systems. ==== Geomagnetically induced currents ==== Geomagnetically induced currents are a ground-level manifestation of space weather, which affect the normal operation of long electrical conductor systems. During space weather events, electric currents in the magnetosphere and ionosphere experience large variations, which manifest also in the Earth's magnetic field. These variations induce currents (GIC) in conductors at the Earth's surface. Electric transmission grids and buried pipelines are common examples of such conductor systems. GIC can cause problems such as increased corrosion of pipeline steel and damaged high-voltage power transformers. === Carbon-14 === The production of carbon-14 (radiocarbon: 14C) is related to solar activity. Carbon-14 is produced in the upper atmosphere when neutrons generated by cosmic rays are captured by atmospheric nitrogen (14N), transforming it into an unusual isotope of carbon with a mass number of 14 rather than the more common 12. Because galactic cosmic rays are partially excluded from the Solar System by the outward sweep of magnetic fields in the solar wind, increased solar activity reduces 14C production. Atmospheric 14C concentration is lower during solar maxima and higher during solar minima. By measuring the captured 14C in wood and counting tree rings, production of radiocarbon relative to recent wood can be measured and dated.
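The dating step just described can be illustrated with a hedged back-of-envelope sketch. Assuming pure exponential decay with the Cambridge half-life of 5,730 years (actual radiocarbon dating applies tree-ring calibration curves, precisely because production varies as discussed above; the function name and half-life value here are illustrative), a measured sample-to-modern 14C ratio converts to an age as follows:

```python
import math

HALF_LIFE_YEARS = 5730.0  # Cambridge half-life of carbon-14 (assumed)

def radiocarbon_age(sample_to_modern_ratio):
    """Years elapsed, assuming pure exponential decay of 14C."""
    decay_const = math.log(2) / HALF_LIFE_YEARS
    return -math.log(sample_to_modern_ratio) / decay_const

# Wood retaining half the modern 14C activity dates to one half-life:
print(radiocarbon_age(0.5))   # ~5730 years
print(radiocarbon_age(0.25))  # ~11460 years
```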
A reconstruction of the past 10,000 years shows that 14C production was much higher during the mid-Holocene 7,000 years ago and decreased until 1,000 years ago. In addition to variations in solar activity, long-term trends in carbon-14 production are influenced by changes in the Earth's geomagnetic field and by changes in carbon cycling within the biosphere (particularly those associated with changes in the extent of vegetation between ice ages). == Observation history == Solar activity and related events have been regularly recorded since the time of the Babylonians. Early records described solar eclipses, the corona and sunspots. Soon after the invention of telescopes, in the early 1600s, astronomers began observing the Sun. Thomas Harriot was the first to observe sunspots, in 1610. Observers confirmed the less frequent sunspots and aurorae during the Maunder minimum. One of these observers was the renowned astronomer Johannes Hevelius, who recorded a number of sunspots from 1653 to 1679 in the early Maunder minimum, listed in the book Machina Coelestis (1679). Solar spectrometry began in 1817. Rudolf Wolf gathered sunspot observations going as far back as the 1755–1766 cycle. He established a relative sunspot number formulation (the Wolf or Zürich sunspot number) that became the standard measure. Around 1852, Sabine, Wolf, Gautier and von Lamont independently found a link between the solar cycle and geomagnetic activity. On 2 April 1845, Fizeau and Foucault first photographed the Sun. Photography assisted in the study of solar prominences, granulation, spectroscopy and solar eclipses. On 1 September 1859, Richard C. Carrington and, separately, R. Hodgson first observed a solar flare. Carrington and Gustav Spörer discovered that the Sun exhibits differential rotation, and that the outer layer must be fluid. In 1907–08, George Ellery Hale uncovered the Sun's magnetic cycle and the magnetic nature of sunspots. Hale and his colleagues later deduced Hale's polarity laws, which describe its magnetic field. Bernard Lyot's 1931 invention of the coronagraph allowed the corona to be studied in full daylight. The Sun was, until the 1990s, the only star whose surface had been resolved. Other major achievements included understanding of:
X-ray-emitting loops (e.g., by Yohkoh)
the corona and solar wind (e.g., by SoHO)
variance of solar brightness with level of activity, and verification of this effect in other solar-type stars (e.g., by ACRIM)
the intense fibril state of the magnetic fields at the visible surface of a star like the Sun (e.g., by Hinode)
the presence of magnetic fields of 0.5×10^5 to 1×10^5 gauss at the base of the convection zone, presumably in some fibril form, inferred from the dynamics of rising azimuthal flux bundles
low-level electron neutrino emission from the Sun's core
In the later twentieth century, satellites began observing the Sun, providing many insights. For example, modulation of solar luminosity by magnetically active regions was confirmed by satellite measurements of total solar irradiance (TSI) by the ACRIM1 experiment on the Solar Maximum Mission (launched in 1980). == See also ==
Attribution of recent climate change (section Solar activity)
Climate change (section Solar output)
Outline of astronomy
Radiative levitation
Solar cycle
== Notes == == References == == Further reading ==
Karl, Thomas R.; Melillo, Jerry M.; Peterson, Thomas C. (2009). "Global Climate Change Impacts in the United States" (PDF). Cambridge University Press. Retrieved 30 January 2024.
Willson, Richard C.; Hudson, H.S. (1991). "The Sun's luminosity over a complete solar cycle". Nature. 351 (6321): 42–4. Bibcode:1991Natur.351...42W. doi:10.1038/351042a0. S2CID 4273483.
Foukal, Peter; et al. (1977). "The effects of sunspots and faculae on the solar constant". Astrophysical Journal. 215: 952. Bibcode:1977ApJ...215..952F. doi:10.1086/155431.
Dziembowski, W.A.; Goode, P.R.; Schou, J. (2001). "Does the sun shrink with increasing magnetic activity?". Astrophysical Journal. 553 (2): 897–904. arXiv:astro-ph/0101473. Bibcode:2001ApJ...553..897D. doi:10.1086/320976. S2CID 8177954.
Stetson, H.T. (1937). Sunspots and Their Effects. New York: McGraw Hill. Bibcode:1937sate.book.....S.
Yaskell, Steven Haywood (31 December 2012). Grand Phases On The Sun: The case for a mechanism responsible for extended solar minima and maxima. Trafford Publishing. ISBN 978-1-4669-6300-9.
Hudson, Hugh. "Solar activity". Scholarpedia, 3(3): 3967. doi:10.4249/scholarpedia.3967
== External links ==
NOAA / NESDIS / NGDC (2002) Solar Variability Affecting Earth NOAA CD-ROM NGDC-05/01. This CD-ROM contains over 100 solar-terrestrial and related global data bases covering the period through April 1990.
Recent Total Solar Irradiance data Archived 2013-07-06 at the Wayback Machine, updated every Monday
Latest Space Weather Data – from the Solar Influences Data Analysis Center (Belgium)
Latest images from Big Bear Solar Observatory (California)
The Very Latest SOHO Images – from the ESA/NASA Solar & Heliospheric Observatory
Map of Solar Active Regions – from the Kislovodsk Mountain Astronomical Station
Wikipedia/Solar_phenomena
The Solar Dynamics Observatory (SDO) is a NASA mission which has been observing the Sun since 2010. Launched on 11 February 2010, the observatory is part of the Living With a Star (LWS) program. The goal of the LWS program is to develop the scientific understanding necessary to effectively address those aspects of the connected Sun–Earth system that directly affect life on Earth and its society. The goal of SDO is to understand the influence of the Sun on the Earth and near-Earth space by studying the solar atmosphere on small scales of space and time and in many wavelengths simultaneously. SDO has been investigating how the Sun's magnetic field is generated and structured, and how this stored magnetic energy is converted and released into the heliosphere and geospace in the form of solar wind, energetic particles, and variations in the solar irradiance. == General == The SDO spacecraft was developed at NASA's Goddard Space Flight Center in Greenbelt, Maryland, and launched on 11 February 2010, from Cape Canaveral Air Force Station (CCAFS). The primary mission lasted five years and three months, with expendables expected to last at least ten years. Some consider SDO to be a follow-on mission to the Solar and Heliospheric Observatory (SOHO). SDO is a three-axis stabilized spacecraft, with two solar arrays and two high-gain antennas, in an inclined geosynchronous orbit around Earth. The spacecraft includes three instruments: the Extreme Ultraviolet Variability Experiment (EVE) built in partnership with the University of Colorado Boulder's Laboratory for Atmospheric and Space Physics (LASP), the Helioseismic and Magnetic Imager (HMI) built in partnership with Stanford University, and the Atmospheric Imaging Assembly (AIA) built in partnership with the Lockheed Martin Solar and Astrophysics Laboratory (LMSAL). Data collected by the craft are made available as soon as possible after reception. === Extended mission === As of February 2020, SDO is expected to remain operational until 2030. == Instruments == === Helioseismic and Magnetic Imager (HMI) === The Helioseismic and Magnetic Imager (HMI), led from Stanford University in Stanford, California, studies solar variability and characterizes the Sun's interior and the various components of magnetic activity. HMI takes high-resolution measurements of the longitudinal and vector magnetic field over the entirety of the Sun's visible disk. Specifically, it passes light centered on the solar spectrum's 617.3-nm Fraunhofer line of iron through five filter elements, including a Lyot filter and two Michelson interferometers, to rapidly and frequently create Doppler images and magnetograms. The full-disk coverage and advanced magnetic measurements improve on the capabilities of SOHO's MDI instrument, which measured only the line-of-sight field with limited magnetic data. HMI produces data to determine the interior sources and mechanisms of solar variability and how the physical processes inside the Sun are related to surface magnetic field and activity. It also produces data to enable estimates of the coronal magnetic field for studies of variability in the extended solar atmosphere. HMI observations help establish the relationships between the internal dynamics and magnetic activity in order to understand solar variability and its effects.
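To make the Dopplergram idea concrete: a line-of-sight velocity follows from the measured shift of the 617.3-nm line via the first-order Doppler relation v = c Δλ/λ. The sketch below is an illustration only (the actual HMI pipeline is far more involved, and the function name is hypothetical):

```python
C_KM_S = 299_792.458   # speed of light in km/s
LAMBDA0_NM = 617.3     # rest wavelength of the line used by HMI, in nm

def los_velocity_km_s(observed_nm):
    """First-order Doppler: positive = redshift = motion away from observer."""
    return C_KM_S * (observed_nm - LAMBDA0_NM) / LAMBDA0_NM

# A shift of +0.001 nm corresponds to roughly +0.49 km/s line-of-sight velocity:
print(los_velocity_km_s(617.301))
```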
=== Extreme Ultraviolet Variability Experiment (EVE) === The Extreme Ultraviolet Variability Experiment (EVE) measures the Sun's extreme ultraviolet irradiance with improved spectral resolution, temporal cadence, accuracy, and precision over preceding measurements made by TIMED SEE, SOHO, and SORCE XPS. Some key requirements for EVE are to measure the solar EUV irradiance spectrum with 0.1 nm spectral resolution and with 20-second cadence. These requirements drive the EVE design to include grating spectrographs with array detectors so that all EUV wavelengths can be measured simultaneously. The instrument incorporates physics-based models in order to further scientific understanding of the relationship between solar EUV variations and magnetic variation changes in the Sun. The Sun's output of energetic extreme ultraviolet photons is primarily what heats the Earth's upper atmosphere and creates the ionosphere. Solar EUV radiation output undergoes constant changes, both moment to moment and over the Sun's 11-year solar cycle, and these changes are important to understand because they have a significant impact on atmospheric heating, satellite drag, and communications system degradation, including disruption of the Global Positioning System. The EVE instrument package was built by the University of Colorado Boulder's Laboratory for Atmospheric and Space Physics (LASP), with Dr. Tom Woods as principal investigator, and was delivered to NASA Goddard Space Flight Center on 7 September 2007. The instrument provides improvements of up to 70% in spectral resolution measurements in the wavelengths below 30 nm, and a 30% improvement in time cadence by taking measurements every 10 seconds over a 100% duty cycle. === Atmospheric Imaging Assembly (AIA) === The Atmospheric Imaging Assembly (AIA), led from the Lockheed Martin Solar and Astrophysics Laboratory (LMSAL), provides continuous full-disk observations of the solar chromosphere and corona in seven extreme ultraviolet (EUV) channels, spanning a temperature range from approximately 20,000 kelvin to in excess of 20 million kelvin. The 12-second cadence of the image stream, with 4096 × 4096 pixel images at 0.6 arcsec/pixel, provides unprecedented views of the various phenomena that occur within the evolving solar outer atmosphere. The AIA science investigation is led by LMSAL, which also operates the instrument and – jointly with Stanford University – runs the Joint Science Operations Center, from which all of the data are served to the worldwide scientific community as well as the general public. LMSAL designed the overall instrumentation and led its development and integration. The four telescopes providing the individual light feeds for the instrument were designed and built at the Smithsonian Astrophysical Observatory (SAO). Since beginning its operational phase on 1 May 2010, AIA has operated successfully with unprecedented EUV image quality. Photographs of the Sun in these various regions of the spectrum can be seen at NASA's SDO Data website. Images and movies of the Sun seen on any day of the mission, including within the last half-hour, can be found at The Sun Today. == Communications == SDO downlinks science data (Ka-band) from its two onboard high-gain antennas, and telemetry (S-band) from its two onboard omnidirectional antennas. The ground station consists of two dedicated (redundant) 18-meter radio antennas in White Sands Missile Range, New Mexico, constructed specifically for SDO.
Mission controllers operate the spacecraft remotely from the Mission Operations Center at NASA Goddard Space Flight Center. The combined data rate is about 130 Mbit/s (150 Mbit/s with overhead, or 300 Msymbols/s with rate-1/2 convolutional encoding), and the craft generates approximately 1.5 terabytes of data per day (equivalent to downloading around 500,000 songs). == Launch == NASA's Launch Services Program at Kennedy Space Center managed the payload integration and launch. SDO launched from Cape Canaveral Space Launch Complex 41 (SLC-41) on an Atlas V-401 rocket with an RD-180-powered Common Core Booster, which was developed to meet the Evolved Expendable Launch Vehicle (EELV) program requirements. Sun dog phenomenon: moments after launch, SDO's Atlas V rocket penetrated a cirrus cloud, creating visible shock waves that destroyed the alignment of the ice crystals forming a sun dog visible to onlookers. After launch, the spacecraft was deployed from the Atlas V into an orbit around the Earth with an initial perigee of about 2,500 km (1,600 mi). === Transfer to final orbit === SDO then underwent a series of orbit-raising maneuvers over a few weeks which adjusted its orbit until the spacecraft reached its planned circular, geosynchronous orbit at an altitude of 35,789 km (22,238 mi), at 102° West longitude, inclined at 28.5°. This orbit was chosen to allow 24/7 communications to and from the fixed ground station, and to limit solar eclipses to about an hour a day for only a few weeks a year. == Mission mascot - Camilla == Camilla Corona is a rubber chicken and the mission mascot for SDO. It is part of the education and public outreach team and assists with various functions to help educate the public, mainly children, about the SDO mission, facts about the Sun and space weather. Camilla also assists in cross-informing the public about other NASA missions and space-related projects. Camilla Corona SDO uses social media to interact with fans. == Image gallery == == Stamps == In 2021, the United States Postal Service released a series of forever stamps using images of the Sun taken by the Solar Dynamics Observatory. == See also ==
Heliophysics
Advanced Composition Explorer
Parker Solar Probe
Radiation Belt Storm Probes (Van Allen Probes)
Richard R. Fisher
Solar and Heliospheric Observatory (SOHO)
STEREO (Solar TErrestrial RElations Observatory), launched 2006, 1 of 2 spacecraft still operational
Wind (spacecraft), launched 1994, still operational
List of heliophysics missions
== References == == External links ==
Solar Dynamics Observatory (SDO) mission website
Where is the Solar Dynamics Observatory (SDO) right now?
SDO Outreach Material, HELAS
Inbound SOHO comet disintegrates as seen in SDO AIA images (14 July 2011)
History of SDO patch, Facebook
Sunspot Database based on SDO (HMI) satellite observations from 2010 to the present
Album of images and videos by Seán Doran, based on SDO imagery, and a longer (24 min.) YouTube video: Sun Dance
SDO 5-year timelapse video of the Sun
SDO 10-year timelapse video of the Sun
=== Instruments ===
Extreme Ultraviolet Variability Experiment (EVE) Archived 30 April 2010 at the Wayback Machine, University of Colorado
Atmospheric Imaging Assembly (AIA), Lockheed Martin
Helioseismic and Magnetic Imager (HMI), Stanford
Joint Science Operations Center – Science Data Processing HMI – AIA
Wikipedia/Solar_Dynamics_Observatory
Solar Physics is a peer-reviewed scientific journal published monthly by Springer Science+Business Media. The editors-in-chief are Lidia van Driel-Gesztelyi (various affiliations), Cristina Mandrini (Universidad de Buenos Aires), Iñigo Arregui (Instituto de Astrofísica de Canarias), and Marco Velli. == Scope and history == The focus of this journal is fundamental research on the Sun, and it covers all aspects of solar physics. Topical coverage includes solar-terrestrial physics and stellar research where it pertains to the focus of the journal. Publishing formats include regular manuscripts, invited reviews, invited memoirs, and topical collections. Solar Physics was established in 1967 by solar physicists Cornelis de Jager and Zdeněk Švestka, and publisher D. Reidel. == Abstracting and indexing == This journal is indexed by the following services:
Science Citation Index
Scopus
INSPEC
Chemical Abstracts Service
Current Contents/Physical, Chemical & Earth Sciences
GeoRef
Journal Citation Reports
SIMBAD
== References == == External links == Official website
Wikipedia/Solar_Physics_(journal)
Helioseismology is the study of the structure and dynamics of the Sun through its oscillations. These are principally caused by sound waves that are continuously driven and damped by convection near the Sun's surface. It is similar to geoseismology and asteroseismology, which are respectively the studies of the Earth and of stars through their oscillations. While the Sun's oscillations were first detected in the early 1960s, it was only in the mid-1970s that it was realized that the oscillations propagated throughout the Sun and could allow scientists to study the Sun's deep interior. The term was coined by Douglas Gough in the 1990s. The modern field is separated into global helioseismology, which studies the Sun's resonant modes directly, and local helioseismology, which studies the propagation of the component waves near the Sun's surface. Helioseismology has contributed to a number of scientific breakthroughs. The most notable was to show that the anomaly in the predicted neutrino flux from the Sun could not be caused by flaws in stellar models and must instead be a problem of particle physics. The so-called solar neutrino problem was ultimately resolved by neutrino oscillations. The experimental discovery of neutrino oscillations was recognized by the 2015 Nobel Prize for Physics. Helioseismology also allowed accurate measurements of the quadrupole (and higher-order) moments of the Sun's gravitational potential, which are consistent with general relativity. The first helioseismic calculations of the Sun's internal rotation profile showed a rough separation into a rigidly rotating core and a differentially rotating envelope. The boundary layer is now known as the tachocline and is thought to be a key component of the solar dynamo. Although it roughly coincides with the base of the solar convection zone (also inferred through helioseismology), it is conceptually distinct, being a boundary layer in which there is a meridional flow connected with the convection zone and driven by the interplay between baroclinicity and Maxwell stresses. Helioseismology benefits most from continuous monitoring of the Sun, which began with uninterrupted observations from near the South Pole over the austral summer. In addition, observations over multiple solar cycles have allowed helioseismologists to study changes in the Sun's structure over decades. These studies are made possible by global telescope networks like the Global Oscillations Network Group (GONG) and the Birmingham Solar Oscillations Network (BiSON), which have been operating for several decades. == Types of solar oscillation == Solar oscillation modes are interpreted as resonant vibrations of a roughly spherically symmetric self-gravitating fluid in hydrostatic equilibrium. Each mode can then be represented approximately as the product of a function of radius r {\displaystyle r} and a spherical harmonic Y l m ( θ , ϕ ) {\displaystyle Y_{l}^{m}(\theta ,\phi )} , and consequently can be characterized by three quantum numbers which label: the number of nodal shells in radius, known as the radial order n {\displaystyle n} ; the total number of nodal circles on each spherical shell, known as the angular degree ℓ {\displaystyle \ell } ; and the number of those nodal circles that are longitudinal, known as the azimuthal order m {\displaystyle m} . It can be shown that the oscillations separate into two categories: interior oscillations and a special category of surface oscillations.
More specifically, there are: === Pressure modes (p modes) === Pressure modes are in essence standing sound waves. The dominant restoring force is the pressure (rather than buoyancy), hence the name. All the solar oscillations that are used for inferences about the interior are p modes, with frequencies between about 1 and 5 millihertz and angular degrees ranging from zero (purely radial motion) to order 10 3 {\displaystyle 10^{3}} . Broadly speaking, their energy densities vary with radius in inverse proportion to the sound speed, so their resonant frequencies are determined predominantly by the outer regions of the Sun. Consequently, it is difficult to infer from them the structure of the solar core. === Gravity modes (g modes) === Gravity modes are confined to convectively stable regions, either the radiative interior or the atmosphere. The restoring force is predominantly buoyancy, and thus indirectly gravity, from which they take their name. They are evanescent in the convection zone, and therefore interior modes have tiny amplitudes at the surface and are extremely difficult to detect and identify. It has long been recognized that measurement of even just a few g modes could substantially increase our knowledge of the deep interior of the Sun. However, no individual g mode has yet been unambiguously measured, although indirect detections have been both claimed and challenged. Additionally, there can be similar gravity modes confined to the convectively stable atmosphere. === Surface gravity modes (f modes) === Surface gravity waves are analogous to waves in deep water, having the property that the Lagrangian pressure perturbation is essentially zero. They are of high degree ℓ {\displaystyle \ell } , penetrating a characteristic distance R / ℓ {\displaystyle R/\ell } , where R {\displaystyle R} is the solar radius. To good approximation, they obey the so-called deep-water-wave dispersion law: ω 2 = g k h {\displaystyle \omega ^{2}=gk_{\rm {h}}} , irrespective of the stratification of the Sun, where ω {\displaystyle \omega } is the angular frequency, g {\displaystyle g} is the surface gravity and k h = ℓ / R {\displaystyle k_{\rm {h}}=\ell /R} is the horizontal wavenumber, and tend asymptotically to that relation as k h → ∞ {\displaystyle k_{\rm {h}}\rightarrow \infty } . == What seismology can reveal == The oscillations that have been successfully utilized for seismology are essentially adiabatic. Their dynamics is therefore the action of pressure forces p {\displaystyle p} (plus putative Maxwell stresses) against matter with inertia density ρ {\displaystyle \rho } , which itself depends upon the relation between them under adiabatic change, usually quantified via the (first) adiabatic exponent γ 1 {\displaystyle \gamma _{1}} . The equilibrium values of the variables p {\displaystyle p} and ρ {\displaystyle \rho } (together with the dynamically small angular velocity Ω {\displaystyle \Omega } and magnetic field B {\displaystyle {\rm {B}}} ) are related by the constraint of hydrostatic support, which depends upon the total mass M {\displaystyle M} and radius R {\displaystyle R} of the Sun. Evidently, the oscillation frequencies ω {\displaystyle \omega } depend only on the seismic variables ρ ( p , Ω , B ) {\displaystyle \rho (p,\Omega ,{\rm {B)}}} , γ 1 {\displaystyle \gamma _{1}} , Ω {\displaystyle \Omega } and B {\displaystyle {\rm {B}}} , or any independent set of functions of them. Consequently, it is only about these variables that information can be derived directly. 
The square of the adiabatic sound speed, c 2 = γ 1 p / ρ {\displaystyle c^{2}=\gamma _{1}p/\rho } , is one such commonly adopted function, because that is the quantity upon which acoustic propagation principally depends. Properties of other, non-seismic, quantities, such as helium abundance, Y {\displaystyle Y} , or main-sequence age t ⊙ {\displaystyle t_{\odot }} , can be inferred only by supplementation with additional assumptions, which renders the outcome more uncertain. == Data analysis == === Global helioseismology === The chief tool for analysing the raw seismic data is the Fourier transform. To good approximation, each mode is a damped harmonic oscillator, for which the power as a function of frequency is a Lorentzian function. Spatially resolved data are usually projected onto desired spherical harmonics to obtain time series which are then Fourier transformed. Helioseismologists typically combine the resulting one-dimensional power spectra into a two-dimensional spectrum. The lower frequency range of the oscillations is dominated by the variations caused by granulation. This must first be filtered out before (or at the same time that) the modes are analysed. Granular flows at the solar surface are mostly horizontal, from the centres of the rising granules to the narrow downdrafts between them. Relative to the oscillations, granulation produces a stronger signal in intensity than line-of-sight velocity, so the latter is preferred for helioseismic observatories. === Local helioseismology === Local helioseismology (a term coined by Charles Lindsey, Doug Braun and Stuart Jefferies in 1993) employs several different analysis methods to make inferences from the observational data. The Fourier–Hankel spectral method was originally used to search for wave absorption by sunspots. Ring-diagram analysis, first introduced by Frank Hill, is used to infer the speed and direction of horizontal flows below the solar surface by observing the Doppler shifts of ambient acoustic waves from power spectra of solar oscillations computed over patches of the solar surface (typically 15° × 15°). Thus, ring-diagram analysis is a generalization of global helioseismology applied to local areas on the Sun (as opposed to half of the Sun). For example, the sound speed and adiabatic index can be compared within magnetically active and inactive (quiet Sun) regions. Time-distance helioseismology aims to measure and interpret the travel times of solar waves between any two locations on the solar surface. Inhomogeneities near the ray path connecting the two locations perturb the travel time between those two points. An inverse problem must then be solved to infer the local structure and dynamics of the solar interior. Helioseismic holography, introduced in detail by Charles Lindsey and Doug Braun for the purpose of far-side (magnetic) imaging, is a special case of phase-sensitive holography. The idea is to use the wavefield on the visible disk to learn about active regions on the far side of the Sun. The basic idea in helioseismic holography is that the wavefield, e.g., the line-of-sight Doppler velocity observed at the solar surface, can be used to make an estimate of the wavefield at any location in the solar interior at any instant in time. In this sense, holography is much like seismic migration, a technique in geophysics that has been in use since the 1940s. As another example, this technique has been used to give a seismic image of a solar flare. 
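The travel-time measurement at the heart of the time-distance method described above can be illustrated with a toy computation. The sketch below is purely synthetic: the cadence, the lag, and the white-noise stand-in for the oscillation signal are all invented, and real pipelines fit the shape of the correlation peak rather than taking a bare argmax.

```python
import numpy as np

# Toy illustration of a time-distance travel-time measurement: the lag
# that maximizes the cross-correlation of the signals at two surface
# points is read off as the wave travel time between them.
rng = np.random.default_rng(0)
dt, n = 45.0, 4096                  # 45 s cadence, 4096 samples
lag_true = 12                       # true travel time in samples (540 s)
sig_a = rng.standard_normal(n)      # oscillation signal observed at point A
sig_b = np.roll(sig_a, lag_true)    # same wavefield arriving later at B

corr = np.correlate(sig_b, sig_a, mode="full")
lag = int(np.argmax(corr)) - (n - 1)
print(f"recovered travel time: {lag * dt:.0f} s (true: {lag_true * dt:.0f} s)")
# A subsurface inhomogeneity along the ray path would perturb this travel
# time; many such pair measurements feed the inverse problem described above.
```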
In direct modelling, the idea is to estimate subsurface flows from direct inversion of the frequency-wavenumber correlations seen in the wavefield in the Fourier domain. Woodard demonstrated the ability of the technique to recover near-surface flows from the f modes. == Inversion == === Introduction === The Sun's oscillation modes represent a discrete set of observations that are sensitive to its continuous structure. This allows scientists to formulate inverse problems for the Sun's interior structure and dynamics. Given a reference model of the Sun, the differences between its mode frequencies and those of the Sun, if small, are weighted averages of the differences between the Sun's structure and that of the reference model. The frequency differences can then be used to infer those structural differences. The weighting functions of these averages are known as kernels. === Structure === The first inversions of the Sun's structure were made using Duvall's law and later using Duvall's law linearized about a reference solar model. These results were subsequently supplemented by analyses that linearize the full set of equations describing the stellar oscillations about a theoretical reference model and are now a standard way to invert frequency data. The inversions demonstrated differences in solar models that were greatly reduced by implementing gravitational settling: the gradual separation of heavier elements towards the solar centre (and lighter elements to the surface to replace them). === Rotation === If the Sun were perfectly spherical, the modes with different azimuthal orders m would have the same frequencies. Rotation, however, breaks this degeneracy, and the mode frequencies differ by rotational splittings that are weighted averages of the angular velocity through the Sun. Different modes are sensitive to different parts of the Sun and, given enough data, these differences can be used to infer the rotation rate throughout the Sun. For example, if the Sun were rotating uniformly throughout, all the p modes would be split by approximately the same amount. Actually, the angular velocity is not uniform, as can be seen at the surface, where the equator rotates faster than the poles. The Sun rotates slowly enough that a spherical, non-rotating model is close enough to reality for deriving the rotational kernels. Helioseismology has shown that the Sun has a rotation profile with several features: a rigidly-rotating radiative (i.e. non-convective) zone, though the rotation rate of the inner core is not well known; a thin shear layer, known as the tachocline, which separates the rigidly-rotating interior and the differentially-rotating convective envelope; a convective envelope in which the rotation rate varies both with depth and latitude; and a final shear layer just beneath the surface, in which the rotation rate slows down towards the surface. == Relationship to other fields == === Geoseismology === Helioseismology was born from analogy with geoseismology but several important differences remain. First, the Sun lacks a solid surface and therefore cannot support shear waves. From the data analysis perspective, global helioseismology differs from geoseismology by studying only normal modes. Local helioseismology is thus somewhat closer in spirit to geoseismology in the sense that it studies the complete wavefield. === Asteroseismology === Because the Sun is a star, helioseismology is closely related to the study of oscillations in other stars, known as asteroseismology. 
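(An aside before the asteroseismology comparison continues: the statement above, that rotational splittings are weighted averages of the internal angular velocity and can be inverted given enough modes, can be made concrete with a toy calculation. Every ingredient below, including the kernels, the rotation profile, the noise level, and the regularization, is invented for illustration; real inversions use kernels computed from the oscillation eigenfunctions.)

```python
import numpy as np

# Toy rotation inversion: each "mode" measures a weighted average of the
# internal rotation rate, splitting_i = sum_j K[i, j] * Omega_true[j],
# and a regularized least-squares fit recovers Omega(r) from the data.
rng = np.random.default_rng(1)
nr, nmodes = 50, 200
r = np.linspace(0.05, 1.0, nr)                          # fractional radius
Omega_true = 435.0 + 20.0 * np.tanh((r - 0.7) / 0.05)   # nHz, toy profile

# Invented averaging kernels: mode i senses the region outside r_lower[i].
r_lower = rng.uniform(0.05, 0.9, nmodes)
K = (r[None, :] > r_lower[:, None]).astype(float)
K /= K.sum(axis=1, keepdims=True)          # normalize each weighted average

splittings = K @ Omega_true + rng.normal(0.0, 0.5, nmodes)   # noisy data

# Tikhonov-style inversion penalizing roughness (second differences).
D = np.diff(np.eye(nr), n=2, axis=0)
lam = 1e-3
Omega_fit = np.linalg.solve(K.T @ K + lam * D.T @ D, K.T @ splittings)
print(f"worst recovery error: {np.abs(Omega_fit - Omega_true).max():.1f} nHz")
```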
Helioseismology is most closely related to the study of stars whose oscillations are also driven and damped by their outer convection zones, known as solar-like oscillators, but the underlying theory is broadly the same for other classes of variable star. The principal difference is that oscillations in distant stars cannot be resolved. Because the brighter and darker sectors of the spherical harmonic cancel out, this restricts asteroseismology almost entirely to the study of low degree modes (angular degree ℓ ≤ 3 {\displaystyle \ell \leq 3} ). This makes inversion much more difficult but upper limits can still be achieved by making more restrictive assumptions. == History == Solar oscillations were first observed in the early 1960s as a quasi-periodic intensity and line-of-sight velocity variation with a period of about 5 minutes. Scientists gradually realized that the oscillations might be global modes of the Sun and predicted that the modes would form clear ridges in two-dimensional power spectra. The ridges were subsequently confirmed in observations of high-degree modes in the mid 1970s, and mode multiplets of different radial orders were distinguished in whole-disc observations. At a similar time, Jørgen Christensen-Dalsgaard and Douglas Gough suggested the potential of using individual mode frequencies to infer the interior structure of the Sun. They calibrated solar models against the low-degree data, finding two similarly good fits, one with low Y {\displaystyle Y} and a corresponding low neutrino production rate L ν {\displaystyle L_{\nu }} , the other with higher Y {\displaystyle Y} and L ν {\displaystyle L_{\nu }} ; earlier envelope calibrations against high-degree frequencies preferred the latter, but the results were not wholly convincing. It was not until Tom Duvall and Jack Harvey connected the two extreme data sets by measuring modes of intermediate degree to establish the quantum numbers associated with the earlier observations that the higher- Y {\displaystyle Y} model was established, thereby suggesting at that early stage that the resolution of the neutrino problem must lie in nuclear or particle physics. New methods of inversion were developed in the 1980s, allowing researchers to infer the profiles of sound speed and, less accurately, density throughout most of the Sun, corroborating the conclusion that residual errors in the inference of the solar structure are not the cause of the neutrino problem. Towards the end of the decade, observations also began to show that the oscillation mode frequencies vary with the Sun's magnetic activity cycle. To overcome the problem of not being able to observe the Sun at night, several groups had begun to assemble networks of telescopes (e.g. the Birmingham Solar Oscillations Network, or BiSON, and the Global Oscillation Network Group) from which the Sun would always be visible to at least one node. Long, uninterrupted observations brought the field to maturity, and the state of the field was summarized in a 1996 special issue of Science magazine. This coincided with the start of normal operations of the Solar and Heliospheric Observatory (SoHO), which began producing high-quality data for helioseismology. The subsequent years saw the resolution of the solar neutrino problem, and the long seismic observations began to allow analysis of multiple solar activity cycles. 
The agreement between standard solar models and helioseismic inversions was disrupted by new measurements of the heavy element content of the solar photosphere based on detailed three-dimensional models. Though the results later shifted back towards the traditional values used in the 1990s, the new abundances significantly worsened the agreement between the models and helioseismic inversions. The cause of the discrepancy remains unsolved and is known as the solar abundance problem. Space-based observations by SoHO have continued and SoHO was joined in 2010 by the Solar Dynamics Observatory (SDO), which has also been monitoring the Sun continuously since its operations began. In addition, ground-based networks (notably BiSON and GONG) continue to operate, providing nearly continuous data from the ground too. == See also == == References == == External links == Non-technical description of helio- and asteroseismology retrieved November 2009 Gough, D.O. (2003). "Solar Neutrino Production". Annales Henri Poincaré. 4 (S1): 303–317. Bibcode:2003AnHP....4..303G. doi:10.1007/s00023-003-0924-z. S2CID 195335212. Gizon, Laurent; Birch, Aaron C. (2005). "Local Helioseismology". Living Rev. Sol. Phys. 2 (1): 6. Bibcode:2005LRSP....2....6G. doi:10.12942/lrsp-2005-6. Scientists Issue Unprecedented Forecast of Next Sunspot Cycle National Science Foundation press release, March 6, 2006 Miesch, Mark S. (2005). "Large-Scale Dynamics of the Convection Zone and Tachocline". Living Rev. Sol. Phys. 2 (1): 1. Bibcode:2005LRSP....2....1M. doi:10.12942/lrsp-2005-1. European Helio- and Asteroseismology Network (HELAS) Farside and Earthside images of the Sun Living Reviews in Solar Physics Archived 2010-09-29 at the Wayback Machine Helioseismology and Asteroseismology at MPS === Satellite instruments === VIRGO SOI/MDI SDO/HMI TRACE === Ground-based instruments === BiSON Mark-1 GONG HiDHN == Further reading == Christensen-Dalsgaard, Jørgen. "Lecture notes on stellar oscillations". Archived from the original on 1 July 2018. Retrieved 5 June 2015. Pijpers, Frank P. (2006). Methods in Helio- and Asteroseismology. London: Imperial College Press. ISBN 978-1-8609-4755-1.
Wikipedia/Helioseismography
The standard solar model (SSM) is a mathematical model of the Sun as a spherical ball of gas (in varying states of ionisation, with the hydrogen in the deep interior being a completely ionised plasma). This stellar model, technically the spherically symmetric quasi-static model of a star, has stellar structure described by several differential equations derived from basic physical principles. The model is constrained by boundary conditions, namely the luminosity, radius, age and composition of the Sun, which are well determined. The age of the Sun cannot be measured directly; one way to estimate it is from the age of the oldest meteorites and models of the evolution of the Solar System. The composition in the photosphere of the modern-day Sun, by mass, is 74.9% hydrogen and 23.8% helium. All heavier elements, called metals in astronomy, account for less than 2 percent of the mass. The SSM is used to test the validity of stellar evolution theory. In fact, the only way to determine the two free parameters of the stellar evolution model, the helium abundance and the mixing length parameter (used to model convection in the Sun), is to adjust the SSM to "fit" the observed Sun. == A calibrated solar model == A star is considered to be at zero age (protostellar) when it is assumed to have a homogeneous composition and to be just beginning to derive most of its luminosity from nuclear reactions (so neglecting the period of contraction from a cloud of gas and dust). To obtain the SSM, a one solar mass (M☉) stellar model at zero age is evolved numerically to the age of the Sun. The abundance of elements in the zero age solar model is estimated from primordial meteorites. Along with this abundance information, a reasonable guess at the zero-age luminosity (such as the present-day Sun's luminosity) is then converted by an iterative procedure into the correct value for the model, and the temperature, pressure and density throughout the model calculated by solving the equations of stellar structure numerically assuming the star to be in a steady state. The model is then evolved numerically up to the age of the Sun. Any discrepancy from the measured values of the Sun's luminosity, surface abundances, etc. can then be used to refine the model. For example, since the Sun formed, some of the helium and heavy elements have settled out of the photosphere by diffusion. As a result, the Solar photosphere now contains about 87% as much helium and heavy elements as the protostellar photosphere had; the protostellar Solar photosphere was 71.1% hydrogen, 27.4% helium, and 1.5% metals. A measure of heavy-element settling by diffusion is required for a more accurate model. == Numerical modelling of the stellar structure equations == The differential equations of stellar structure, such as the equation of hydrostatic equilibrium, are integrated numerically. The differential equations are approximated by difference equations. The star is imagined to be made up of spherically symmetric shells and the numerical integration carried out in finite steps making use of the equations of state, giving relationships for the pressure, the opacity and the energy generation rate in terms of the density, temperature and composition. == Evolution of the Sun == Nuclear reactions in the core of the Sun change its composition, by converting hydrogen nuclei into helium nuclei by the proton–proton chain and (to a lesser extent in the Sun than in more massive stars) the CNO cycle. 
This increases the mean molecular weight in the core of the Sun, which should lead to a decrease in pressure. This does not happen; instead, the core contracts. By the virial theorem half of the gravitational potential energy released by this contraction goes towards raising the temperature of the core, and the other half is radiated away. This increase in temperature also increases the pressure and restores the balance of hydrostatic equilibrium. The luminosity of the Sun is increased by the temperature rise, increasing the rate of nuclear reactions. The outer layers expand to compensate for the increased temperature and pressure gradients, so the radius also increases. No star is completely static, but stars stay on the main sequence (burning hydrogen in the core) for long periods. In the case of the Sun, it has been on the main sequence for roughly 4.6 billion years, and will become a red giant in roughly 6.5 billion years, for a total main-sequence lifetime of roughly 11 billion (about 10^10) years. Thus the assumption of steady state is a very good approximation. For simplicity, the stellar structure equations are written without explicit time dependence, with the exception of the luminosity gradient equation: d L d r = 4 π r 2 ρ ( ε − ε ν ) {\displaystyle {\frac {dL}{dr}}=4\pi r^{2}\rho \left(\varepsilon -\varepsilon _{\nu }\right)} Here L is the luminosity, ε is the nuclear energy generation rate per unit mass and εν is the luminosity due to neutrino emission (see below for the other quantities). The slow evolution of the Sun on the main sequence is then determined by the change in the nuclear species (principally hydrogen being consumed and helium being produced). The rates of the various nuclear reactions are estimated from particle physics experiments at high energies, which are extrapolated back to the lower energies of stellar interiors (the Sun burns hydrogen rather slowly). Historically, errors in the nuclear reaction rates have been one of the biggest sources of error in stellar modelling. Computers are employed to calculate the varying abundances (usually by mass fraction) of the nuclear species. A particular species will have a rate of production and a rate of destruction, so both are needed to calculate its abundance over time, at varying conditions of temperature and density. Since there are many nuclear species, a computerised reaction network is needed to keep track of how all the abundances vary together. According to the Vogt–Russell theorem, the mass and the composition structure throughout a star uniquely determine its radius, luminosity, and internal structure, as well as its subsequent evolution (though this "theorem" was only intended to apply to the slow, stable phases of stellar evolution and certainly does not apply to the transitions between stages and rapid evolutionary stages). The information about the varying abundances of nuclear species over time, along with the equations of state, is sufficient for a numerical solution by taking sufficiently small time increments and using iteration to find the unique internal structure of the star at each stage. 
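To make the discretization concrete, here is a minimal sketch of how the luminosity-gradient equation above can be accumulated shell by shell. The density and energy-generation profiles are invented toy functions rather than a calibrated solar model, so the integrated luminosity only lands within an order of magnitude of the real Sun.

```python
import numpy as np

# Shell-by-shell integration of dL/dr = 4*pi*r^2*rho*(eps - eps_nu),
# in cgs units, with invented toy profiles for rho and eps.
R_sun = 6.96e10                          # solar radius, cm
r = np.linspace(1e-4, 1.0, 200_000) * R_sun
dr = r[1] - r[0]
x = r / R_sun
rho = 150.0 * np.exp(-8.0 * x)           # g/cm^3, toy density run
eps = 17.0 * np.exp(-20.0 * x)           # erg/(g s), toy nuclear rate
eps_nu = 0.02 * eps                      # small neutrino-loss term

dL_dr = 4.0 * np.pi * r**2 * rho * (eps - eps_nu)
L_surface = np.sum(dL_dr) * dr           # sum the shell contributions
print(f"toy surface luminosity: {L_surface:.2e} erg/s (Sun: ~3.8e33 erg/s)")
```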
== Purpose of the standard solar model == The SSM serves two purposes: it provides estimates for the helium abundance and mixing length parameter by forcing the stellar model to have the correct luminosity and radius at the Sun's age; and it provides a way to evaluate more complex models with additional physics, such as rotation, magnetic fields and diffusion or improvements to the treatment of convection, such as modelling turbulence, and convective overshooting. Like the Standard Model of particle physics and the standard cosmology model, the SSM changes over time in response to relevant new theoretical or experimental physics discoveries. == Energy transport in the Sun == The Sun has a radiative core and a convective outer envelope. In the core, the luminosity due to nuclear reactions is transmitted to outer layers principally by radiation. However, in the outer layers the temperature gradient is so great that radiation cannot transport enough energy. As a result, thermal convection occurs as thermal columns carry hot material to the surface (photosphere) of the Sun. Once the material cools off at the surface, it plunges back downward to the base of the convection zone, to receive more heat from the top of the radiative zone. In a solar model, as described in stellar structure, one considers the density ρ ( r ) {\displaystyle \rho (r)} , temperature T(r), total pressure (matter plus radiation) P(r), luminosity l(r) and energy generation rate per unit mass ε(r) in a spherical shell of a thickness dr at a distance r from the center of the star. Radiative transport of energy is described by the radiative temperature gradient equation: d T d r = − 3 κ ρ l 16 π r 2 σ T 3 , {\displaystyle {dT \over dr}=-{3\kappa \rho l \over 16\pi r^{2}\sigma T^{3}},} where κ is the opacity of the matter, σ is the Stefan–Boltzmann constant, and the Boltzmann constant is set to one. Convection is described using mixing length theory and the corresponding temperature gradient equation (for adiabatic convection) is: d T d r = ( 1 − 1 γ ) T P d P d r , {\displaystyle {dT \over dr}=\left(1-{1 \over \gamma }\right){T \over P}{dP \over dr},} where γ = cp / cv is the adiabatic index, the ratio of specific heats in the gas. (For a fully ionized ideal gas, γ = 5/3.) Near the base of the Sun's convection zone, the convection is adiabatic, but near the surface of the Sun, convection is not adiabatic. == Simulations of near-surface convection == A more realistic description of the uppermost part of the convection zone is possible through detailed three-dimensional and time-dependent hydrodynamical simulations, taking into account radiative transfer in the atmosphere. Such simulations successfully reproduce the observed surface structure of solar granulation, as well as detailed profiles of lines in the solar radiative spectrum, without the use of parametrized models of turbulence. The simulations only cover a very small fraction of the solar radius, and are evidently far too time-consuming to be included in general solar modeling. Extrapolation of an averaged simulation through the adiabatic part of the convection zone by means of a model based on the mixing-length description demonstrated that the adiabat predicted by the simulation was essentially consistent with the depth of the solar convection zone as determined from helioseismology. An extension of mixing-length theory, including effects of turbulent pressure and kinetic energy, based on numerical simulations of near-surface convection, has been developed. 
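A solar-model code has to choose, shell by shell, between the radiative and adiabatic temperature gradients given above. A minimal sketch of that selection rule (the Schwarzschild criterion) follows; the function signature is an invention of this sketch, the units are cgs, and the shell values in the demonstration call are made up rather than taken from a real solar profile.

```python
import math

SIGMA = 5.67e-5   # Stefan-Boltzmann constant, cgs

def temperature_gradient(r, T, P, dP_dr, rho, l, kappa, gamma=5.0 / 3.0):
    """Return dT/dr for one shell, choosing radiative or adiabatic transport."""
    grad_rad = -3.0 * kappa * rho * l / (16.0 * math.pi * r**2 * SIGMA * T**3)
    grad_ad = (1.0 - 1.0 / gamma) * (T / P) * dP_dr
    # Both gradients are negative. If the radiative gradient is the steeper
    # (more negative) one, the layer is convectively unstable and the
    # realized gradient is the adiabatic one; otherwise radiation suffices.
    return max(grad_rad, grad_ad)

# Demonstration with invented shell values, not a real solar profile:
print(temperature_gradient(r=3.5e10, T=2.0e6, P=1.0e13,
                           dP_dr=-1.0e3, rho=0.2, l=3.8e33, kappa=1.0))
```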
The preceding section on near-surface convection is adapted from the Christensen-Dalsgaard review of helioseismology, Chapter IV. == Equations of state == The numerical solution of the differential equations of stellar structure requires equations of state for the pressure, opacity and energy generation rate, as described in stellar structure, which relate these variables to the density, temperature and composition. == Helioseismology == Helioseismology is the study of the wave oscillations in the Sun. Changes in the propagation of these waves through the Sun reveal inner structures and allow astrophysicists to develop extremely detailed profiles of the interior conditions of the Sun. In particular, the location of the convection zone in the outer layers of the Sun can be measured, and information about the core of the Sun provides a method, using the SSM, to calculate the age of the Sun, independently of the method of inferring the age of the Sun from that of the oldest meteorites. This is another example of how the SSM can be refined. == Neutrino production == Hydrogen is fused into helium through several different interactions in the Sun. The vast majority of neutrinos are produced through the pp chain, a process in which four protons are combined to produce a helium nucleus (two protons and two neutrons), two positrons, and two electron neutrinos. Neutrinos are also produced by the CNO cycle, but that process is considerably less important in the Sun than in other stars. Most of the neutrinos produced in the Sun come from the first step of the pp chain, but their energy is so low (<0.425 MeV) that they are very difficult to detect. A rare side branch of the pp chain produces the "boron-8" neutrinos with a maximum energy of roughly 15 MeV, and these are the easiest neutrinos to detect. A very rare interaction in the pp chain produces the "hep" neutrinos, the highest energy neutrinos predicted to be produced by the Sun. They are predicted to have a maximum energy of about 18 MeV. All of the interactions described above produce neutrinos with a spectrum of energies. The electron capture of 7Be produces neutrinos at either roughly 0.862 MeV (~90%) or 0.384 MeV (~10%). == Neutrino detection == The weakness of the neutrino's interactions with other particles means that most neutrinos produced in the core of the Sun can pass all the way through the Sun without being absorbed. It is possible, therefore, to observe the core of the Sun directly by detecting these neutrinos. === History === The first experiment to successfully detect cosmic neutrinos was Ray Davis's chlorine experiment, in which neutrinos were detected by observing the conversion of chlorine nuclei to radioactive argon in a large tank of perchloroethylene. This was a reaction channel expected for neutrinos, but since only the number of argon decays was counted, it did not give any directional information, such as where the neutrinos came from. The experiment found about 1/3 as many neutrinos as were predicted by the standard solar model of the time, and this problem became known as the solar neutrino problem. While it is now known that the chlorine experiment detected neutrinos, some physicists at the time were suspicious of the experiment, mainly because they did not trust such radiochemical techniques. Unambiguous detection of solar neutrinos was provided by the Kamiokande-II experiment, a water Cherenkov detector with a low enough energy threshold to detect neutrinos through neutrino-electron elastic scattering. 
In the elastic scattering interaction the electrons coming out of the point of reaction strongly point in the direction that the neutrino was travelling, away from the Sun. This ability to "point back" at the Sun was the first conclusive evidence that the Sun is powered by nuclear interactions in the core. While the neutrinos observed in Kamiokande-II were clearly from the Sun, the rate of neutrino interactions was again suppressed compared to theory at the time. Even worse, the Kamiokande-II experiment measured about 1/2 the predicted flux, rather than the chlorine experiment's 1/3. The solution to the solar neutrino problem was finally experimentally determined by the Sudbury Neutrino Observatory (SNO). The radiochemical experiments were only sensitive to electron neutrinos, and the signal in the water Cherenkov experiments was dominated by the electron neutrino signal. The SNO experiment, by contrast, had sensitivity to all three neutrino flavours. By simultaneously measuring the electron neutrino and total neutrino fluxes the experiment demonstrated that the suppression was due to the MSW effect, the conversion of electron neutrinos from their pure flavour state into the second neutrino mass eigenstate as they passed through a resonance due to the changing density of the Sun. The resonance is energy dependent, and "turns on" near 2 MeV. The water Cherenkov detectors only detect neutrinos above about 5 MeV, while the radiochemical experiments were sensitive to lower energy (0.8 MeV for chlorine, 0.2 MeV for gallium), and this turned out to be the source of the difference in the observed neutrino rates at the two types of experiments. === Proton–proton chain === All neutrinos from the proton–proton chain reaction (pp neutrinos) have been detected except hep neutrinos (see below). Three techniques have been adopted: The radiochemical technique, used by Homestake, GALLEX, GNO and SAGE, allowed the neutrino flux above a minimum energy to be measured. The detector SNO used scattering on deuterium, which allowed the energy of the events to be measured, thereby identifying the individual components of the predicted SSM neutrino emission. Finally, Kamiokande, Super-Kamiokande, SNO, Borexino and KamLAND used elastic scattering on electrons, which allows the measurement of the neutrino energy. Boron-8 neutrinos have been seen by Kamiokande, Super-Kamiokande, SNO, Borexino, and KamLAND. Beryllium-7, pep, and pp neutrinos have been seen only by Borexino to date. === HEP neutrinos === The highest energy neutrinos have not yet been observed due to their small flux compared to the boron-8 neutrinos, so thus far only limits have been placed on the flux. No experiment yet has had enough sensitivity to observe the flux predicted by the SSM. === CNO cycle === Neutrinos from the CNO cycle of solar energy generation – i.e., the CNO-neutrinos – are also expected to provide observable events below 1 MeV. For a long time they went undetected due to experimental noise (background). Ultra-pure scintillator detectors have the potential to probe the flux predicted by the SSM. This detection could be possible already in Borexino; the next scientific occasions will be in SNO+ and, in the longer term, in LENA and JUNO, three detectors that will be larger but will use the same principles as Borexino. The Borexino Collaboration has confirmed that the CNO cycle accounts for 1% of the energy generation within the Sun's core. 
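The interplay between branch energies and detector thresholds quoted above can be summarized with a few lines of bookkeeping. The numbers are the round values from this article (the beryllium-7 entry is a discrete line at 0.862 MeV rather than a continuum endpoint), and the comparison is only an illustration, not an experimental statement.

```python
# Which solar-neutrino branches lie above each detection threshold (MeV),
# using the figures quoted in this article.
branch_energy = {"pp": 0.425, "Be7": 0.862, "B8": 15.0, "hep": 18.0}
thresholds = {"water Cherenkov": 5.0, "chlorine": 0.8, "gallium": 0.2}

for detector, thr in thresholds.items():
    visible = [b for b, e in branch_energy.items() if e > thr]
    print(f"{detector:>15s} (> {thr} MeV): {', '.join(visible)}")
# Reproduces the pattern described above: the water Cherenkov detectors see
# only the rare high-energy branches, while gallium reaches the pp neutrinos.
```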
=== Future experiments === While radiochemical experiments have in some sense observed the pp and Be7 neutrinos, they have measured only integral fluxes. The "holy grail" of solar neutrino experiments would detect the Be7 neutrinos with a detector that is sensitive to the individual neutrino energies. This experiment would test the MSW hypothesis by searching for the turn-on of the MSW effect. Some exotic models are still capable of explaining the solar neutrino deficit, so the observation of the MSW turn-on would, in effect, finally solve the solar neutrino problem. == Core temperature prediction == The flux of boron-8 neutrinos is highly sensitive to the temperature of the core of the Sun, ϕ ( B 8 ) ∝ T 25 {\displaystyle \phi ({\ce {^8B}})\propto T^{25}} . For this reason, a precise measurement of the boron-8 neutrino flux can be used in the framework of the standard solar model as a measurement of the temperature of the core of the Sun. This estimate was performed by Fiorentini and Ricci after the first SNO results were published, and they obtained a temperature of T sun = 15.7 × 10 6 K ± 1 % {\displaystyle T_{\text{sun}}=15.7\times 10^{6}\;{\text{K}}\;\pm 1\%} from a determined neutrino flux of 5.2×10⁶ cm⁻² s⁻¹. == Lithium depletion at the solar surface == Stellar models of the Sun's evolution predict the solar surface chemical abundances well, except for lithium (Li). The surface abundance of Li on the Sun is 140 times less than the protosolar value (i.e. the primordial abundance at the Sun's birth), yet the temperature at the base of the surface convective zone is not hot enough to burn – and hence deplete – Li. This is known as the solar lithium problem. A large range of Li abundances is observed in solar-type stars of the same age, mass, and metallicity as the Sun. Observations of an unbiased sample of stars of this type with or without observed planets (exoplanets) showed that the known planet-bearing stars have less than one per cent of the primordial Li abundance, and, of the remainder, half had ten times as much Li. It is hypothesised that the presence of planets may increase the amount of mixing and deepen the convective zone to such an extent that the Li can be burned. A possible mechanism for this is the idea that the planets affect the angular momentum evolution of the star, thus changing the rotation of the star relative to similar stars without planets; in the case of the Sun, slowing its rotation. More research is needed to discover where and when the fault in the modelling lies. Given the precision of helioseismic probes of the interior of the modern-day Sun, it is likely that the modelling of the protostellar Sun needs to be adjusted. == See also == Protostar == References == == External links == Description of the SSM by David Guenther Solar Models: An Historical Overview by John N. Bahcall
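As a numerical footnote to the core-temperature estimate above: because the boron-8 flux scales as roughly the 25th power of the core temperature, a fractional error in the measured flux shrinks by a factor of 25 when propagated to temperature. The sketch below simply inverts that scaling around the values quoted in the text; it is not a reconstruction of Fiorentini and Ricci's actual analysis.

```python
# Invert phi ~ T^25 around the reference values quoted above:
# a flux of 5.2e6 /(cm^2 s) corresponding to T = 15.7e6 K.
T_REF = 15.7e6      # K
PHI_REF = 5.2e6     # boron-8 neutrinos / (cm^2 s)

def core_temperature(phi):
    return T_REF * (phi / PHI_REF) ** (1.0 / 25.0)

# A 10% flux error maps to only a ~0.4% temperature error (10% / 25):
for phi in (0.9 * PHI_REF, PHI_REF, 1.1 * PHI_REF):
    print(f"phi = {phi:.2e} -> T = {core_temperature(phi):.4e} K")
```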
Wikipedia/Standard_solar_model
Five Equations That Changed the World: The Power and Poetry of Mathematics is a book by Michael Guillen, published in 1995. It is divided into five chapters that discuss five different equations in physics and the people who developed them. The book is a light study in science and history, portraying the preludes to and times and settings of discoveries that have been the basis of further development, including space travel, flight and nuclear power. Each chapter of the book is divided into sections titled Veni, Vidi, Vici. Reviews of the book have been mixed. Publishers Weekly called it "wholly accessible, beautifully written", Kirkus Reviews wrote that it is a "crowd-pleasing kind of book designed to make the science as palatable as possible", and Frank Mahnke wrote that Guillen "has a nice touch for the history of mathematics and physics and their impact on the world". In contrast, Charles Stephens panned "the superficiality of the author's treatment of scientific ideas", and the editors of The Capital Times called the book a "miserable failure" at its goal of helping the public appreciate the beauty of mathematics. == Equations == The scientists and their equations are: Isaac Newton and the law of universal gravity: F = G × M × m ÷ d 2 {\displaystyle F=G\times M\times m\div d^{2}} Daniel Bernoulli and the law of hydrodynamic pressure: P + ρ × 1 2 v 2 = CONSTANT {\displaystyle P+\rho \times {\tfrac {1}{2}}v^{2}={\text{CONSTANT}}} Michael Faraday and the law of electromagnetic induction: ∇ × E = − ∂ B / ∂ t {\displaystyle \nabla \times \mathbf {E} =-\partial \mathbf {B} /\partial t} Rudolf Clausius and the second law of thermodynamics: Δ S universe > 0 {\displaystyle \Delta S_{\text{universe}}>0} Albert Einstein and the theory of special relativity: E = m × c 2 {\displaystyle E=m\times c^{2}} == References == "Nonfiction book review: Five Equations That Changed the World", Publishers Weekly, September 4, 1995 McWilliams, Brendan (October 24, 1995), "The elation of the equation", Irish Times, archived from the original on July 14, 2014 "Math's beauty hard to show in words", The Capital Times, March 15, 1996, archived from the original on July 14, 2014
Wikipedia/Five_Equations_That_Changed_the_World
An inexact differential or imperfect differential is a differential whose integral is path dependent. It is most often used in thermodynamics to express changes in path-dependent quantities such as heat and work, but is defined more generally within mathematics as a type of differential form. In contrast, an integral of an exact differential is always path independent since the integral acts to invert the differential operator. Consequently, a quantity with an inexact differential cannot be expressed as a function of only the variables within the differential. That is, its value cannot be inferred just by looking at the initial and final states of a given system. Inexact differentials are primarily used in calculations involving heat and work because they are path functions, not state functions. == Definition == An inexact differential δ u {\displaystyle \delta u} is a differential for which the integral over two paths with the same end points is different. Specifically, there exist integrable paths γ 1 , γ 2 : [ 0 , 1 ] → R {\displaystyle \gamma _{1},\gamma _{2}:[0,1]\to \mathbb {R} } such that γ 1 ( 0 ) = γ 2 ( 0 ) {\displaystyle \gamma _{1}(0)=\gamma _{2}(0)} , γ 1 ( 1 ) = γ 2 ( 1 ) {\displaystyle \gamma _{1}(1)=\gamma _{2}(1)} and ∫ γ 1 δ u ≠ ∫ γ 2 δ u {\displaystyle \int _{\gamma _{1}}\delta u\not =\int _{\gamma _{2}}\delta u} In this case, we denote the integrals as Δ u | γ 1 {\displaystyle \Delta u|_{\gamma _{1}}} and Δ u | γ 2 {\displaystyle \Delta u|_{\gamma _{2}}} respectively to make explicit the path dependence of the change of the quantity we are considering as u {\displaystyle u} . More generally, an inexact differential δ u {\displaystyle \delta u} is a differential form which is not an exact differential, i.e., for all functions f {\displaystyle f} , d f ≠ δ u {\displaystyle \mathrm {d} f\neq \delta u} The fundamental theorem of calculus for line integrals requires path independence in order to express the values of a given vector field in terms of the partial derivatives of another function that is the multivariate analogue of the antiderivative. This is because there can be no unique representation of an antiderivative for inexact differentials since their variation is inconsistent along different paths. This stipulation of path independence is an unnecessary addendum to the fundamental theorem of calculus because in one-dimensional calculus there is only one path between two points defined by a function. == Notation == === Thermodynamics === Instead of the differential symbol d, the symbol δ is used, a convention which originated in the 19th-century work of German mathematician Carl Gottfried Neumann, indicating that Q (heat) and W (work) are path dependent, while U (internal energy) is not. === Statistical mechanics === Within statistical mechanics, inexact differentials are often denoted with a bar through the differential operator, đ. In LaTeX the command \rlap{\textrm{d}}{\bar{\phantom{w}}} gives an approximation, or simply \dj (a dyet character, which requires the T1 encoding) can be used. === Mathematics === Within mathematics, inexact differentials are usually just referred to more generally as differential forms, which are often written as ω {\displaystyle \omega } . 
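Before the examples, here is a small symbolic check of the definition for one-forms on the plane: M dx + N dy is exact precisely when ∂M/∂y = ∂N/∂x (the plane is simply connected, so this closedness test suffices). The two forms tested are the ones that appear in the worked example later in this article; the helper name is an invention of this sketch.

```python
import sympy as sp

x, y = sp.symbols('x y')

def is_exact(M, N):
    """Exactness test for M dx + N dy on the plane: dM/dy == dN/dx."""
    return sp.simplify(sp.diff(M, y) - sp.diff(N, x)) == 0

print(is_exact(2 * y, x))          # False: 2y dx + x dy is inexact
print(is_exact(2 * x * y, x**2))   # True:  x*(2y dx + x dy) = d(x^2 y)
```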
== Examples == === Total distance === When you walk from a point A {\displaystyle A} to a point B {\displaystyle B} along a line A B ¯ {\displaystyle {\overline {AB}}} (without changing directions) your net displacement and total distance covered are both equal to the length of said line A B {\displaystyle AB} . If you then return to point A {\displaystyle A} (without changing directions) then your net displacement is zero while your total distance covered is 2 A B {\displaystyle 2AB} . This example captures the essential idea behind the inexact differential in one dimension. Note that if we allowed ourselves to change directions, then we could take a step forward and then backward at any point in time in going from A {\displaystyle A} to B {\displaystyle B} and in so doing increase the overall distance covered to an arbitrarily large number while keeping the net displacement constant. Reworking the above with differentials and taking A B ¯ {\displaystyle {\overline {AB}}} to be along the x {\displaystyle x} -axis, the net distance differential is d f = d x {\displaystyle \mathrm {d} f=\mathrm {d} x} , an exact differential with antiderivative x {\displaystyle x} . On the other hand, the total distance differential is | d x | {\displaystyle |\mathrm {d} x|} , which does not have an antiderivative. The path taken is γ : [ 0 , 1 ] → A B ¯ {\displaystyle \gamma :[0,1]\to {\overline {AB}}} where there exists a time t ∈ ( 0 , 1 ) {\displaystyle t\in (0,1)} such that γ {\displaystyle \gamma } is strictly increasing before t {\displaystyle t} and strictly decreasing afterward. Then d x {\displaystyle \mathrm {d} x} is positive before t {\displaystyle t} and negative afterward, yielding the integrals, Δ f = ∫ γ d x = 0 {\displaystyle \Delta f=\int _{\gamma }\mathrm {d} x=0} Δ g | γ = ∫ γ | d x | = ∫ A B d x + ∫ B A ( − d x ) = 2 ∫ A B d x = 2 A B {\displaystyle \Delta g|_{\gamma }=\int _{\gamma }|\mathrm {d} x|=\int _{A}^{B}\mathrm {d} x+\int _{B}^{A}(-\mathrm {d} x)=2\int _{A}^{B}\mathrm {d} x=2AB} exactly the results we expected from the verbal argument before. === First law of thermodynamics === Inexact differentials show up explicitly in the first law of thermodynamics, d U = δ Q − δ W {\displaystyle \mathrm {d} U=\delta Q-\delta W} where U {\displaystyle U} is the energy, δ Q {\displaystyle \delta Q} is the differential change in heat and δ W {\displaystyle \delta W} is the differential change in work. Based on the constants of the thermodynamic system, we are able to parameterize the average energy in several different ways. E.g., in the first stage of the Carnot cycle a gas is heated by a reservoir, giving us an isothermal expansion of that gas. Some differential amount of heat δ Q = T d S {\displaystyle \delta Q=TdS} enters the gas. During the second stage, the gas expands adiabatically, doing some differential amount of work δ W = P d V {\displaystyle \delta W=PdV} on its surroundings. The third stage is similar to the first stage, except the heat is lost by contact with a cold reservoir, while the fourth stage is like the second except that work is done on the system by the surroundings to compress the gas. Because the overall changes in heat and work are different over different parts of the cycle, there is a nonzero net change in the heat and work, indicating that the differentials δ Q {\displaystyle \delta Q} and δ W {\displaystyle \delta W} must be inexact differentials. 
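A toy numerical cycle makes the preceding point explicit. The sketch below takes a monatomic ideal gas (internal energy U = 3/2 PV, in invented units) around a rectangle in the P-V plane: the state function U returns exactly to its starting value, while the accumulated heat and work are nonzero and equal.

```python
# Rectangular cycle in the P-V plane for a monatomic ideal gas (toy units).
corners = [(1.0, 1.0), (1.0, 2.0), (2.0, 2.0), (2.0, 1.0), (1.0, 1.0)]  # (P, V)

def U(P, V):
    return 1.5 * P * V   # internal energy: a state function of (P, V)

dU_total = W_total = Q_total = 0.0
for (P1, V1), (P2, V2) in zip(corners, corners[1:]):
    W = 0.5 * (P1 + P2) * (V2 - V1)   # integral of P dV, exact on straight legs
    dU = U(P2, V2) - U(P1, V1)
    Q = dU + W                        # first law: dU = delta_Q - delta_W
    dU_total += dU
    W_total += W
    Q_total += Q

print(f"net dU = {dU_total:.2f}, net W = {W_total:.2f}, net Q = {Q_total:.2f}")
# net dU vanishes around the closed cycle, but net Q = net W != 0, so
# delta_Q and delta_W cannot be differentials of any state function.
```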
Internal energy U is a state function, meaning its change can be inferred just by comparing two different states of the system (independently of its transition path), which we can therefore indicate with U1 and U2. Since we can go from state U1 to state U2 either by providing heat Q = U2 − U1 or work W = U2 − U1, such a change of state does not uniquely identify the amount of work W done to the system or heat Q transferred, but only the change in internal energy ΔU. === Heat and work === A fire requires heat, fuel, and an oxidizing agent. The energy required to overcome the activation energy barrier for combustion is transferred as heat into the system, resulting in changes to the system's internal energy. In a process, the energy input to start a fire may comprise both work and heat, such as when one rubs tinder (work) and experiences friction (heat) to start a fire. The ensuing combustion is highly exothermic, which releases heat. The overall change in internal energy does not reveal the mode of energy transfer and quantifies only the net work and heat. The difference between initial and final states of the system's internal energy does not account for the extent of the energy interactions that transpired. Therefore, internal energy is a state function (i.e. an exact differential), while heat and work are path functions (i.e. inexact differentials) because integration must account for the path taken. == Integrating factors == It is sometimes possible to convert an inexact differential into an exact one by means of an integrating factor. The most common example of this in thermodynamics is the definition of entropy: d S = δ Q rev T {\displaystyle \mathrm {d} S={\frac {\delta Q_{\text{rev}}}{T}}} In this case, δQ is an inexact differential, because its effect on the state of the system can be compensated by δW. However, when divided by the absolute temperature and when the exchange occurs at reversible conditions (therefore the rev subscript), it produces an exact differential: the entropy S is also a state function. === Example === Consider the inexact differential form, δ u = 2 y d x + x d y . {\displaystyle \delta u=2y\,\mathrm {d} x+x\,\mathrm {d} y.} That this is inexact can be seen by integrating to the point (1,1) along two different paths. If we first increase y and then increase x, then that corresponds to first integrating over y and then over x. Integrating over y first contributes ∫ 0 1 x d y | x = 0 = 0 {\textstyle \int _{0}^{1}x\,dy|_{x=0}=0} and then integrating over x contributes ∫ 0 1 2 y d x | y = 1 = 2 {\textstyle \int _{0}^{1}2y\,\mathrm {\mathrm {d} } x|_{y=1}=2} . Thus, along the first path we get a value of 2. However, along the second path we get a value of ∫ 0 1 2 y d x | y = 0 + ∫ 0 1 x d y | x = 1 = 1 {\textstyle \int _{0}^{1}2y\,\mathrm {d} x|_{y=0}+\int _{0}^{1}x\,\mathrm {d} y|_{x=1}=1} . We can make δ u {\displaystyle \delta u} an exact differential by multiplying it by x, yielding x δ u = 2 x y d x + x 2 d y = d ( x 2 y ) . {\displaystyle x\,\delta u=2xy\,\mathrm {d} x+x^{2}\,\mathrm {d} y=\mathrm {d} (x^{2}y).} And so x δ u {\displaystyle x\,\delta u} is an exact differential. 
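The two path integrals in this example can also be checked numerically. The sketch below uses simple midpoint quadrature along the two corner paths from (0,0) to (1,1); the quadrature scheme and the path parametrization are incidental choices of the sketch, not part of the example itself.

```python
import numpy as np

def line_integral(path, form, n=10_000):
    """Integrate form(x, y, dx, dy) along path(t) for t in [0, 1]."""
    t = np.linspace(0.0, 1.0, n + 1)
    x, y = path(t)
    xm, ym = 0.5 * (x[1:] + x[:-1]), 0.5 * (y[1:] + y[:-1])  # midpoints
    return float(np.sum(form(xm, ym, np.diff(x), np.diff(y))))

delta_u = lambda x, y, dx, dy: 2 * y * dx + x * dy          # inexact form
x_delta_u = lambda x, y, dx, dy: x * (2 * y * dx + x * dy)  # exact form

# Path 1 increases y first, then x; path 2 increases x first, then y.
path1 = lambda t: (np.clip(2 * t - 1, 0, 1), np.clip(2 * t, 0, 1))
path2 = lambda t: (np.clip(2 * t, 0, 1), np.clip(2 * t - 1, 0, 1))

for form, name in ((delta_u, "delta u"), (x_delta_u, "x delta u")):
    print(f"{name}: path 1 -> {line_integral(path1, form):.4f}, "
          f"path 2 -> {line_integral(path2, form):.4f}")
# delta u yields 2 and 1 (path dependent); x delta u yields 1 on both
# paths, matching d(x^2 y) evaluated between the endpoints.
```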
== See also == Closed and exact differential forms for a higher-level treatment Differential (mathematics) Exact differential Exact differential equation Integrating factor for solving non-exact differential equations by making them exact Conservative vector field == References == == External links == Inexact Differential – from Wolfram MathWorld Exact and Inexact Differentials – University of Arizona Exact and Inexact Differentials – University of Texas Exact Differential – from Wolfram MathWorld
Wikipedia/Inexact_differential
In the theory of general relativity, a stress–energy–momentum pseudotensor, such as the Landau–Lifshitz pseudotensor, is an extension of the non-gravitational stress–energy tensor that incorporates the energy–momentum of gravity. It allows the energy–momentum of a system of gravitating matter to be defined. In particular, it allows the total of matter plus the gravitating energy–momentum to form a conserved current within the framework of general relativity, so that the total energy–momentum crossing the hypersurface (3-dimensional boundary) of any compact space–time hypervolume (4-dimensional submanifold) vanishes. Some people (such as Erwin Schrödinger) have objected to this derivation on the grounds that pseudotensors are inappropriate objects in general relativity, but the conservation law only requires the use of the 4-divergence of a pseudotensor which is, in this case, a tensor (which also vanishes). Mathematical developments in the 1980s have allowed pseudotensors to be understood as sections of jet bundles, thus providing a firm theoretical foundation for the concept of pseudotensors in general relativity. == Landau–Lifshitz pseudotensor == The Landau–Lifshitz pseudotensor, a stress–energy–momentum pseudotensor for gravity, when combined with terms for matter (including photons and neutrinos), allows the energy–momentum conservation laws to be extended into general relativity. === Requirements === Landau and Lifshitz were led by four requirements in their search for a gravitational energy momentum pseudotensor, t LL μ ν {\displaystyle t_{\text{LL}}^{\mu \nu }} : that it be constructed entirely from the metric tensor, so as to be purely geometrical or gravitational in origin. that it be index symmetric, i.e. t LL μ ν = t LL ν μ {\displaystyle t_{\text{LL}}^{\mu \nu }=t_{\text{LL}}^{\nu \mu }} , (to conserve angular momentum) that, when added to the stress–energy tensor of matter, T μ ν {\displaystyle T^{\mu \nu }} , its total ordinary 4-divergence (∂μ, not ∇μ) vanishes so that we have a conserved expression for the total stress–energy–momentum. (This is required of any conserved current.) that it vanish locally in an inertial frame of reference (which requires that it only contains first order and not second or higher order derivatives of the metric). This is because the equivalence principle requires that the gravitational force field, the Christoffel symbols, vanish locally in some frames. If gravitational energy is a function of its force field, as is usual for other forces, then the associated gravitational pseudotensor should also vanish locally. === Definition === Landau and Lifshitz showed that there is a unique construction that satisfies these requirements, namely t LL μ ν = − 1 κ G μ ν + 1 2 κ ( − g ) ( ( − g ) ( g μ ν g α β − g μ α g ν β ) ) , α β {\displaystyle t_{\text{LL}}^{\mu \nu }=-{\frac {1}{\kappa }}G^{\mu \nu }+{\frac {1}{2\kappa (-g)}}\left((-g)\left(g^{\mu \nu }g^{\alpha \beta }-g^{\mu \alpha }g^{\nu \beta }\right)\right)_{,\alpha \beta }} where: Gμν is the Einstein tensor (which is constructed from the metric) gμν is the inverse of the metric tensor, gμν g = det(gμν) is the determinant of the metric tensor. g < 0, hence its appearance as − g {\displaystyle -g} . 
, α β = ∂ 2 ∂ x α ∂ x β {\textstyle {}_{,\alpha \beta }={\frac {\partial ^{2}}{\partial x^{\alpha }\partial x^{\beta }}}} are partial derivatives, not covariant derivatives κ = 8πG/c⁴ is the Einstein gravitational constant G is the Newtonian constant of gravitation === Verification === Examining the four requirements, we can see that the first three are relatively easy to demonstrate: Since the Einstein tensor, G μ ν {\displaystyle G^{\mu \nu }} , is itself constructed from the metric, so too is t LL μ ν {\displaystyle t_{\text{LL}}^{\mu \nu }} Since the Einstein tensor, G μ ν {\displaystyle G^{\mu \nu }} , is symmetric, so is t LL μ ν {\displaystyle t_{\text{LL}}^{\mu \nu }} , since the additional terms are symmetric by inspection. The Landau–Lifshitz pseudotensor is constructed so that when added to the stress–energy tensor of matter, T μ ν {\displaystyle T^{\mu \nu }} , its total 4-divergence vanishes: ( ( − g ) ( T μ ν + t LL μ ν ) ) , μ = 0 {\displaystyle \left(\left(-g\right)\left(T^{\mu \nu }+t_{\text{LL}}^{\mu \nu }\right)\right)_{,\mu }=0} . This follows from the cancellation of the Einstein tensor, G μ ν {\displaystyle G^{\mu \nu }} , with the stress–energy tensor, T μ ν {\displaystyle T^{\mu \nu }} by the Einstein field equations; the remaining term vanishes algebraically due to the commutativity of partial derivatives applied across antisymmetric indices. The Landau–Lifshitz pseudotensor appears to include second derivative terms in the metric, but in fact the explicit second derivative terms in the pseudotensor cancel with the implicit second derivative terms contained within the Einstein tensor, G μ ν {\displaystyle G^{\mu \nu }} . This is more evident when the pseudotensor is directly expressed in terms of the metric tensor or the Levi-Civita connection; only the first derivative terms in the metric survive and these vanish where the frame is locally inertial at any chosen point. As a result, the entire pseudotensor vanishes locally (again, at any chosen point) t LL μ ν = 0 {\displaystyle t_{\text{LL}}^{\mu \nu }=0} , which demonstrates the delocalisation of gravitational energy–momentum. === Cosmological constant === When the Landau–Lifshitz pseudotensor was formulated it was commonly assumed that the cosmological constant, Λ {\displaystyle \Lambda } , was zero. Nowadays, that assumption is suspect, and the expression frequently gains a Λ {\displaystyle \Lambda } term, giving: t LL μ ν = − 1 κ ( G μ ν + Λ g μ ν ) + 1 2 κ ( − g ) ( ( − g ) ( g μ ν g α β − g μ α g ν β ) ) , α β {\displaystyle t_{\text{LL}}^{\mu \nu }=-{\frac {1}{\kappa }}\left(G^{\mu \nu }+\Lambda g^{\mu \nu }\right)+{\frac {1}{2\kappa (-g)}}\left(\left(-g\right)\left(g^{\mu \nu }g^{\alpha \beta }-g^{\mu \alpha }g^{\nu \beta }\right)\right)_{,\alpha \beta }} This is necessary for consistency with the Einstein field equations. 
=== Metric and affine connection versions === Landau and Lifshitz also provide two equivalent but longer expressions for the Landau–Lifshitz pseudotensor: Metric tensor version: ( − g ) ( t LL μ ν + Λ g μ ν κ ) = 1 2 κ [ ( − g g μ ν ) , α ( − g g α β ) , β − ( − g g μ α ) , α ( − g g ν β ) , β + 1 8 ( 2 g μ α g ν β − g μ ν g α β ) ( 2 g σ ρ g λ ω − g ρ λ g σ ω ) ( − g g σ ω ) , α ( − g g ρ λ ) , β − ( g μ α g β σ ( − g g ν σ ) , ρ ( − g g β ρ ) , α + g ν α g β σ ( − g g μ σ ) , ρ ( − g g β ρ ) , α ) + 1 2 g μ ν g α β ( − g g α σ ) , ρ ( − g g ρ β ) , σ + g α β g σ ρ ( − g g μ α ) , σ ( − g g ν β ) , ρ ] {\displaystyle {\begin{aligned}(-g)\left(t_{\text{LL}}^{\mu \nu }+{\frac {\Lambda g^{\mu \nu }}{\kappa }}\right)={\frac {1}{2\kappa }}{\bigg [}&\left({\sqrt {-g}}g^{\mu \nu }\right)_{,\alpha }\left({\sqrt {-g}}g^{\alpha \beta }\right)_{,\beta }-\left({\sqrt {-g}}g^{\mu \alpha }\right)_{,\alpha }\left({\sqrt {-g}}g^{\nu \beta }\right)_{,\beta }+{}\\&{\frac {1}{8}}\left(2g^{\mu \alpha }g^{\nu \beta }-g^{\mu \nu }g^{\alpha \beta }\right)\left(2g_{\sigma \rho }g_{\lambda \omega }-g_{\rho \lambda }g_{\sigma \omega }\right)\left({\sqrt {-g}}g^{\sigma \omega }\right)_{,\alpha }\left({\sqrt {-g}}g^{\rho \lambda }\right)_{,\beta }-{}\\&\left(g^{\mu \alpha }g_{\beta \sigma }\left({\sqrt {-g}}g^{\nu \sigma }\right)_{,\rho }\left({\sqrt {-g}}g^{\beta \rho }\right)_{,\alpha }+g^{\nu \alpha }g_{\beta \sigma }\left({\sqrt {-g}}g^{\mu \sigma }\right)_{,\rho }\left({\sqrt {-g}}g^{\beta \rho }\right)_{,\alpha }\right)+{}\\&\left.{\frac {1}{2}}g^{\mu \nu }g_{\alpha \beta }\left({\sqrt {-g}}g^{\alpha \sigma }\right)_{,\rho }\left({\sqrt {-g}}g^{\rho \beta }\right)_{,\sigma }+g_{\alpha \beta }g^{\sigma \rho }\left({\sqrt {-g}}g^{\mu \alpha }\right)_{,\sigma }\left({\sqrt {-g}}g^{\nu \beta }\right)_{,\rho }\right]\end{aligned}}} Affine connection version: t LL μ ν + Λ g μ ν κ = 1 2 κ [ ( 2 Γ α β σ Γ σ ρ ρ − Γ α ρ σ Γ β σ ρ − Γ α σ σ Γ β ρ ρ ) ( g μ α g ν β − g μ ν g α β ) + ( Γ α ρ ν Γ β σ ρ + Γ β σ ν Γ α ρ ρ − Γ σ ρ ν Γ α β ρ − Γ α β ν Γ σ ρ ρ ) g μ α g β σ + ( Γ α ρ μ Γ β σ ρ + Γ β σ μ Γ α ρ ρ − Γ σ ρ μ Γ α β ρ − Γ α β μ Γ σ ρ ρ ) g ν α g β σ + ( Γ α σ μ Γ β ρ ν − Γ α β μ Γ σ ρ ν ) g α β g σ ρ ] {\displaystyle {\begin{aligned}t_{\text{LL}}^{\mu \nu }+{\frac {\Lambda g^{\mu \nu }}{\kappa }}={\frac {1}{2\kappa }}{\Big [}&\left(2\Gamma _{\alpha \beta }^{\sigma }\Gamma _{\sigma \rho }^{\rho }-\Gamma _{\alpha \rho }^{\sigma }\Gamma _{\beta \sigma }^{\rho }-\Gamma _{\alpha \sigma }^{\sigma }\Gamma _{\beta \rho }^{\rho }\right)\left(g^{\mu \alpha }g^{\nu \beta }-g^{\mu \nu }g^{\alpha \beta }\right)+{}\\&\left(\Gamma _{\alpha \rho }^{\nu }\Gamma _{\beta \sigma }^{\rho }+\Gamma _{\beta \sigma }^{\nu }\Gamma _{\alpha \rho }^{\rho }-\Gamma _{\sigma \rho }^{\nu }\Gamma _{\alpha \beta }^{\rho }-\Gamma _{\alpha \beta }^{\nu }\Gamma _{\sigma \rho }^{\rho }\right)g^{\mu \alpha }g^{\beta \sigma }+\\&\left(\Gamma _{\alpha \rho }^{\mu }\Gamma _{\beta \sigma }^{\rho }+\Gamma _{\beta \sigma }^{\mu }\Gamma _{\alpha \rho }^{\rho }-\Gamma _{\sigma \rho }^{\mu }\Gamma _{\alpha \beta }^{\rho }-\Gamma _{\alpha \beta }^{\mu }\Gamma _{\sigma \rho }^{\rho }\right)g^{\nu \alpha }g^{\beta \sigma }+\\&\left.\left(\Gamma _{\alpha \sigma }^{\mu }\Gamma _{\beta \rho }^{\nu }-\Gamma _{\alpha \beta }^{\mu }\Gamma _{\sigma \rho }^{\nu }\right)g^{\alpha \beta }g^{\sigma \rho }\right]\end{aligned}}} This definition of energy–momentum is covariantly applicable not just under Lorentz transformations, but also under general coordinate transformations. 
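Since both long forms above are built from the metric and its Christoffel symbols, a computer algebra system can generate their ingredients directly from a chosen metric. The sketch below is a standard textbook computation, not something specific to the pseudotensor literature: it derives Christoffel symbols of the Schwarzschild metric with sympy, in geometric units G = c = 1.

```python
import sympy as sp

# Christoffel symbols Gamma^mu_{ab} = (1/2) g^{mu s} (d_a g_{sb} + d_b g_{sa}
# - d_s g_{ab}), computed for the Schwarzschild metric as an illustration.
t, r, th, ph, M = sp.symbols('t r theta phi M', positive=True)
coords = [t, r, th, ph]
f = 1 - 2 * M / r                          # geometric units G = c = 1
g = sp.diag(-f, 1 / f, r**2, r**2 * sp.sin(th)**2)
g_inv = g.inv()

def christoffel(mu, a, b):
    return sp.simplify(sum(
        sp.Rational(1, 2) * g_inv[mu, s]
        * (sp.diff(g[s, a], coords[b]) + sp.diff(g[s, b], coords[a])
           - sp.diff(g[a, b], coords[s]))
        for s in range(4)))

# Expected: M*(r - 2*M)/r**3 (up to equivalent rearrangement).
print("Gamma^r_tt =", christoffel(1, 0, 0))
```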
== Einstein pseudotensor ==
This pseudotensor was originally developed by Albert Einstein. Paul Dirac showed that the mixed Einstein pseudotensor
{\displaystyle {t_{\mu }}^{\nu }={\frac {1}{2\kappa {\sqrt {-g}}}}\left(\left(g^{\alpha \beta }{\sqrt {-g}}\right)_{,\mu }\left(\Gamma _{\alpha \beta }^{\nu }-\delta _{\beta }^{\nu }\Gamma _{\alpha \sigma }^{\sigma }\right)-\delta _{\mu }^{\nu }g^{\alpha \beta }\left(\Gamma _{\alpha \beta }^{\sigma }\Gamma _{\sigma \rho }^{\rho }-\Gamma _{\alpha \sigma }^{\rho }\Gamma _{\beta \rho }^{\sigma }\right){\sqrt {-g}}\right)}
satisfies a conservation law
{\displaystyle \left(\left({T_{\mu }}^{\nu }+{t_{\mu }}^{\nu }\right){\sqrt {-g}}\right)_{,\nu }=0.}
Clearly this pseudotensor for gravitational stress–energy is constructed exclusively from the metric tensor and its first derivatives. Consequently, it vanishes at any event when the coordinate system is chosen to make the first derivatives of the metric vanish, because each term in the pseudotensor is quadratic in the first derivatives of the metric. However, it is not symmetric, and is therefore not suitable as a basis for defining angular momentum.
== See also ==
Bel–Robinson tensor
Gravitational wave
Wikipedia/Stress–energy–momentum_pseudotensor
Energy transformation, also known as energy conversion, is the process of changing energy from one form to another. In physics, energy is a quantity that provides the capacity to perform work (e.g. lifting an object) or provides heat. In addition to being converted, according to the law of conservation of energy, energy is transferable to a different location, object, or living being, but it cannot be created or destroyed.

== Limitations in the conversion of thermal energy ==
Conversions to thermal energy from other forms of energy may occur with 100% efficiency. Conversion among non-thermal forms of energy may occur with fairly high efficiency, though there is always some energy dissipated thermally due to friction and similar processes. Sometimes the efficiency is close to 100%, such as when potential energy is converted to kinetic energy as an object falls in a vacuum. This also applies to the opposite case; for example, an object in an elliptical orbit around another body converts its kinetic energy (speed) into gravitational potential energy (distance from the other object) as it moves away from its parent body. When it reaches the furthest point, it reverses the process, accelerating and converting potential energy back into kinetic energy. Since space is a near-vacuum, this process has close to 100% efficiency.
Because transformations between non-thermal forms of energy are constrained only by the conservation of energy, they have a theoretical maximum efficiency of 100%. By contrast, transformations from thermal energy to other forms of energy are additionally constrained by the second law of thermodynamics and have a theoretical maximum efficiency strictly less than 100% (see Carnot cycle), and typically much lower. In addition, only a difference in the density of thermal energy (a temperature difference) can be used to perform work. This is because thermal energy is a particularly disordered form of energy; it is spread out randomly among many available states of the collection of microscopic particles constituting the system (these combinations of position and momentum for each of the particles are said to form a phase space). The measure of this disorder or randomness is entropy, and its defining feature is that the entropy of an isolated system never decreases. One cannot take a high-entropy system (like a hot substance, with a certain amount of thermal energy) and convert it into a low-entropy state (like a low-temperature substance, with correspondingly lower energy) without that entropy going somewhere else (like the surrounding air). In other words, there is no way to concentrate energy without spreading out energy somewhere else. Thermal energy in equilibrium at a given temperature already represents the maximal evening-out of energy among all possible states, and so is not entirely convertible to a "useful" form, i.e. one that can do more than just affect temperature. The second law of thermodynamics states that the entropy of a closed system can never decrease. For this reason, thermal energy in a system may be converted to other kinds of energy with efficiencies approaching 100% only if the entropy of the universe is increased by other means, to compensate for the decrease in entropy associated with the disappearance of the thermal energy and its entropy content. Otherwise, only a part of that thermal energy may be converted to other kinds of energy (and thus useful work).
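As a numeric illustration of that second-law constraint, here is a minimal sketch of the Carnot bound; the reservoir temperatures are illustrative assumptions, not values from the text:

```python
def carnot_efficiency(t_hot_k: float, t_cold_k: float) -> float:
    """Theoretical maximum efficiency of a heat engine operating
    between a hot and a cold reservoir (temperatures in kelvin)."""
    return 1.0 - t_cold_k / t_hot_k

# A steam cycle running at ~570 K and rejecting heat at ~300 K ambient:
eta = carnot_efficiency(570.0, 300.0)
print(f"Carnot limit: {eta:.1%}")  # ~47% -- real plants achieve less
```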
The remainder of the heat must be rejected to a thermal reservoir at a lower temperature; only the rest can be transformed into other kinds of energy. The increase in entropy from this rejected heat is greater than the decrease in entropy associated with the transformation of the rest of the heat into other types of energy. To make energy transformation more efficient, it is desirable to avoid thermal conversion. For example, the efficiency of nuclear reactors, where the kinetic energy of the fission fragments is first converted to thermal energy and then to electrical energy, is around 35%. By converting kinetic energy directly to electric energy, eliminating the intermediate thermal transformation, the efficiency of the energy transformation process can be dramatically improved.

== History of energy transformation ==
Energy transformations in the universe over time are usually characterized by various kinds of energy, available since the Big Bang, later being "released" (that is, transformed to more active types of energy such as kinetic or radiant energy) by a triggering mechanism.

=== Release of energy from gravitational potential ===
A direct transformation of energy occurs when hydrogen produced in the Big Bang collects into structures such as planets, in a process during which part of the gravitational potential energy is converted directly into heat. On Jupiter, Saturn, and Neptune, for example, such heat from the continued collapse of the planets' large gas atmospheres continues to drive most of the planets' weather systems. These systems, consisting of atmospheric bands, winds, and powerful storms, are only partly powered by sunlight. On Uranus, however, little of this process occurs. On Earth, a significant portion of the heat output from the interior of the planet, estimated at a third to half of the total, is caused by the slow collapse of planetary materials to a smaller size, generating heat.

=== Release of energy from radioactive potential ===
Familiar examples of other such processes transforming energy from the Big Bang include nuclear decay, which releases energy that was originally "stored" in heavy isotopes such as uranium and thorium. This energy was stored at the time of the nucleosynthesis of these elements, a process that ultimately uses the gravitational potential energy released in the collapse of Type II supernovae to create these heavy elements before they are incorporated into star systems such as the Solar System and the Earth. The energy locked into uranium is released spontaneously during most types of radioactive decay, and can be suddenly released in nuclear fission bombs. In both cases, a portion of the energy binding the atomic nuclei together is released as heat.

=== Release of energy from hydrogen fusion potential ===
In a similar chain of transformations beginning at the dawn of the universe, nuclear fusion of hydrogen in the Sun releases another store of potential energy which was created at the time of the Big Bang. At that time, according to one theory, space expanded and the universe cooled too rapidly for hydrogen to completely fuse into heavier elements. This left hydrogen as a store of potential energy that can be released by nuclear fusion. Such a fusion process is triggered by the heat and pressure generated by the gravitational collapse of hydrogen clouds as they produce stars, and some of the fusion energy is then transformed into starlight.
Considering the solar system, starlight, overwhelmingly from the Sun, may again be stored as gravitational potential energy after it strikes the Earth. This occurs in the case of avalanches, or when water evaporates from the oceans and is deposited as precipitation high above sea level (where, after being released at a hydroelectric dam, it can be used to drive turbines and generators to produce electricity). Sunlight also drives many weather phenomena on Earth. One example is a hurricane, which occurs when large unstable areas of warm ocean, heated over months, give up some of their thermal energy suddenly to power a few days of violent air movement. Sunlight is also captured by plants as chemical potential energy via photosynthesis, when carbon dioxide and water are converted into a combustible combination of carbohydrates, lipids, and oxygen. The release of this energy as heat and light may be triggered suddenly by a spark, in a forest fire; or it may be made available more slowly for animal or human metabolism when these molecules are ingested and catabolism is triggered by enzyme action.
Through all of these transformation chains, the potential energy stored at the time of the Big Bang is later released by intermediate events, sometimes being stored in several different ways for long periods between releases, as more active energy. All of these events involve the conversion of one kind of energy into others, including heat.

== Examples ==
=== Examples of sets of energy conversions in machines ===
A coal-fired power plant involves these energy transformations:
Chemical energy in the coal is converted into thermal energy in the exhaust gases of combustion
Thermal energy of the exhaust gases is converted into thermal energy of steam through heat exchange
Kinetic energy of steam is converted to mechanical energy in the turbine
Mechanical energy of the turbine is converted to electrical energy by the generator, which is the ultimate output
In such a system, the first and fourth steps are highly efficient, but the second and third steps are less efficient. The most efficient gas-fired electrical power stations can achieve 50% conversion efficiency. Oil- and coal-fired stations are less efficient.
In a conventional automobile, the following energy transformations occur:
Chemical energy in the fuel is converted into the kinetic energy of expanding gas via combustion
Kinetic energy of expanding gas is converted to linear piston movement
Linear piston movement is converted to rotary crankshaft movement
Rotary crankshaft movement is passed into the transmission assembly
Rotary movement is passed out of the transmission assembly
Rotary movement is passed through a differential
Rotary movement is passed out of the differential to the drive wheels
Rotary movement of the drive wheels is converted to linear motion of the vehicle

=== Other energy conversions ===
There are many different machines and transducers that convert one energy form into another.
A short list of examples follows (see the worked example after this list):
ATP hydrolysis (chemical energy in adenosine triphosphate → mechanical energy)
Battery (chemical energy → electrical energy)
Electric generator (kinetic energy or mechanical work → electrical energy)
Electric heater (electrical energy → heat)
Fire (chemical energy → heat and light)
Friction (kinetic energy → heat)
Fuel cell (chemical energy → electrical energy)
Geothermal power (heat → electrical energy)
Heat engines, such as the internal combustion engine used in cars, or the steam engine (heat → mechanical energy)
Hydroelectric dam (gravitational potential energy → electrical energy)
Electric lamp (electrical energy → heat and light)
Microphone (sound → electrical energy)
Ocean thermal power (heat → electrical energy)
Photosynthesis (electromagnetic radiation → chemical energy)
Piezoelectrics (strain → electrical energy)
Thermoelectric (heat → electrical energy)
Wave power (mechanical energy → electrical energy)
Windmill (wind energy → electrical energy or mechanical energy)
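As a worked example for one of the entries above, a hydroelectric dam converts gravitational potential energy to electrical energy at a rate set by head, flow, and efficiency. All numbers in this sketch are illustrative assumptions:

```python
RHO_WATER = 1000.0  # density of water, kg/m^3
G = 9.81            # gravitational acceleration, m/s^2

def hydro_power_watts(head_m: float, flow_m3_s: float, efficiency: float) -> float:
    """Electrical power from water falling through head_m at flow_m3_s."""
    return efficiency * RHO_WATER * G * flow_m3_s * head_m

# 100 m head, 500 m^3/s flow, 90% combined turbine/generator efficiency:
print(f"{hydro_power_watts(100.0, 500.0, 0.9) / 1e6:.0f} MW")  # ~441 MW
```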
Wikipedia/Energy_conversion
A computed tomography scan (CT scan), formerly called computed axial tomography scan (CAT scan), is a medical imaging technique used to obtain detailed internal images of the body. The personnel that perform CT scans are called radiographers or radiology technologists. CT scanners use a rotating X-ray tube and a row of detectors placed in a gantry to measure X-ray attenuation by different tissues inside the body. The multiple X-ray measurements taken from different angles are then processed on a computer using tomographic reconstruction algorithms to produce tomographic (cross-sectional) images (virtual "slices") of a body. CT scans can be used in patients with metallic implants or pacemakers, for whom magnetic resonance imaging (MRI) is contraindicated.
Since its development in the 1970s, CT scanning has proven to be a versatile imaging technique. While CT is most prominently used in medical diagnosis, it can also be used to form images of non-living objects. The 1979 Nobel Prize in Physiology or Medicine was awarded jointly to South African-American physicist Allan MacLeod Cormack and British electrical engineer Godfrey Hounsfield "for the development of computer-assisted tomography".

== Types ==
On the basis of image acquisition and procedure, various types of scanners are available on the market.

=== Sequential CT ===
Sequential CT, also known as step-and-shoot CT, is a scanning method in which the CT table moves stepwise. The table increments to a particular location and then stops, which is followed by the X-ray tube rotation and acquisition of a slice. The table then increments again, and another slice is taken. Because the table movement stops while each slice is taken, total scanning time is increased.

=== Spiral CT ===
Spinning tube, commonly called spiral CT or helical CT, is an imaging technique in which an entire X-ray tube is spun around the central axis of the area being scanned. These are the dominant type of scanner on the market because they have been manufactured longer and offer a lower cost of production and purchase. The main limitation of this type of CT is the bulk and inertia of the equipment (X-ray tube assembly and detector array on the opposite side of the circle), which limits the speed at which the equipment can spin. Some designs use two X-ray sources and detector arrays offset by an angle as a technique to improve temporal resolution.

=== Electron beam tomography ===
Electron beam tomography (EBT) is a specific form of CT in which the X-ray tube is constructed large enough that only the path of the electrons, travelling between the cathode and anode of the tube, is swept using deflection coils. This type has a major advantage in that sweep speeds can be much faster, allowing for less blurry imaging of moving structures such as the heart and arteries. Fewer scanners of this design have been produced compared with spinning tube types, mainly due to the higher cost associated with building a much larger X-ray tube and detector array, and to limited anatomical coverage.

=== Dual energy CT ===
Dual energy CT, also known as spectral CT, is an advancement of computed tomography in which two energies are used to create two sets of data. A dual energy CT may employ dual sources, a single source with a dual detector layer, or a single source with energy switching to get two different sets of data. Dual source CT is an advanced scanner with two X-ray tube–detector systems, unlike conventional single tube systems.
These two detector systems are mounted on a single gantry at 90° in the same plane. Dual source CT scanners allow fast scanning with higher temporal resolution by acquiring a full CT slice in only half a rotation. Fast imaging reduces motion blurring at high heart rates and potentially allows for a shorter breath-hold time. This is particularly useful for ill patients who have difficulty holding their breath or who are unable to take heart-rate-lowering medication. Single source with energy switching is another mode of dual energy CT, in which a single tube is operated at two different energies by switching between them rapidly.

=== CT perfusion imaging ===
CT perfusion imaging is a specific form of CT to assess flow through blood vessels while injecting a contrast agent. Blood flow, blood transit time, and organ blood volume can all be calculated with reasonable sensitivity and specificity. This type of CT may be used on the heart, although sensitivity and specificity for detecting abnormalities are still lower than for other forms of CT. It may also be used on the brain, where CT perfusion imaging can often detect poor brain perfusion well before it is detected using a conventional spiral CT scan, making it better suited to stroke diagnosis than other CT types.

=== PET CT ===
Positron emission tomography–computed tomography is a hybrid CT modality which combines, in a single gantry, a positron emission tomography (PET) scanner and an X-ray computed tomography (CT) scanner, to acquire sequential images from both devices in the same session, which are combined into a single superposed (co-registered) image. Thus, functional imaging obtained by PET, which depicts the spatial distribution of metabolic or biochemical activity in the body, can be more precisely aligned or correlated with anatomic imaging obtained by CT scanning. PET-CT gives both anatomical and functional details of an organ under examination and is helpful in detecting different types of cancer.

== Medical use ==
Since its introduction in the 1970s, CT has become an important tool in medical imaging to supplement conventional X-ray imaging and medical ultrasonography. It has more recently been used for preventive medicine or screening for disease, for example, CT colonography for people with a high risk of colon cancer, or full-motion heart scans for people with a high risk of heart disease. Several institutions offer full-body scans for the general population, although this practice goes against the advice and official position of many professional organizations in the field, primarily due to the radiation dose applied.
The use of CT scans has increased dramatically over the last two decades in many countries. An estimated 72 million scans were performed in the United States in 2007 and more than 80 million in 2015.

=== Head ===
CT scanning of the head is typically used to detect infarction (stroke), tumors, calcifications, haemorrhage, and bone trauma. Of the above, hypodense (dark) structures can indicate edema and infarction, hyperdense (bright) structures indicate calcifications and haemorrhage, and bone trauma can be seen as disjunction in bone windows. Tumors can be detected by the swelling and anatomical distortion they cause, or by surrounding edema. CT scanning of the head is also used in CT-guided stereotactic surgery and radiosurgery for treatment of intracranial tumors, arteriovenous malformations, and other surgically treatable conditions, using a device known as the N-localizer.
=== Neck ===
Contrast CT is generally the initial study of choice for neck masses in adults. CT of the thyroid plays an important role in the evaluation of thyroid cancer. CT often detects thyroid abnormalities incidentally, and so in practice it is frequently the first investigation of such abnormalities.

=== Lungs ===
A CT scan can be used for detecting both acute and chronic changes in the lung parenchyma, the tissue of the lungs. It is particularly relevant here because normal two-dimensional X-rays do not show such defects. A variety of techniques are used, depending on the suspected abnormality. For evaluation of chronic interstitial processes such as emphysema and fibrosis, thin sections with high-spatial-frequency reconstructions are used; often scans are performed both on inspiration and expiration. This special technique, called high-resolution CT, produces a sampling of the lung rather than continuous images.
Bronchial wall thickening can be seen on lung CTs and generally (but not always) implies inflammation of the bronchi. An incidentally found nodule in the absence of symptoms (sometimes referred to as an incidentaloma) may raise concerns that it might represent a tumor, either benign or malignant. Perhaps persuaded by fear, patients and doctors sometimes agree to an intensive schedule of CT scans, sometimes up to every three months, beyond the recommended guidelines, in an attempt to do surveillance on the nodules. However, established guidelines advise that patients without a prior history of cancer and whose solid nodules have not grown over a two-year period are unlikely to have any malignant cancer. For this reason, because no research provides supporting evidence that intensive surveillance gives better outcomes, and because of the risks associated with having CT scans, patients should not receive CT screening in excess of that recommended by established guidelines.

=== Angiography ===
Computed tomography angiography (CTA) is a type of contrast CT used to visualize the arteries and veins throughout the body. This ranges from arteries serving the brain to those bringing blood to the lungs, kidneys, arms, and legs. An example of this type of exam is CT pulmonary angiogram (CTPA), used to diagnose pulmonary embolism (PE). It employs computed tomography and an iodine-based contrast agent to obtain an image of the pulmonary arteries. CT scans can reduce the risks of catheter angiography by providing clinicians with more information about the positioning and number of clots prior to the procedure.

=== Cardiac ===
A CT scan of the heart is performed to gain knowledge about cardiac or coronary anatomy. Traditionally, cardiac CT scans are used to detect, diagnose, or follow up coronary artery disease. More recently CT has played a key role in the fast-evolving field of transcatheter structural heart interventions, more specifically in the transcatheter repair and replacement of heart valves.
The main forms of cardiac CT scanning are:
Coronary CT angiography (CCTA): the use of CT to assess the coronary arteries of the heart. The subject receives an intravenous injection of radiocontrast, and then the heart is scanned using a high-speed CT scanner, allowing radiologists to assess the extent of occlusion in the coronary arteries, usually to diagnose coronary artery disease.
Coronary CT calcium scan: also used for the assessment of severity of coronary artery disease.
Specifically, it looks for calcium deposits in the coronary arteries that can narrow arteries and increase the risk of a heart attack. A typical coronary CT calcium scan is done without the use of radiocontrast, but it can also be derived from contrast-enhanced images.
To better visualize the anatomy, post-processing of the images is common. Most common are multiplanar reconstructions (MPR) and volume rendering. For more complex anatomies and procedures, such as heart valve interventions, a true 3D reconstruction or a 3D print is created from these CT images to gain a deeper understanding.

=== Abdomen and pelvis ===
CT is an accurate technique for diagnosis of abdominal diseases such as Crohn's disease and gastrointestinal bleeding, and for the diagnosis and staging of cancer, as well as follow-up after cancer treatment to assess response. It is commonly used to investigate acute abdominal pain. Non-contrast-enhanced CT scans are the gold standard for diagnosing kidney stone disease. They allow clinicians to estimate the size, volume, and density of stones, helping to guide further treatment, with size being especially important in predicting the time to spontaneous passage of a stone.

=== Axial skeleton and extremities ===
For the axial skeleton and extremities, CT is often used to image complex fractures, especially ones around joints, because of its ability to reconstruct the area of interest in multiple planes. Fractures, ligamentous injuries, and dislocations can easily be recognized with a 0.2 mm resolution. With modern dual-energy CT scanners, new areas of use have been established, such as aiding in the diagnosis of gout.

=== Biomechanical use ===
CT is used in biomechanics to quickly reveal the geometry, anatomy, density, and elastic moduli of biological tissues.

== Other uses ==
=== Industrial use ===
Industrial CT scanning (industrial computed tomography) is a process which uses X-ray equipment to produce 3D representations of components both externally and internally. Industrial CT scanning has been used in many areas of industry for internal inspection of components. Some of the key uses for CT scanning have been flaw detection, failure analysis, metrology, assembly analysis, image-based finite element methods, and reverse engineering applications. CT scanning is also employed in the imaging and conservation of museum artifacts.

=== Aviation security ===
CT scanning has also found an application in transport security (predominantly airport security), where it is currently used in a materials analysis context for explosives detection (CTX explosive-detection devices) and is also under consideration for automated baggage/parcel security scanning using computer-vision-based object recognition algorithms that target the detection of specific threat items based on 3D appearance (e.g. guns, knives, liquid containers). Its use in airport security, pioneered at Shannon Airport in March 2022, ended the ban there on liquids over 100 ml; Heathrow Airport planned a full roll-out on 1 December 2022, and the TSA spent $781.2 million on an order for over 1,000 scanners intended to go live in the summer.

=== Geological use ===
X-ray CT is used in geological studies to quickly reveal materials inside a drill core. Dense minerals such as pyrite and barite appear brighter, and less dense components such as clay appear dull, in CT images.

=== Paleontological use ===
Traditional methods of studying fossils are often destructive, such as the use of thin sections and physical preparation.
X-ray CT is used in paleontology to non-destructively visualize fossils in 3D. This has many advantages: for example, fragile structures that might never otherwise be accessible can be examined, and models of fossils can be moved freely in virtual 3D space and inspected without damaging the fossil.

=== Cultural heritage use ===
X-ray CT and micro-CT can also be used for the conservation and preservation of objects of cultural heritage. For many fragile objects, direct research and observation can be damaging and can degrade the object over time. Using CT scans, conservators and researchers are able to determine the material composition of the objects they are exploring, such as the position of ink along the layers of a scroll, without any additional harm. These scans have been optimal for research focused on the workings of the Antikythera mechanism and the text hidden inside the charred outer layers of the En-Gedi Scroll. However, they are not optimal for every object subject to these kinds of research questions, as there are certain artifacts, like the Herculaneum papyri, in which the material composition has very little variation along the inside of the object. After scanning these objects, computational methods can be employed to examine their insides, as was the case with the virtual unwrapping of the En-Gedi scroll and the Herculaneum papyri. Micro-CT has also proved useful for analyzing more recent artifacts such as still-sealed historic correspondence that employed the technique of letterlocking (complex folding and cuts), which provided a "tamper-evident locking mechanism". Further archaeological use cases include imaging the contents of sarcophagi and ceramics. Recently, CWI in Amsterdam has collaborated with the Rijksmuseum to investigate the interior details of art objects in a framework called IntACT.

=== Microorganism research ===
Different types of fungus can degrade wood to different degrees. Using 3D X-ray CT with sub-micron resolution, one Belgian research group showed that fungi can penetrate micropores of 0.6 μm under certain conditions.

=== Timber sawmill ===
Sawmills use industrial CT scanners to detect internal defects, for instance knots, and thereby improve the total value of timber production. Many sawmills plan to incorporate this detection tool to improve productivity in the long run, although the initial investment cost is high.

== Interpretation of results ==
=== Presentation ===
The result of a CT scan is a volume of voxels, which may be presented to a human observer by various methods, broadly fitting into the following categories:
Slices (of varying thickness). Thin slice is generally regarded as planes representing a thickness of less than 3 mm; thick slice as planes representing a thickness between 3 mm and 5 mm.
Projection, including maximum intensity projection and average intensity projection
Volume rendering (VR)
Technically, all volume renderings become projections when viewed on a 2-dimensional display, making the distinction between projections and volume renderings somewhat vague. Typical volume rendering models combine, for example, coloring and shading to create realistic and readable representations.
Two-dimensional CT images are conventionally rendered so that the view is as though looking up at it from the patient's feet.
Hence, the left side of the image is to the patient's right and vice versa, while anterior in the image also is the patient's anterior and vice versa. This left-right interchange corresponds to the view that physicians generally have in reality when positioned in front of patients.

==== Grayscale ====
Pixels in an image obtained by CT scanning are displayed in terms of relative radiodensity. The pixel itself is displayed according to the mean attenuation of the tissue(s) that it corresponds to, on a scale from +3,071 (most attenuating) to −1,024 (least attenuating) on the Hounsfield scale. A pixel is a two-dimensional unit based on the matrix size and the field of view. When the CT slice thickness is also factored in, the unit is known as a voxel, which is a three-dimensional unit. Water has an attenuation of 0 Hounsfield units (HU), while air is −1,000 HU, cancellous bone is typically +400 HU, and cranial bone can reach 2,000 HU. The attenuation of metallic implants depends on the atomic number of the element used: titanium usually has a value of about +1,000 HU, while iron and steel can completely block the X-ray beam and are therefore responsible for well-known line artifacts in computed tomograms. Artifacts are caused by abrupt transitions between low- and high-density materials, which result in data values that exceed the dynamic range of the processing electronics.

==== Windowing ====
CT data sets have a very high dynamic range, which must be reduced for display or printing. This is typically done via a process of "windowing", which maps a range (the "window") of pixel values to a grayscale ramp. For example, CT images of the brain are commonly viewed with a window extending from 0 HU to 80 HU. Pixel values of 0 and lower are displayed as black; values of 80 and higher are displayed as white; values within the window are displayed as a gray intensity proportional to position within the window. The window used for display must be matched to the X-ray density of the object of interest in order to optimize the visible detail. Window width and window level parameters are used to control the windowing of a scan.

==== Multiplanar reconstruction and projections ====
Multiplanar reconstruction (MPR) is the process of converting data from one anatomical plane (usually transverse) to other planes. It can be used for thin slices as well as projections. Multiplanar reconstruction is possible because current CT scanners provide almost isotropic resolution. MPR is used in almost every scan. The spine is frequently examined with it: an image of the spine in the axial plane can only show one vertebral bone at a time and cannot show its relation to the other vertebral bones, but by reformatting the data in other planes, visualization of the relative positions can be achieved in the sagittal and coronal planes.
New software allows the reconstruction of data in non-orthogonal (oblique) planes, which helps in the visualization of organs that do not lie in orthogonal planes. It is better suited to visualization of the anatomical structure of the bronchi, as they do not lie orthogonal to the direction of the scan. Curved-plane reconstruction (also called curved planar reformation, CPR) is performed mainly for the evaluation of vessels. This type of reconstruction helps to straighten the bends in a vessel, thereby helping to visualize a whole vessel in a single image or in multiple images. After a vessel has been "straightened", measurements such as cross-sectional area and length can be made.
This is helpful in preoperative assessment of a surgical procedure. For 2D projections used in radiation therapy for quality assurance and planning of external beam radiotherapy, including digitally reconstructed radiographs, see Beam's eye view.

==== Volume rendering ====
A threshold value of radiodensity is set by the operator (e.g., a level that corresponds to bone). With the help of edge detection image processing algorithms, a 3D model can be constructed from the initial data and displayed on screen. Multiple models can be constructed from various thresholds, allowing anatomical components such as muscle, bone, and cartilage to be differentiated by assigning them different colours. However, this mode of operation cannot show interior structures. Surface rendering is a limited technique, as it displays only the surfaces that meet a particular threshold density and that face the viewer. In volume rendering, by contrast, transparency, colours, and shading are used, which makes it easy to present a volume in a single image. For example, pelvic bones could be displayed as semi-transparent, so that, even at an oblique viewing angle, one part of the image does not hide another.

=== Image quality ===
==== Dose versus image quality ====
An important issue within radiology today is how to reduce the radiation dose during CT examinations without compromising the image quality. In general, higher radiation doses result in higher-resolution images, while lower doses lead to increased image noise and unsharp images. However, increased dose raises the risk of adverse side effects, including radiation-induced cancer; a four-phase abdominal CT gives the same radiation dose as 300 chest X-rays. Several methods exist that can reduce the exposure to ionizing radiation during a CT scan:
New software technology can significantly reduce the required radiation dose. New iterative tomographic reconstruction algorithms (e.g., iterative Sparse Asymptotic Minimum Variance) could offer super-resolution without requiring a higher radiation dose.
Individualize the examination and adjust the radiation dose to the body type and body organ examined. Different body types and organs require different amounts of radiation.
Avoid higher resolution where it is not needed, such as in the detection of small pulmonary masses.

==== Artifacts ====
Although images produced by CT are generally faithful representations of the scanned volume, the technique is susceptible to a number of artifacts, such as the following:
Streak artifact
Streaks are often seen around materials that block most X-rays, such as metal or bone. Numerous factors contribute to these streaks: undersampling, photon starvation, motion, beam hardening, and Compton scatter. This type of artifact commonly occurs in the posterior fossa of the brain, or if there are metal implants. The streaks can be reduced using newer reconstruction techniques. Approaches such as metal artifact reduction (MAR) can also reduce this artifact. MAR techniques include spectral imaging, where CT images are taken with photons of different energy levels and then synthesized into monochromatic images with special software such as GSI (Gemstone Spectral Imaging).
Partial volume effect
This appears as "blurring" of edges. It is due to the scanner being unable to differentiate between a small amount of high-density material (e.g., bone) and a larger amount of lower density (e.g., cartilage).
The reconstruction assumes that the X-ray attenuation within each voxel is homogeneous; this may not be the case at sharp edges. This is most commonly seen in the z-direction (craniocaudal direction), due to the conventional use of highly anisotropic voxels, which have a much lower out-of-plane resolution than in-plane resolution. It can be partially overcome by scanning with thinner slices, or by an isotropic acquisition on a modern scanner.
Ring artifact
Probably the most common mechanical artifact, the image of one or many "rings" appears within an image. The rings are usually caused by variations in the response of individual elements in a two-dimensional X-ray detector due to defects or miscalibration. Ring artifacts can largely be reduced by intensity normalization, also referred to as flat-field correction. Remaining rings can be suppressed by a transformation to polar space, where they become linear stripes. A comparative evaluation of ring artefact reduction on X-ray tomography images showed that the method of Sijbers and Postnov can effectively suppress ring artefacts.
Noise
This appears as grain on the image and is caused by a low signal-to-noise ratio. It occurs more commonly when thin slices are used. It can also occur when the power supplied to the X-ray tube is insufficient to penetrate the anatomy.
Windmill
Streaking appearances can occur when the detectors intersect the reconstruction plane. This can be reduced with filters or a reduction in pitch.
Beam hardening
This can give a "cupped appearance" when grayscale is visualized as height. It occurs because conventional sources, like X-ray tubes, emit a polychromatic spectrum. Photons of higher energy are typically attenuated less, so the mean energy of the spectrum increases as the beam passes through the object, often described as the beam getting "harder". If not corrected, this effect leads to an increasing underestimation of material thickness. Many algorithms exist to correct for this artifact; they can be divided into mono- and multi-material methods.

== Advantages ==
CT scanning has several advantages over traditional two-dimensional medical radiography. First, CT eliminates the superimposition of images of structures outside the area of interest. Second, CT scans have greater image resolution, enabling examination of finer details; CT can distinguish between tissues that differ in radiographic density by 1% or less. Third, CT scanning enables multiplanar reformatted imaging: scan data can be visualized in the transverse (or axial), coronal, or sagittal plane, depending on the diagnostic task.
The improved resolution of CT has permitted the development of new investigations. For example, CT angiography avoids the invasive insertion of a catheter. CT scanning can perform a virtual colonoscopy with greater accuracy and less discomfort for the patient than a traditional colonoscopy. Virtual colonography is far more accurate than a barium enema for detection of tumors and uses a lower radiation dose.
CT is a moderate-to-high radiation diagnostic technique. The radiation dose for a particular examination depends on multiple factors: volume scanned, patient build, number and type of scans in the protocol, and desired resolution and image quality. Two helical CT scanning parameters, tube current and pitch, can be adjusted easily and have a profound effect on radiation dose.
CT scanning is more accurate than two-dimensional radiographs in evaluating anterior interbody fusion, although it may still over-read the extent of fusion.

== Adverse effects ==
=== Cancer ===
The radiation used in CT scans can damage body cells, including DNA molecules, which can lead to radiation-induced cancer. The radiation doses received from CT scans are variable. Compared to the lowest-dose X-ray techniques, CT scans can involve 100 to 1,000 times the dose of conventional X-rays. However, a lumbar spine X-ray has a similar dose to a head CT. Articles in the media often exaggerate the relative dose of CT by comparing the lowest-dose X-ray techniques (chest X-ray) with the highest-dose CT techniques. In general, a routine abdominal CT has a radiation dose similar to three years of average background radiation.
Large-scale population-based studies have consistently demonstrated that low-dose radiation from CT scans affects cancer incidence in a variety of cancers. For example, in a large population-based Australian cohort, it was found that up to 3.7% of brain cancers were caused by CT scan radiation. Some experts project that in the future, between three and five percent of all cancers would result from medical imaging. An Australian study of 10.9 million people reported that the increased incidence of cancer after CT scan exposure in this cohort was mostly due to irradiation. In this group, one in every 1,800 CT scans was followed by an excess cancer. If the lifetime risk of developing cancer is 40%, then the absolute risk rises to about 40.05% after a CT (see the arithmetic sketch below). The risks of CT scan radiation are especially important in patients undergoing recurrent CT scans within a short time span of one to five years.
Some experts note that CT scans are known to be "overused," and "there is distressingly little evidence of better health outcomes associated with the current high rate of scans." On the other hand, a recent paper analyzing the data of patients who received high cumulative doses showed a high degree of appropriate use. This creates an important issue of cancer risk to these patients. Moreover, a highly significant finding that was previously unreported is that some patients received a dose of more than 100 mSv from CT scans in a single day, which counteracts existing criticisms some investigators may have regarding the effects of protracted versus acute exposure.
There are contrarian views, and the debate is ongoing. Some studies have shown that publications indicating an increased risk of cancer from typical doses of body CT scans are plagued with serious methodological limitations and several highly improbable results, concluding that no evidence indicates such low doses cause any long-term harm. One study estimated that as many as 0.4% of cancers in the United States resulted from CT scans, and that this may have increased to as much as 1.5 to 2% based on the rate of CT use in 2007. Others dispute this estimate, as there is no consensus that the low levels of radiation used in CT scans cause damage. Lower radiation doses are used in many cases, such as in the investigation of renal colic.
A person's age plays a significant role in the subsequent risk of cancer. The estimated lifetime cancer mortality risk from an abdominal CT of a one-year-old is 0.1%, or 1 in 1,000 scans. The risk for someone who is 40 years old is half that of someone who is 20 years old, with substantially less risk in the elderly.
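The shift in absolute lifetime risk quoted above can be checked with simple arithmetic. A minimal sketch, using only the baseline and per-scan figures cited in the text:

```python
baseline_lifetime_risk = 0.40    # cited baseline lifetime cancer risk (40%)
excess_per_scan = 1.0 / 1800.0   # one excess cancer per 1,800 CT scans

risk_after_one_scan = baseline_lifetime_risk + excess_per_scan
print(f"{risk_after_one_scan:.2%}")  # ~40.06%, consistent with the ~40.05% quoted
```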
The International Commission on Radiological Protection estimates that exposing a fetus to 10 mGy (a unit of radiation exposure) increases the rate of cancer before 20 years of age from 0.03% to 0.04% (for reference, a CT pulmonary angiogram exposes a fetus to 4 mGy). A 2012 review did not find an association between medical radiation and cancer risk in children, noting, however, limitations in the evidence on which the review was based.
CT scans can be performed with different settings for lower exposure in children, and most manufacturers of CT scanners had this function built in as of 2007. Furthermore, certain conditions can require children to be exposed to multiple CT scans. Current recommendations are to inform patients of the risks of CT scanning. However, employees of imaging centers tend not to communicate such risks unless patients ask.

=== Contrast reactions ===
In the United States half of CT scans are contrast CTs using intravenously injected radiocontrast agents. The most common reactions from these agents are mild, including nausea, vomiting, and an itching rash. Severe life-threatening reactions may rarely occur. Overall, reactions occur in 1 to 3% of people with nonionic contrast and 4 to 12% of people with ionic contrast. Skin rashes may appear within a week in 3% of people.
The older radiocontrast agents caused anaphylaxis in 1% of cases, while the newer, low-osmolar agents cause reactions in 0.01–0.04% of cases. Death occurs in about 2 to 30 people per 1,000,000 administrations, with newer agents being safer. There is a higher risk of mortality in those who are female, elderly, or in poor health, usually secondary to either anaphylaxis or acute kidney injury.
The contrast agent may induce contrast-induced nephropathy. This occurs in 2 to 7% of people who receive these agents, with greater risk in those who have preexisting kidney failure, preexisting diabetes, or reduced intravascular volume. People with mild kidney impairment are usually advised to ensure full hydration for several hours before and after the injection. For moderate kidney failure, the use of iodinated contrast should be avoided; this may mean using an alternative technique instead of CT. Those with severe kidney failure requiring dialysis require less strict precautions, as their kidneys have so little function remaining that any further damage would not be noticeable and the dialysis will remove the contrast agent; it is normally recommended, however, to arrange dialysis as soon as possible following contrast administration to minimize any adverse effects of the contrast.
In addition to the use of intravenous contrast, orally administered contrast agents are frequently used when examining the abdomen. These are frequently the same as the intravenous contrast agents, merely diluted to approximately 10% of the concentration. However, oral alternatives to iodinated contrast exist, such as very dilute (0.5–1% w/v) barium sulfate suspensions. Dilute barium sulfate has the advantage that it does not cause allergic-type reactions or kidney failure, but it cannot be used in patients with suspected bowel perforation or suspected bowel injury, as leakage of barium sulfate from damaged bowel can cause fatal peritonitis. Side effects from contrast agents, administered intravenously in some CT scans, might impair kidney performance in patients with kidney disease, although this risk is now believed to be lower than previously thought.
=== Scan dose === The table reports average radiation exposures; however, there can be wide variation in radiation doses between similar scan types, with the highest dose being as much as 22 times higher than the lowest. A typical plain-film X-ray involves a radiation dose of 0.01 to 0.15 mGy, while a typical CT can involve 10–20 mGy for specific organs and can go up to 80 mGy for certain specialized CT scans. For purposes of comparison, the world average dose rate from naturally occurring sources of background radiation is 2.4 mSv per year, equal for practical purposes in this application to 2.4 mGy per year. While there is some variation, most people (99%) receive less than 7 mSv per year of background radiation. Medical imaging as of 2007 accounted for half of the radiation exposure of those in the United States, with CT scans making up two-thirds of this amount. In the United Kingdom it accounts for 15% of radiation exposure. The average radiation dose from medical sources is about 0.6 mSv per person globally as of 2007. Those in the nuclear industry in the United States are limited to doses of 50 mSv a year and 100 mSv every 5 years. Lead is the main material used by radiography personnel for shielding against scattered X-rays. ==== Radiation dose units ==== The radiation dose reported in grays or milligrays is proportional to the amount of energy that the irradiated body part is expected to absorb, and the physical effect of X-ray radiation on the cells' chemical bonds (such as DNA double-strand breaks) is proportional to that energy. The sievert is used to report the effective dose. In the context of CT scans, the sievert does not correspond to the actual radiation dose absorbed by the scanned body part; rather, it refers to a hypothetical scenario in which the whole body absorbs some other radiation dose, of a magnitude estimated to carry the same probability of inducing cancer as the CT scan. Thus, as is shown in the table above, the actual radiation absorbed by a scanned body part is often much larger than the effective dose suggests. A specific measure, termed the computed tomography dose index (CTDI), is commonly used as an estimate of the radiation absorbed dose for tissue within the scan region, and is automatically computed by medical CT scanners. The equivalent dose, also reported in sieverts, is the effective dose for a case in which the whole body actually absorbs the same radiation dose. In the case of non-uniform radiation, or radiation given to only part of the body, which is common for CT examinations, using the local equivalent dose alone would overstate the biological risks to the entire organism. ==== Effects of radiation ==== Most adverse health effects of radiation exposure fall into two general categories: deterministic effects (harmful tissue reactions), due in large part to the killing or malfunction of cells following high doses; and stochastic effects, i.e., cancer and heritable effects, involving either cancer development in exposed individuals owing to mutation of somatic cells or heritable disease in their offspring owing to mutation of reproductive (germ) cells. The added lifetime risk of developing cancer from a single abdominal CT of 8 mSv is estimated to be 0.05%, or 1 in 2,000. Because of the increased susceptibility of fetuses to radiation exposure, the radiation dosage of a CT scan is an important consideration in the choice of medical imaging in pregnancy.
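To illustrate how an effective dose in sieverts is assembled from organ doses, here is a minimal sketch using the ICRP 103 tissue weighting factors; the organ doses in the example are hypothetical:

```python
# Minimal sketch: effective dose E = sum over tissues of w_T * H_T,
# where H_T is the equivalent dose to tissue T and w_T is the ICRP 103
# tissue weighting factor. The organ doses below are hypothetical.

ICRP103_WEIGHTS = {
    "red bone marrow": 0.12, "colon": 0.12, "lung": 0.12, "stomach": 0.12,
    "breast": 0.12, "remainder tissues": 0.12,
    "gonads": 0.08,
    "bladder": 0.04, "oesophagus": 0.04, "liver": 0.04, "thyroid": 0.04,
    "bone surface": 0.01, "brain": 0.01, "salivary glands": 0.01, "skin": 0.01,
}
assert abs(sum(ICRP103_WEIGHTS.values()) - 1.0) < 1e-9  # weights sum to 1

def effective_dose_msv(organ_doses_msv: dict) -> float:
    """Weighted sum of equivalent doses; organs not irradiated contribute 0."""
    return sum(ICRP103_WEIGHTS[t] * h for t, h in organ_doses_msv.items())

# Hypothetical abdominal scan: only some organs receive an appreciable dose.
print(effective_dose_msv({"colon": 15.0, "stomach": 12.0, "liver": 10.0, "bladder": 8.0}))
```

Note how the resulting effective dose (about 4 mSv here) is much smaller than the individual organ doses, which is the point made above about effective dose understating local absorbed dose.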
==== Excess doses ==== In October 2009, the US Food and Drug Administration (FDA) initiated an investigation of brain perfusion CT (PCT) scans, based on radiation burns caused by incorrect settings at one particular facility for this particular type of CT scan. Over 200 patients were exposed to radiation at approximately eight times the expected dose over an 18-month period; over 40% of them lost patches of hair. This event prompted a call for increased CT quality assurance programs. It was noted that "while unnecessary radiation exposure should be avoided, a medically needed CT scan obtained with appropriate acquisition parameters has benefits that outweigh the radiation risks." Similar problems have been reported at other centers. These incidents are believed to be due to human error. == Procedure == CT scan procedure varies according to the type of study and the organ being imaged. The patient lies on the CT table, and the table is centered according to the body part. An IV line is established in the case of contrast-enhanced CT. After the proper volume and rate of contrast are selected on the pressure injector, the scout is taken to localize and plan the scan. Once the plan is selected, the contrast is given. The raw data is processed according to the study, and proper windowing is done to make the scans easy to diagnose. === Preparation === Patient preparation may vary according to the type of scan. General patient preparation includes: signing the informed consent; removing metallic objects and jewelry from the region of interest; changing into a hospital gown according to hospital protocol; and checking kidney function, especially creatinine and urea levels (in the case of CECT). == Mechanism == Computed tomography operates by using an X-ray generator that rotates around the object; X-ray detectors are positioned on the opposite side of the circle from the X-ray source. As the X-rays pass through the patient, they are attenuated differently by various tissues according to the tissue density. A visual representation of the raw data obtained is called a sinogram, but it is not sufficient for interpretation. Once the scan data has been acquired, the data must be processed using a form of tomographic reconstruction, which produces a series of cross-sectional images. These cross-sectional images are made up of small units called pixels or voxels. Pixels in an image obtained by CT scanning are displayed in terms of relative radiodensity. Each pixel is displayed according to the mean attenuation of the tissue(s) that it corresponds to, on a scale from +3,071 (most attenuating) to −1,024 (least attenuating) on the Hounsfield scale. A pixel is a two-dimensional unit based on the matrix size and the field of view. When the CT slice thickness is also factored in, the unit is known as a voxel, which is a three-dimensional unit. Water has an attenuation of 0 Hounsfield units (HU), while air is −1,000 HU, cancellous bone is typically +400 HU, and cranial bone can reach 2,000 HU or more (os temporale) and can cause artifacts. The attenuation of metallic implants depends on the atomic number of the element used: titanium usually has a value of about +1,000 HU, while iron or steel can completely extinguish the X-ray beam and is therefore responsible for the well-known streak artifacts in computed tomograms. Artifacts are caused by abrupt transitions between low- and high-density materials, which result in data values that exceed the dynamic range of the processing electronics.
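The Hounsfield rescaling and the windowing step mentioned in the procedure section can be sketched in a few lines of Python; this is a simplified illustration, not scanner software, and the window values are examples only:

```python
import numpy as np

def to_hounsfield(mu, mu_water, mu_air):
    """Linear attenuation coefficient -> Hounsfield units (HU).
    By definition, water maps to 0 HU and air to -1000 HU."""
    return 1000.0 * (mu - mu_water) / (mu_water - mu_air)

def apply_window(hu, center, width):
    """Map an HU range of interest onto 0..255 display gray levels.
    For example, a bone window might use center=400, width=1800."""
    lo, hi = center - width / 2, center + width / 2
    return np.clip((hu - lo) / (hi - lo) * 255.0, 0, 255).astype(np.uint8)

hu = np.array([-1000.0, 0.0, 400.0, 2000.0])  # air, water, cancellous bone, dense bone
print(apply_window(hu, center=400, width=1800))
```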
Two-dimensional CT images are conventionally rendered so that the view is as though looking up at the patient from the feet. Hence, the left side of the image is the patient's right and vice versa, while anterior in the image is also the patient's anterior and vice versa. This left-right interchange corresponds to the view that physicians generally have in reality when positioned in front of patients. Initially, the images generated in CT scans were in the transverse (axial) anatomical plane, perpendicular to the long axis of the body. Modern scanners allow the scan data to be reformatted as images in other planes. Digital geometry processing can generate a three-dimensional image of an object inside the body from a series of two-dimensional radiographic images taken by rotation around a fixed axis. These cross-sectional images are widely used for medical diagnosis and therapy. === Contrast === Contrast media used for X-ray CT, as well as for plain-film X-ray, are called radiocontrasts. Radiocontrasts for CT are, in general, iodine-based. This is useful to highlight structures such as blood vessels that would otherwise be difficult to delineate from their surroundings. Using contrast material can also help to obtain functional information about tissues. Often, images are taken both with and without radiocontrast. == History == The history of X-ray computed tomography goes back to at least 1917 with the mathematical theory of the Radon transform. In October 1963, William H. Oldendorf received a U.S. patent for a "radiant energy apparatus for investigating selected areas of interior objects obscured by dense material". The first commercially viable CT scanner was invented by Godfrey Hounsfield in 1972. It is often claimed that revenues from the sales of The Beatles' records in the 1960s helped fund the development of the first CT scanner at EMI. The first production X-ray CT machines were in fact called EMI scanners. === Etymology === The word tomography is derived from the Greek tome 'slice' and graphein 'to write'. Computed tomography was originally known as the "EMI scan", as it was developed in the early 1970s at a research branch of EMI, a company best known today for its music and recording business. It was later known as computed axial tomography (CAT or CT scan) and body section röntgenography. The term CAT scan is no longer in technical use, because current CT scans allow multiplanar reconstructions. This makes CT scan the most appropriate term, and it is used by radiologists in common vernacular as well as in textbooks and scientific papers. In Medical Subject Headings (MeSH), computed axial tomography was used from 1977 to 1979, but the current indexing explicitly includes X-ray in the title. The term sinogram was introduced by Paul Edholm and Bertil Jacobson in 1975. == Society and culture == === Campaigns === In response to increased concern by the public and the ongoing progress of best practices, the Alliance for Radiation Safety in Pediatric Imaging was formed within the Society for Pediatric Radiology. In concert with the American Society of Radiologic Technologists, the American College of Radiology, and the American Association of Physicists in Medicine, the Society for Pediatric Radiology developed and launched the Image Gently Campaign, which is designed to maintain high-quality imaging studies while using the lowest doses and best radiation safety practices available on pediatric patients.
This initiative has been endorsed and applied by a growing list of professional medical organizations around the world and has received support and assistance from companies that manufacture equipment used in radiology. Following the success of the Image Gently campaign, the American College of Radiology, the Radiological Society of North America, the American Association of Physicists in Medicine, and the American Society of Radiologic Technologists launched a similar campaign to address this issue in the adult population, called Image Wisely. The World Health Organization and the International Atomic Energy Agency (IAEA) of the United Nations have also been working in this area and have ongoing projects designed to broaden best practices and lower patient radiation dose. === Prevalence === Use of CT has increased dramatically over the last two decades. An estimated 72 million scans were performed in the United States in 2007, accounting for close to half of the total per-capita dose rate from radiologic and nuclear medicine procedures. Six to eleven percent of CT scans are done in children, an increase of seven- to eightfold from 1980. Similar increases have been seen in Europe and Asia. In Calgary, Canada, 12.1% of people who presented to the emergency department with an urgent complaint received a CT scan, most commonly either of the head or of the abdomen. The percentage who received CT, however, varied markedly by the emergency physician who saw them, from 1.8% to 25%. In emergency departments in the United States, CT or MRI imaging was done in 15% of people who presented with injuries as of 2007 (up from 6% in 1998). The increased use of CT scans has been greatest in two fields: screening of adults (screening CT of the lung in smokers, virtual colonoscopy, CT cardiac screening, and whole-body CT in asymptomatic patients) and CT imaging of children. Shortening of the scanning time to around 1 second, eliminating the strict need for the subject to remain still or be sedated, is one of the main reasons for the large increase in the pediatric population (especially for the diagnosis of appendicitis). As of 2007, a proportion of CT scans in the United States were performed unnecessarily; some estimates place this number at 30%. There are a number of reasons for this, including legal concerns, financial incentives, and public demand. For example, some healthy people pay eagerly to receive full-body CT scans as screening, in which case it is not at all clear that the benefits outweigh the risks and costs: deciding whether and how to treat incidentalomas is complex, radiation exposure is not negligible, and the money for the scans involves opportunity cost. == Manufacturers == Major manufacturers of CT scanning devices and equipment include Canon Medical Systems Corporation, Fujifilm Healthcare, GE HealthCare, Neusoft Medical Systems, Philips, Siemens Healthineers, and United Imaging. == Research == Photon-counting computed tomography is a CT technique currently under development. Typical CT scanners use energy-integrating detectors: photons are measured as a voltage on a capacitor that is proportional to the X-rays detected. However, this technique is susceptible to noise and other factors that can affect the linearity of the voltage-to-X-ray-intensity relationship. Photon-counting detectors (PCDs) are still affected by noise, but the noise does not change the measured counts of photons.
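A toy model of the distinction just described, under deliberately simplified assumptions (uniformly distributed photon energies, Gaussian electronic noise; not a model of real detector electronics):

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate photon energies (keV) arriving at one detector element.
photons = rng.uniform(20, 120, size=1000)

# Energy-integrating detector: the signal is the summed energy, so
# electronic noise is added to (and distorts) the measured signal.
electronic_noise = rng.normal(0, 200)
integrated_signal = photons.sum() + electronic_noise

# Photon-counting detector: each photon above a threshold increments
# a counter, so electronic noise below the threshold changes nothing.
threshold_kev = 25
counted = np.sum(photons > threshold_kev)

print(integrated_signal, counted)
```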
PCDs have several potential advantages, including improving signal-to-noise (and contrast-to-noise) ratios, reducing doses, improving spatial resolution, and, through the use of several energies, distinguishing multiple contrast agents. PCDs have only recently become feasible in CT scanners due to improvements in detector technologies that can cope with the volume and rate of data required. As of February 2016, photon-counting CT was in use at three sites. Some early research has found the dose-reduction potential of photon-counting CT for breast imaging to be very promising. In view of recent findings of high cumulative doses to patients from recurrent CT scans, there has been a push for scanning technologies and techniques that reduce ionising radiation doses to patients to sub-millisievert (sub-mSv in the literature) levels during the CT scan process, a long-standing goal. == See also == == References == == External links == Development of CT imaging CT Artefacts—PPT by David Platten Filler A (2009-06-30). "The History, Development and Impact of Computed Imaging in Neurological Diagnosis and Neurosurgery: CT, MRI, and DTI". Nature Precedings: 1. doi:10.1038/npre.2009.3267.4. ISSN 1756-0357. Boone JM, McCollough CH (2021). "Computed tomography turns 50". Physics Today. 74 (9): 34–40. Bibcode:2021PhT....74i..34B. doi:10.1063/PT.3.4834. ISSN 0031-9228. S2CID 239718717.
Wikipedia/Computerised_tomography
Functional magnetic resonance imaging or functional MRI (fMRI) measures brain activity by detecting changes associated with blood flow. This technique relies on the fact that cerebral blood flow and neuronal activation are coupled. When an area of the brain is in use, blood flow to that region also increases. The primary form of fMRI uses the blood-oxygen-level dependent (BOLD) contrast, discovered by Seiji Ogawa in 1990. This is a type of specialized brain and body scan used to map neural activity in the brain or spinal cord of humans or other animals by imaging the change in blood flow (hemodynamic response) related to energy use by brain cells. Since the early 1990s, fMRI has come to dominate brain mapping research because it does not involve the use of injections, surgery, the ingestion of substances, or exposure to ionizing radiation. This measure is frequently corrupted by noise from various sources; hence, statistical procedures are used to extract the underlying signal. The resulting brain activation can be graphically represented by color-coding the strength of activation across the brain or the specific region studied. The technique can localize activity to within millimeters but, using standard techniques, no better than within a window of a few seconds. Other methods of obtaining contrast are arterial spin labeling and diffusion MRI. Diffusion MRI is similar to BOLD fMRI but provides contrast based on the magnitude of diffusion of water molecules in the brain. In addition to detecting BOLD responses from activity due to tasks or stimuli, fMRI can measure the resting state, or negative-task state, which shows the subjects' baseline BOLD variance. Since about 1998, studies have shown the existence and properties of the default mode network, a functionally connected neural network of apparent resting brain states. fMRI is used in research and, to a lesser extent, in clinical work. It can complement other measures of brain physiology such as electroencephalography (EEG) and near-infrared spectroscopy (NIRS). Newer methods that improve both spatial and temporal resolution are being researched, and these largely use biomarkers other than the BOLD signal. Some companies have developed commercial products such as lie detectors based on fMRI techniques, but the research is not believed to be developed enough for widespread commercial use. == Overview == The fMRI concept builds on the earlier MRI scanning technology and the discovery of the properties of oxygen-rich blood. MRI brain scans use a strong, uniform, static magnetic field to align the spins of nuclei in the brain region being studied. Another magnetic field, with a gradient rather than a uniform strength, is then applied to spatially distinguish different nuclei. Finally, a radiofrequency (RF) pulse is applied to flip the nuclear spins, with the effect depending on where they are located, due to the gradient field. After the RF pulse, the nuclei return to their original (equilibrium) spin populations, and the energy they emit is measured with a coil. The use of the gradient field allows the positions of the nuclei to be determined. MRI thus provides a static structural view of brain matter. The central thrust behind fMRI was to extend MRI to capture functional changes in the brain caused by neuronal activity. Differences in magnetic properties between arterial (oxygen-rich) and venous (oxygen-poor) blood provided this link.
Since the 1890s, it has been known that changes in blood flow and blood oxygenation in the brain (collectively known as brain hemodynamics) are closely linked to neural activity. When neurons become active, local blood flow to those brain regions increases, and oxygen-rich (oxygenated) blood displaces oxygen-depleted (deoxygenated) blood around 2 seconds later. This rises to a peak over 4–6 seconds, before falling back to the original level (and typically undershooting slightly). Oxygen is carried by the hemoglobin molecule in red blood cells. Deoxygenated hemoglobin (dHb) is more magnetic (paramagnetic) than oxygenated hemoglobin (Hb), which is weakly repelled by magnetic fields (diamagnetic). This difference leads to an improved MR signal, since the diamagnetic blood interferes with the magnetic MR signal less. This improvement can be mapped to show which neurons are active at a time. === History === During the late 19th century, Angelo Mosso invented the 'human circulation balance', which could non-invasively measure the redistribution of blood during emotional and intellectual activity. However, although briefly mentioned by William James in 1890, the details and precise workings of this balance and the experiments Mosso performed with it remained largely unknown until the recent discovery of the original instrument, as well as Mosso's reports, by Stefano Sandrone and colleagues. Angelo Mosso investigated several critical variables that are still relevant in modern neuroimaging, such as the signal-to-noise ratio, the appropriate choice of the experimental paradigm, and the need for the simultaneous recording of differing physiological parameters. Mosso's manuscripts do not provide direct evidence that the balance was really able to measure changes in cerebral blood flow due to cognition; however, a modern replication performed by David T. Field has demonstrated, using modern signal-processing techniques unavailable to Mosso, that a balance apparatus of this type is able to detect changes in cerebral blood volume related to cognition. In 1890, Charles Roy and Charles Sherrington first experimentally linked brain function to its blood flow, at Cambridge University. The next step in resolving how to measure blood flow to the brain was Linus Pauling's and Charles Coryell's discovery in 1936 that oxygen-rich blood with Hb was weakly repelled by magnetic fields, while oxygen-depleted blood with dHb was attracted to a magnetic field, though less so than ferromagnetic elements such as iron. Seiji Ogawa at AT&T Bell Labs recognized that this could be used to augment MRI, which could study only the static structure of the brain, since the differing magnetic properties of dHb and Hb caused by blood flow to activated brain regions would cause measurable changes in the MRI signal. BOLD is the MRI contrast of dHb, discovered in 1990 by Ogawa. In a seminal 1990 study based on earlier work by Thulborn et al., Ogawa and colleagues scanned rodents in a strong magnetic field (7.0 T) MRI. To manipulate blood oxygen level, they changed the proportion of oxygen the animals breathed. As this proportion fell, a map of blood flow in the brain was seen in the MRI. They verified this by placing test tubes with oxygenated or deoxygenated blood and creating separate images. They also showed that gradient-echo images, which depend on a form of loss of magnetization called T2* decay, produced the best images.
To show these blood-flow changes were related to functional brain activity, they changed the composition of the air breathed by rats, and scanned them while monitoring brain activity with EEG. The first attempt to detect regional brain activity using MRI was performed by Belliveau and colleagues at Harvard University using the contrast agent Magnevist, a paramagnetic substance remaining in the bloodstream after intravenous injection. However, this method is not popular in human fMRI, because of the inconvenience of the contrast agent injection, and because the agent stays in the blood only for a short time. Three studies in 1992 were the first to explore using the BOLD contrast in humans. Kenneth Kwong and colleagues, using both gradient-echo and inversion recovery echo-planar imaging (EPI) sequences at a magnetic field strength of 1.5 T, published studies showing clear activation of the human visual cortex. The Harvard team thereby showed that both blood flow and blood volume increased locally in active neural tissue. Ogawa and Ugurbil conducted a similar study using a higher magnetic field (4.0 T) in Ugurbil's laboratory at the University of Minnesota, generating higher-resolution images that showed activity largely following the gray matter of the brain, as would be expected; in addition, they showed that the fMRI signal depended on a decrease in T2*, consistent with the BOLD mechanism. T2* decay is caused by magnetized nuclei in a volume of space losing magnetic coherence (transverse magnetization) both from bumping into one another and from experiencing differences in the magnetic field strength across locations (field inhomogeneity from a spatial gradient). Bandettini and colleagues used EPI at 1.5 T to show activation in the primary motor cortex, a brain area at the last stage of the circuitry controlling voluntary movements. The magnetic fields, pulse sequences, and procedures used by these early studies are still used in current-day fMRI studies, but today researchers typically collect data from more slices (using stronger magnetic gradients), and preprocess and analyze data using statistical techniques. === Physiology === The brain does not store much glucose, its primary source of energy. When neurons become active, getting them back to their original state of polarization requires actively pumping ions across the neuronal cell membranes, in both directions. The energy for those ion pumps is mainly produced from glucose. More blood flows in to transport more glucose, also bringing in more oxygen in the form of oxygenated hemoglobin molecules in red blood cells. This is from both a higher rate of blood flow and an expansion of blood vessels. The blood-flow change is localized to within 2 or 3 mm of where the neural activity is. Usually the brought-in oxygen is more than the oxygen consumed in burning glucose (it is not yet settled whether most glucose consumption is oxidative), and this causes a net decrease in deoxygenated hemoglobin (dHb) in that brain area's blood vessels. This changes the magnetic property of the blood, making it interfere less with the magnetization and its eventual decay induced by the MRI process. The cerebral blood flow (CBF) corresponds to the consumed glucose differently in different brain regions. Initial results show there is more inflow than consumption of glucose in regions such as the amygdala, basal ganglia, thalamus, and cingulate cortex, all of which are recruited for fast responses.
In regions that are more deliberative, such as the lateral frontal and lateral parietal lobes, it seems that incoming flow is less than consumption. This affects BOLD sensitivity. Hemoglobin differs in how it responds to magnetic fields, depending on whether it has a bound oxygen molecule. The dHb molecule is more attracted to magnetic fields. Hence, it distorts the surrounding magnetic field induced by an MRI scanner, causing the nuclei there to lose magnetization faster via T2* decay. Thus MR pulse sequences sensitive to T2* show more MR signal where blood is highly oxygenated and less where it is not. This effect increases with the square of the strength of the magnetic field. The fMRI signal hence needs both a strong magnetic field (1.5 T or higher) and a pulse sequence such as EPI, which is sensitive to T2* contrast. The physiological blood-flow response largely decides the temporal sensitivity, that is, how accurately we can measure when neurons are active, in BOLD fMRI. The basic time-resolution parameter (sampling time) is designated TR; the TR dictates how often a particular brain slice is excited and allowed to lose its magnetization. TRs can vary from the very short (500 ms) to the very long (3 s). For fMRI specifically, the hemodynamic response lasts over 10 seconds, rising multiplicatively (that is, as a proportion of current value), peaking at 4 to 6 seconds, and then falling multiplicatively. Changes in the blood-flow system, the vascular system, integrate responses to neuronal activity over time. Because this response is a smooth continuous function, sampling with ever-faster TRs does not help; it just gives more points on the response curve obtainable by simple linear interpolation anyway. Experimental paradigms such as staggering when a stimulus is presented across trials can improve temporal resolution, but reduce the number of effective data points obtained. == BOLD hemodynamic response == The change in the MR signal from neuronal activity is called the hemodynamic response (HR). It lags the neuronal events triggering it by a couple of seconds, since it takes a while for the vascular system to respond to the brain's need for glucose. From this point it typically rises to a peak at about 5 seconds after the stimulus. If the neurons keep firing, say from a continuous stimulus, the peak spreads to a flat plateau while the neurons stay active. After activity stops, the BOLD signal falls below the original level, the baseline, a phenomenon called the undershoot. Over time the signal recovers to the baseline. There is some evidence that continuous metabolic requirements in a brain region contribute to the undershoot. The mechanism by which the neural system provides feedback to the vascular system of its need for more glucose is partly the release of glutamate as part of neuron firing. This glutamate affects nearby supporting cells, astrocytes, causing a change in calcium ion concentration. This, in turn, releases nitric oxide at the contact point of astrocytes and intermediate-sized blood vessels, the arterioles. Nitric oxide is a vasodilator, causing arterioles to expand and draw in more blood. A single voxel's response signal over time is called its timecourse. Typically, the unwanted signal, called the noise, from the scanner, random brain activity, and similar elements is as big as the signal itself. To eliminate these, fMRI studies repeat a stimulus presentation multiple times.
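The hemodynamic response shape described in this section is often approximated by a "double-gamma" function: a peak near 5 seconds minus a smaller, delayed term that produces the post-stimulus undershoot. A minimal sketch, with illustrative parameter values (analysis packages use similar but tuned shapes):

```python
import numpy as np
from scipy.stats import gamma

# Canonical double-gamma HRF: a gamma-shaped peak minus a smaller,
# later gamma term that models the undershoot. Parameters illustrative.
t = np.arange(0.0, 30.0, 0.5)
hrf = gamma.pdf(t, 6) - 0.35 * gamma.pdf(t, 12)
hrf /= hrf.max()

print(f"peak at t = {t[hrf.argmax()]} s")                # about 5 s after the stimulus
print(f"undershoot minimum at t = {t[hrf.argmin()]} s")  # later, dipping below baseline
```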
=== Spatial resolution === Spatial resolution of an fMRI study refers to how well it discriminates between nearby locations. It is measured by the size of voxels, as in MRI. A voxel is a three-dimensional rectangular cuboid, whose dimensions are set by the slice thickness, the area of a slice, and the grid imposed on the slice by the scanning process. Full-brain studies use larger voxels, while those that focus on specific regions of interest typically use smaller sizes. Sizes range from 4 to 5 mm down to submillimeter sizes with laminar-resolution fMRI (lfMRI). Smaller voxels contain fewer neurons on average, incorporate less blood flow, and hence have less signal than larger voxels. Smaller voxels also imply longer scanning times, since scanning time rises directly with the number of voxels per slice and the number of slices. This can lead both to discomfort for the subject inside the scanner and to loss of the magnetization signal. A voxel typically contains a few million neurons and tens of billions of synapses, with the actual number depending on voxel size and the area of the brain being imaged. The vascular arterial system supplying fresh blood branches into smaller and smaller vessels as it enters the brain surface and within-brain regions, culminating in a connected capillary bed within the brain. The drainage system, similarly, merges into larger and larger veins as it carries away oxygen-depleted blood. The dHb contribution to the fMRI signal is from both the capillaries near the area of activity and the larger draining veins that may be farther away. For good spatial resolution, the signal from the large veins needs to be suppressed, since it does not correspond to the area where the neural activity is. This can be achieved either by using strong static magnetic fields or by using spin-echo pulse sequences. With these, fMRI can examine a spatial range from millimeters to centimeters, and can hence identify Brodmann areas (centimeters), subcortical nuclei such as the caudate, putamen, and thalamus, and hippocampal subfields such as the combined dentate gyrus/CA3, CA1, and subiculum. === Temporal resolution === Temporal resolution is the smallest time period of neural activity reliably separated out by fMRI. One element deciding this is the sampling time, the TR. Below a TR of 1 or 2 seconds, however, scanning just generates sharper hemodynamic response (HR) curves, without adding much additional information (e.g., beyond what is alternatively achieved by mathematically interpolating the curve gaps at a lower TR). Temporal resolution can be improved by staggering stimulus presentation across trials. If one-third of the data trials are sampled normally, one-third at 1 s, 4 s, 7 s, and so on, and the last third at 2 s, 5 s, and 8 s, the combined data provide a resolution of 1 s, though with only one-third as many total events, as the sketch below illustrates.
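A sketch of the staggering trick, assuming a noiseless, perfectly repeatable response (the toy response curve here is hypothetical):

```python
import numpy as np

TR = 3.0  # the scanner samples every 3 s

def toy_response(t):
    """Hypothetical, noiseless response curve peaking near 5 s."""
    return np.exp(-((t - 5.0) ** 2) / 8.0)

# Three groups of trials, with stimulus onset shifted by 0, 1 and 2 s
# relative to the scanner's 3 s sampling grid.
times, values = [], []
for offset in (0.0, 1.0, 2.0):
    sample_times = np.arange(0.0, 12.0, TR) + offset
    times.extend(sample_times)
    values.extend(toy_response(sample_times))

# Interleaving the three staggered grids samples the curve every 1 s,
# at the cost of only one-third as many trials per grid.
order = np.argsort(times)
print(np.array(times)[order])          # 0, 1, 2, 3, ... 11 s
print(np.array(values)[order].round(3))
```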
The time resolution needed depends on brain processing time for various events. An example of the broad range here is given by the visual processing system. What the eye sees is registered on the photoreceptors of the retina within a millisecond or so. These signals get to the primary visual cortex via the thalamus in tens of milliseconds. Neuronal activity related to the act of seeing lasts for more than 100 ms. A fast reaction, such as swerving to avoid a car crash, takes around 200 ms. By about half a second, awareness and reflection of the incident set in. Remembering a similar event may take a few seconds, and emotional or physiological changes such as fear arousal may last minutes or hours. Learned changes, such as recognizing faces or scenes, may last days, months, or years. Most fMRI experiments study brain processes lasting a few seconds, with the study conducted over some tens of minutes. Subjects may move their heads during that time, and this head motion needs to be corrected for. So does drift in the baseline signal over time. Boredom and learning may modify both subject behavior and cognitive processes. === Linear addition from multiple activation === When a person performs two tasks simultaneously or in overlapping fashion, the BOLD response is expected to add linearly. This is a fundamental assumption of many fMRI studies, based on the principle that continuously differentiable systems can be expected to behave linearly when perturbations are small; they are linear to first order. Linear addition means the only operation allowed on the individual responses before they are combined (added together) is a separate scaling of each. Since scaling is just multiplication by a constant number, this means an event that evokes, say, twice the neural response as another can be modeled as the first event presented twice simultaneously. The HR for the doubled event is then just double that of the single event. To the extent that the behavior is linear, the time course of the BOLD response to an arbitrary stimulus can be modeled by convolution of that stimulus with the impulse BOLD response. Accurate time-course modeling is important in estimating the BOLD response magnitude. This strong assumption was first studied in 1996 by Boynton and colleagues, who checked the effects on the primary visual cortex of patterns flickering 8 times a second and presented for 3 to 24 seconds. Their result showed that when the visual contrast of the image was increased, the HR shape stayed the same but its amplitude increased proportionally. With some exceptions, responses to longer stimuli could also be inferred by adding together the responses for multiple shorter stimuli summing to the same longer duration. In 1997, Dale and Buckner tested whether individual events, rather than blocks of some duration, also summed the same way, and found they did. But they also found deviations from the linear model at time intervals less than 2 seconds. A source of nonlinearity in the fMRI response is the refractory period, where brain activity from a presented stimulus suppresses further activity on a subsequent, similar stimulus. As stimuli become shorter, the refractory period becomes more noticeable. The refractory period does not change with age, nor do the amplitudes of HRs. The period differs across brain regions. In both the primary motor cortex and the visual cortex, the HR amplitude scales linearly with the duration of a stimulus or response. In the corresponding secondary regions, the supplementary motor cortex, which is involved in planning motor behavior, and the motion-sensitive V5 region, a strong refractory period is seen and the HR amplitude stays steady across a range of stimulus or response durations. The refractory effect can be used in a way similar to habituation to see what features of a stimulus a person discriminates as new. Further limits to linearity exist because of saturation: with large stimulation levels a maximum BOLD response is reached.
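Under the linearity assumption, the predicted BOLD timecourse is simply the convolution of the event train with the impulse response, and scaled or overlapping events add; a minimal sketch (the HRF parameters are illustrative):

```python
import numpy as np
from scipy.stats import gamma

t = np.arange(0.0, 32.0, 1.0)                 # 1 s sampling
hrf = gamma.pdf(t, 6) - 0.35 * gamma.pdf(t, 12)
hrf /= hrf.max()

def bold_response(event_train, amplitude=1.0):
    """Predicted BOLD timecourse under the linear model: a scaled
    copy of the HRF is added at each event onset (= convolution)."""
    return np.convolve(amplitude * event_train, hrf)[: len(event_train)]

train = np.zeros(64)
train[[5, 9]] = 1.0                           # two events, 4 s apart (overlapping HRs)

single_a = bold_response((np.arange(64) == 5).astype(float))
single_b = bold_response((np.arange(64) == 9).astype(float))
combined = bold_response(train)

# Linearity: the response to both events equals the sum of the
# individual responses, and doubling the amplitude doubles the HR.
print(np.allclose(combined, single_a + single_b))            # True
print(np.allclose(bold_response(train, 2.0), 2 * combined))  # True
```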
== Matching neural activity to the BOLD signal == Researchers have checked the BOLD signal against both signals from implanted electrodes (mostly in monkeys) and signals of field potentials (that is, the electric or magnetic field from the brain's activity, measured outside the skull) from EEG and MEG. The local field potential, which includes both postsynaptic activity and internal neuron processing, better predicts the BOLD signal. So the BOLD contrast reflects mainly the inputs to a neuron and the neuron's integrative processing within its body, and less the output firing of neurons. In humans, electrodes can be implanted only in patients who need surgery as treatment, but evidence suggests a similar relationship, at least for the auditory cortex and the primary visual cortex. Activation locations detected by BOLD fMRI in cortical areas (brain surface regions) are known to tally with CBF-based functional maps from PET scans. Some regions just a few millimeters in size, such as the lateral geniculate nucleus (LGN) of the thalamus, which relays visual inputs from the retina to the visual cortex, have been shown to generate the BOLD signal correctly when presented with visual input. Nearby regions such as the pulvinar nucleus were not stimulated by this task, indicating millimeter resolution for the spatial extent of the BOLD response, at least in thalamic nuclei. In the rat brain, single-whisker touch has been shown to elicit BOLD signals from the somatosensory cortex. However, the BOLD signal cannot separate feedback and feedforward active networks in a region; the slowness of the vascular response means the final signal is the summed version of the whole region's network, since blood flow is not discontinuous as the processing proceeds. Also, both inhibitory and excitatory input to a neuron from other neurons sum and contribute to the BOLD signal. Within a neuron, these two inputs might cancel out. The BOLD response can also be affected by a variety of factors, including disease, sedation, anxiety, medications that dilate blood vessels, and attention (neuromodulation). The amplitude of the BOLD signal does not necessarily affect its shape. A higher-amplitude signal may be seen for stronger neural activity, but peaking at the same place as a weaker signal. Also, the amplitude does not necessarily reflect behavioral performance. A complex cognitive task may initially trigger high-amplitude signals associated with good performance, but as the subject gets better at it, the amplitude may decrease with performance staying the same. This is expected to be due to increased efficiency in performing the task. The BOLD response across brain regions cannot be compared directly even for the same task, since the density of neurons and the blood-supply characteristics are not constant across the brain. However, the BOLD response can often be compared across subjects for the same brain region and the same task. More recent characterization of the BOLD signal has used optogenetic techniques in rodents to precisely control neuronal firing while simultaneously monitoring the BOLD response using high-field magnets (a technique sometimes referred to as "optofMRI"). These techniques suggest that neuronal firing is well correlated with the measured BOLD signal, including approximately linear summation of the BOLD signal over closely spaced bursts of neuronal firing. Linear summation is an assumption of commonly used event-related fMRI designs.
== Medical use == Physicians use fMRI to assess how risky brain surgery or similar invasive treatment is for a patient and to learn how a normal, diseased, or injured brain is functioning. They map the brain with fMRI to identify regions linked to critical functions such as speaking, moving, sensing, or planning. This is useful for planning surgery and radiation therapy of the brain. Clinical use of fMRI still lags behind research use. Patients with brain pathologies are more difficult to scan with fMRI than are young healthy volunteers, the typical research-subject population. Tumors and lesions can change the blood flow in ways not related to neural activity, masking the neural HR. Drugs such as antihistamines and even caffeine can affect the HR. Some patients may have disorders such as compulsive lying, which makes certain studies impossible. It is harder for those with clinical problems to stay still for long. Using head restraints or bite bars may injure epileptics who have a seizure inside the scanner; bite bars may also be uncomfortable for those with dental prostheses. Despite these difficulties, fMRI has been used clinically to map functional areas, check left-right hemispheric asymmetry in language and memory regions, check the neural correlates of a seizure, study how the brain recovers partially from a stroke, and test how well a drug or behavioral therapy works. Mapping of functional areas and understanding lateralization of language and memory help surgeons avoid removing critical brain regions when they have to operate and remove brain tissue. This is of particular importance in removing tumors and in patients who have intractable temporal lobe epilepsy. Removing tumors requires pre-surgical planning to ensure no functionally useful tissue is removed needlessly. Recovered depressed patients have shown altered fMRI activity in the cerebellum, and this may indicate a tendency to relapse. Pharmacological fMRI, assaying brain activity after drugs are administered, can be used to check how much a drug penetrates the blood–brain barrier and to obtain dose-versus-effect information about the medication. == Animal research == Research is primarily performed in non-human primates such as the rhesus macaque. These studies can be used both to check or predict human results and to validate the fMRI technique itself. But the studies are difficult, because it is hard to motivate an animal to stay still, and typical inducements such as juice trigger head movement while the animal swallows it. It is also expensive to maintain a colony of larger animals such as the macaque. == Analyzing the data == The goal of fMRI data analysis is to detect correlations between brain activation and a task the subject performs during the scan. It also aims to discover correlations with the specific cognitive states, such as memory and recognition, induced in the subject. The BOLD signature of activation is relatively weak, however, so other sources of noise in the acquired data must be carefully controlled. This means that a series of processing steps must be performed on the acquired images before the actual statistical search for task-related activation can begin. Nevertheless, it is possible to predict, for example, the emotions a person is experiencing solely from their fMRI, with a high degree of accuracy. === Sources of noise === Noise is unwanted changes to the MR signal from elements not of interest to the study.
The five main sources of noise in fMRI are thermal noise, system noise, physiological noise, random neural activity, and differences in both mental strategies and behavior across people and across tasks within a person. Thermal noise scales linearly with the static field strength, but physiological noise scales as the square of the field strength. Since the signal also scales as the square of the field strength, and since physiological noise is a large proportion of total noise, field strengths above 3 T do not always produce proportionately better images. Heat causes electrons to move around and distort the current in the fMRI detector, producing thermal noise. Thermal noise rises with the temperature. It also depends on the range of frequencies detected by the receiver coil and its electrical resistance. It affects all voxels similarly, independent of anatomy. System noise comes from the imaging hardware. One form is scanner drift, caused by the superconducting magnet's field drifting over time. Another form is changes in the current or voltage distribution of the brain itself inducing changes in the receiver coil and reducing its sensitivity. A procedure called impedance matching is used to bypass this inductance effect. There can also be noise from the magnetic field not being uniform. This is often adjusted for by using shimming coils, small magnets physically inserted, say into the subject's mouth, to patch the magnetic field. The nonuniformities are often near brain sinuses such as the ear, and plugging the cavity for long periods can be uncomfortable. The scanning process acquires the MR signal in k-space, in which overlapping spatial frequencies (that is, repeated edges in the sample's volume) are each represented with lines. Transforming this into voxels introduces some loss and distortions. Physiological noise comes from head and brain movement in the scanner due to breathing, heartbeats, or the subject fidgeting, tensing, or making physical responses such as button presses. Head movements cause the voxel-to-neurons mapping to change while scanning is in progress: since fMRI is acquired in slices, after movement a voxel continues to refer to the same absolute location in space while the neurons underneath it have changed. Noise due to head movement is a particular issue when working with children, although there are measures that can be taken to reduce head motion when scanning children, such as changes in experimental design and training prior to the scanning session. Another source of physiological noise is the change in the rate of blood flow, blood volume, and use of oxygen over time. This last component contributes two-thirds of physiological noise, which, in turn, is the main contributor to total noise. Even with the best experimental design, it is not possible to control and constrain all other background stimuli impinging on a subject: scanner noise, random thoughts, physical sensations, and the like. These produce neural activity independent of the experimental manipulation. They are not amenable to mathematical modeling and have to be controlled by the study design. A person's strategies to respond or react to a stimulus, and to solve problems, often change over time and over tasks. This generates variations in neural activity from trial to trial within a subject. Across people, too, neural activity differs for similar reasons. Researchers often conduct pilot studies to see how participants typically perform for the task under consideration.
They also often train subjects how to respond or react in a trial training session prior to the scanning one. === Preprocessing === The scanner platform generates a 3D volume of the subject's head every TR. This consists of an array of voxel intensity values, one value per voxel in the scan. The voxels are arranged one after the other, unfolding the three-dimensional structure into a single line. Several such volumes from a session are joined to form a 4D volume corresponding to a run, the time period the subject stayed in the scanner without adjusting head position. This 4D volume is the starting point for analysis. The first part of that analysis is preprocessing. The first step in preprocessing is conventionally slice timing correction. The MR scanner acquires different slices within a single brain volume at different times, and hence the slices represent brain activity at different timepoints. Since this complicates later analysis, a timing correction is applied to bring all slices to the same timepoint reference. This is done by assuming that a voxel's timecourse is smooth, so its intensity at timepoints that were not directly sampled can be calculated by interpolating between the sampled frames to create a continuous curve. Head motion correction is another common preprocessing step. When the head moves, the neurons under a voxel move, and hence its timecourse now largely represents that of some other voxel in the past. Hence the timecourse curve is effectively cut and pasted from one voxel to another. Motion correction tries different ways of undoing this to see which undoing of the cut-and-paste produces the smoothest timecourse for all voxels. The undoing is done by applying a rigid-body transform to the volume, shifting and rotating the whole volume data to account for motion. The transformed volume is compared statistically to the volume at the first timepoint to see how well they match, using a cost function such as correlation or mutual information. The transformation that gives the minimal cost function is chosen as the model for head motion. Since the head can move in a vastly varied number of ways, it is not possible to search for all possible candidates; nor is there currently an algorithm that provides a globally optimal solution independent of the first transformations tried in a chain. Distortion corrections account for field nonuniformities of the scanner. One method, as described before, is to use shimming coils. Another is to recreate a field map of the main field by acquiring two images with differing echo times. If the field were uniform, the differences between the two images also would be uniform. Note these are not true preprocessing techniques, since they are independent of the study itself. Bias field estimation is a true preprocessing technique, using mathematical models of the noise from distortion, such as Markov random fields and expectation maximization algorithms, to correct for distortion. In general, fMRI studies acquire both many functional images with fMRI and a structural image with MRI. The structural image is usually of a higher resolution and depends on a different signal, the T1 magnetic field decay after excitation. To demarcate regions of interest in the functional image, one needs to align it with the structural one. Even when whole-brain analysis is done, to interpret the final results, that is, to figure out which regions the active voxels fall in, one has to align the functional image to the structural one. This is done with a coregistration algorithm that works similarly to the motion-correction one, except that here the resolutions are different, and the intensity values cannot be directly compared, since the generating signal is different.
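Slice-timing correction, as described earlier in this section, amounts to resampling each slice's timecourse onto a common reference grid; a minimal sketch using linear interpolation (production packages typically use sinc or spline interpolation, and the signal here is a toy sinusoid):

```python
import numpy as np

TR = 2.0
n_vols = 100
ref_times = np.arange(n_vols) * TR            # start-of-TR reference grid

def slice_timing_correct(timecourse, slice_offset):
    """Resample a voxel's timecourse, acquired slice_offset seconds
    into each TR, onto the start-of-TR reference grid."""
    actual_times = ref_times + slice_offset
    return np.interp(ref_times, actual_times, timecourse)

# Toy smooth timecourse from a slice acquired 1.2 s into each TR.
freq = 0.02  # Hz, slow compared to the sampling rate
raw = np.sin(2 * np.pi * freq * (ref_times + 1.2))
corrected = slice_timing_correct(raw, slice_offset=1.2)

truth = np.sin(2 * np.pi * freq * ref_times)
print(np.abs(corrected[1:] - truth[1:]).max())  # small interpolation error
```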
Typical MRI studies scan a few different subjects. To integrate the results across subjects, one possibility is to use a common brain atlas, adjust all the brains to align to the atlas, and then analyze them as a single group. The atlases commonly used are the Talairach atlas, created by Jean Talairach from a single brain of an elderly woman, and the Montreal Neurological Institute (MNI) atlas. The second is a probabilistic map created by combining scans from over a hundred individuals. This normalization to a standard template is done by mathematically checking which combination of stretching, squeezing, and warping reduces the differences between the target and the reference. While this is conceptually similar to motion correction, the changes required are more complex than just translation and rotation, and hence optimization is even more likely to depend on the first transformations in the chain that is checked. Temporal filtering is the removal of frequencies of no interest from the signal. A voxel's intensity change over time can be represented as the sum of a number of different repeating waves with differing periods and heights. A plot with these periods on the x-axis and the heights on the y-axis is called a power spectrum, and this plot is created with the Fourier transform technique. Temporal filtering amounts to removing the periodic waves not of interest to us from the power spectrum, and then summing the waves back again, using the inverse Fourier transform to create a new timecourse for the voxel. A high-pass filter removes the lower frequencies, and the lowest frequency that can be identified with this technique is the reciprocal of twice the TR. A low-pass filter removes the higher frequencies, while a band-pass filter removes all frequencies except the particular range of interest. Smoothing, or spatial filtering, is the idea of averaging the intensities of nearby voxels to produce a smooth spatial map of intensity change across the brain or region of interest. The averaging is often done by convolution with a Gaussian filter, which, at every spatial point, weights neighboring voxels by their distance, with the weights falling off with distance following the bell curve. If the true spatial extent of activation, that is, the spread of the cluster of voxels simultaneously active, matches the width of the filter used, this process improves the signal-to-noise ratio. It also makes the total noise for each voxel follow a bell-curve distribution, since adding together a large number of independent, identical distributions of any kind produces the bell curve as the limit case. But if the presumed spatial extent of activation does not match the filter, signal is reduced. === Statistical analysis === One common approach to analyzing fMRI data is to consider each voxel separately within the framework of the general linear model. The model assumes, at every time point, that the hemodynamic response (HR) is equal to the scaled and summed version of the events active at that point. A researcher creates a design matrix specifying which events are active at any timepoint. One common way is to create a matrix with one column per overlapping event, and one row per time point, and to mark it if a particular event, say a stimulus, is active at that time point.
One then assumes a specific shape for the HR, leaving only its amplitude changeable in active voxels. The design matrix and this shape are used to generate a prediction of the exact HR of the voxel at every timepoint, using the mathematical procedure of convolution. This prediction does not include the scaling required for every event before summing them. The basic model assumes the observed HR is the predicted HR scaled by the weights for each event and then added, with noise mixed in. This generates a set of linear equations with more equations than unknowns. A linear system has an exact solution, under most conditions, when the number of equations and unknowns match. Hence one could choose any subset of the equations, with the number equal to the number of variables, and solve them. But when these solutions are plugged into the left-out equations, there will be a mismatch between the right and left sides: the error. The GLM model attempts to find the scaling weights that minimize the sum of the squares of the error. This method is provably optimal if the error is distributed as a bell curve and if the scaling-and-summing model is accurate. For a more mathematical description of the GLM model, see generalized linear models. The GLM model does not take into account the contribution of relationships between multiple voxels. Whereas GLM analysis methods assess whether a voxel or region's signal amplitude is higher or lower for one condition than another, newer statistical models such as multi-voxel pattern analysis (MVPA) utilize the unique contributions of multiple voxels within a voxel population. In a typical implementation, a classifier or more basic algorithm is trained to distinguish trials for different conditions within a subset of the data. The trained model is then tested by predicting the conditions of the remaining (independent) data. This approach is most typically achieved by training and testing on different scanner sessions or runs. If the classifier is linear, then the training model is a set of weights used to scale the value in each voxel before summing them to generate a single number that determines the condition for each testing-set trial. More information on training and testing classifiers is at statistical classification. MVPA allows for inferences about the information content of the underlying neural representations reflected in the BOLD signal, though there is controversy about whether the information detected by this method reflects information encoded at the level of columns or at higher spatial scales. Moreover, it is harder to decode information from the prefrontal cortex than from the visual cortex, and such differences in sensitivity across regions make comparisons across regions problematic. Another method applied to the same fMRI datasets for visual object recognition in the human brain depends on multi-voxel pattern analysis (of fMRI voxels) and multi-view learning; this method uses meta-heuristic search and mutual information to eliminate noisy voxels and select significant BOLD signals.
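A minimal end-to-end sketch of the voxel-wise GLM described above: build a design matrix by convolving event trains with an assumed HRF shape, simulate a voxel, and recover the scaling weights by least squares (all names and numbers are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 120  # timepoints, at an assumed TR of 1 s

# Assumed HRF shape (toy gamma-like kernel peaking near 5 s; real
# analyses use a canonical double-gamma function).
t = np.arange(20.0)
hrf = t**5 * np.exp(-t)
hrf /= hrf.max()

# Design matrix: one column per event type (convolved event trains)
# plus an intercept column for the baseline.
onsets = {"condition_A": [5, 40, 80], "condition_B": [20, 60, 100]}  # hypothetical
X = np.ones((n, 3))
for j, ons in enumerate(onsets.values()):
    train = np.zeros(n)
    train[ons] = 1.0
    X[:, j] = np.convolve(train, hrf)[:n]

# Simulated voxel: true amplitudes 2.0 and 0.5, baseline 10, plus noise.
y = X @ np.array([2.0, 0.5, 10.0]) + rng.normal(0, 0.3, n)

# Least-squares estimate of the scaling weights (the "betas").
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print(beta.round(2))  # approximately [2.0, 0.5, 10.0]
```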
The fMRI procedure can also be combined with near-infrared spectroscopy (NIRS) to obtain supplementary information about both oxyhemoglobin and deoxyhemoglobin. The fMRI technique can complement or supplement other techniques because of its unique strengths and limitations. It can noninvasively record brain signals without the risks of ionizing radiation inherent in other scanning methods, such as CT or PET scans. It can also record signal from all regions of the brain, unlike EEG/MEG, which are biased toward the cortical surface. But fMRI temporal resolution is poorer than that of EEG, since the HR takes tens of seconds to climb to its peak. Combining EEG with fMRI is hence potentially powerful because the two have complementary strengths: EEG has high temporal resolution, and fMRI high spatial resolution. But simultaneous acquisition needs to account for the artifacts induced in the EEG signal by the rapidly switching gradient fields, and for those produced by pulsatile blood flow and motion within the static field. For details, see EEG vs fMRI. While fMRI stands out due to its potential to capture neural processes associated with health and disease, brain stimulation techniques such as transcranial magnetic stimulation (TMS) have the power to alter these neural processes. Therefore, a combination of the two is needed both to investigate the mechanisms of action of TMS treatment and to introduce causality into otherwise purely correlational observations. The current state-of-the-art setup for these concurrent TMS/fMRI experiments comprises a large-volume head coil, usually a birdcage coil, with the MR-compatible TMS coil mounted inside that birdcage coil. This setup has been applied in a multitude of experiments studying local and network interactions. However, classic setups with the TMS coil placed inside an MR birdcage-type head coil are characterized by poor signal-to-noise ratios compared to the multi-channel receive arrays used in clinical neuroimaging today. Moreover, the presence of the TMS coil inside the MR birdcage coil causes artifacts beneath the TMS coil, i.e., at the stimulation target. For these reasons, new MR coil arrays dedicated to concurrent TMS/fMRI experiments have recently been developed. == Issues in fMRI == === Design === If the baseline condition is too close to maximum activation, certain processes may not be represented appropriately. Another limitation on experimental design is head motion, which can lead to artificial intensity changes of the fMRI signal. === Block versus event-related design === In a block design, two or more conditions are alternated by blocks. Each block will have a duration of a certain number of fMRI scans, and within each block only one condition is presented. By making the conditions differ in only the cognitive process of interest, the fMRI signal that differentiates the conditions should represent this cognitive process of interest. This is known as the subtraction paradigm. The increase in fMRI signal in response to a stimulus is additive. This means that the amplitude of the hemodynamic response (HR) increases when multiple stimuli are presented in rapid succession. When each block is alternated with a rest condition in which the HR has enough time to return to baseline, a maximum amount of variability is introduced into the signal. Thus, block designs offer considerable statistical power. There are, however, severe drawbacks to this method, as the signal is very sensitive to signal drift, such as that caused by head motion, especially when only a few blocks are used.
Another limiting factor is a poor choice of baseline, as it may prevent meaningful conclusions from being drawn. Many tasks also cannot be repeated. Since within each block only one condition is presented, randomization of stimulus types is not possible within a block. This makes the type of stimulus within each block very predictable. As a consequence, participants may become aware of the order of the events. Event-related designs allow more real-world testing; however, the statistical power of event-related designs is inherently low, because the signal change in the BOLD fMRI signal following a single stimulus presentation is small. Both block and event-related designs are based on the subtraction paradigm, which assumes that specific cognitive processes can be added selectively in different conditions. Any difference in blood flow (the BOLD signal) between these two conditions is then assumed to reflect the differing cognitive process. In addition, this model assumes that a cognitive process can be selectively added to a set of active cognitive processes without affecting them. === Overlap of signals === Overlapping signals in fMRI are a significant challenge in cognitive neuroscience research, particularly when multiple stimuli or tasks are presented in close temporal proximity. The BOLD response has a slow temporal resolution compared to the rapid succession of cognitive events. This causes signals from different brain processes to overlap, making it difficult to differentiate which neural activity is associated with specific stimuli or tasks. This overlap reduces the precision of event-related fMRI analyses, complicating interpretations of brain function. Traditional fMRI designs, such as block or event-related designs, face limitations in managing this signal overlap, especially in studies with non-randomized, alternating designs, where the same tasks or stimuli may appear repeatedly and close together. As a result, the timing of stimuli becomes a crucial factor in ensuring that the fMRI signal from one event is sufficiently distinct from the next. To address these challenges, researchers employ techniques like deconvolution, which mathematically separates the overlapping BOLD responses. In 2023, Das and colleagues demonstrated various ways of optimizing the timing between individual events so that the overlap of signals evoked by temporally adjacent events is minimized. These methods attempt to estimate the contribution of each neural event to the overall signal, allowing for more accurate interpretation of brain activity. Advances in analytical methods, such as specialized tools for optimizing experimental designs, are crucial for mitigating the effects of signal overlap and improving the reliability of fMRI studies. === Baseline versus activity conditions === The brain is never completely at rest. It never stops functioning and firing neuronal signals, and it continues using oxygen as long as the person is alive. In fact, in Stark and Squire's 2001 study When zero is not zero: The problem of ambiguous baseline conditions in fMRI, activity in the medial temporal lobe (as well as in other brain regions) was substantially higher during rest than during several alternative baseline conditions. The effect of this elevated activity during rest was to reduce, eliminate, or even reverse the sign of the activity during task conditions relevant to memory functions.
These results demonstrate that periods of rest are associated with significant cognitive activity and are therefore not an optimal baseline for cognition tasks. In order to discern baseline and activation conditions, it is necessary to interpret a lot of information, including something as simple as breathing. Periodic blocks may become confounded with other periodic variance in the data: if the person breathes at a regular rate of one breath per 5 seconds and the blocks occur every 10 seconds, the breathing-related signal cannot be separated from the block-related signal, impairing the data. === Reverse inference === Neuroimaging methods such as fMRI and MRI offer a measure of the activation of certain brain areas in response to cognitive tasks engaged in during the scanning process. Data obtained during this time allow cognitive neuroscientists to gain information regarding the role of particular brain regions in cognitive function. However, an issue arises when researchers claim that the activation of a brain region identifies a previously labeled cognitive process. Poldrack clearly describes this issue: The usual kind of inference that is drawn from neuroimaging data is of the form 'if cognitive process X is engaged, then brain area Z is active.' Perusal of the discussion sections of a few fMRI articles will quickly reveal, however, an epidemic of reasoning taking the following form: (1) In the present study, when task comparison A was presented, brain area Z was active. (2) In other studies, when cognitive process X was putatively engaged, then brain area Z was active. (3) Thus, the activity of area Z in the present study demonstrates engagement of cognitive process X by task comparison A. This is a 'reverse inference', in that it reasons backwards from the presence of brain activation to the engagement of a particular cognitive function. Reverse inference commits the logical fallacy of affirming the consequent, although this logic could be supported by instances where a certain outcome is generated solely by a specific occurrence. With regard to the brain and brain function, it is seldom that a particular brain region is activated solely by one cognitive process. Some suggestions to improve the legitimacy of reverse inference have included both increasing the selectivity of response in the brain region of interest and increasing the prior probability of the cognitive process in question. However, Poldrack suggests that reverse inference should be used merely as a guide to direct further inquiry rather than a direct means to interpret results. === Forward inference === Forward inference is a data-driven method that uses patterns of brain activation to distinguish between competing cognitive theories. It shares characteristics with cognitive psychology's dissociation logic and philosophy's forward chaining. For example, Henson discusses forward inference's contribution to the "single process theory vs. dual process theory" debate with regard to recognition memory. Forward inference supports the dual process theory by demonstrating that there are two qualitatively different brain activation patterns when distinguishing between "remember vs. know judgments". The main issue with forward inference is that it is a correlational method. Therefore, one cannot be completely confident that the brain regions activated during a cognitive process are necessary for the execution of that process. In fact, there are many known cases that demonstrate just that.
For example, the hippocampus has been shown to be activated during classical conditioning, yet lesion studies have demonstrated that classical conditioning can occur without the hippocampus. == Health risks == The most common risk to participants in an fMRI study is claustrophobia, and there are reported risks for pregnant women going through the scanning process. Scanning sessions also subject participants to loud, high-pitched noises from Lorentz forces induced in the gradient coils by the rapidly switching current in the powerful static field. The gradient switching can also induce currents in the body, causing nerve tingling. Implanted medical devices such as pacemakers could malfunction because of these currents. The radio-frequency field of the excitation coil may heat up the body, and this has to be monitored more carefully in people with fever, diabetes, or circulatory problems. Local burning from metal necklaces and other jewellery is also a risk. The strong static magnetic field can cause damage by pulling in nearby heavy metal objects, converting them into projectiles. There is no proven risk of biological harm from even very powerful static magnetic fields. However, genotoxic (i.e., potentially carcinogenic) effects of MRI scanning have been demonstrated in vivo and in vitro, leading a recent review to recommend "a need for further studies and prudent use in order to avoid unnecessary examinations, according to the precautionary principle". In a comparison of the genotoxic effects of MRI with those of CT scans, Knuuti et al. reported that even though the DNA damage detected after MRI was at a level comparable to that produced by scans using ionizing radiation (low-dose coronary CT angiography, nuclear imaging, and X-ray angiography), differences in the mechanism by which this damage takes place suggest that the cancer risk of MRI, if any, is unknown. == Advanced methods == The first fMRI studies validated the technique against brain activity known, from other techniques, to be correlated to tasks. By the early 2000s, fMRI studies began to discover novel correlations. Still, their technical disadvantages have spurred researchers to try more advanced ways to increase the power of both clinical and research studies. === Better spatial resolution === MRI, in general, has better spatial resolution than EEG and MEG, but not as good a resolution as invasive procedures such as single-unit electrodes. While typical resolutions are in the millimeter range, ultra-high-resolution MRI or MR spectroscopy works at a resolution of tens of micrometers. It uses 7 T fields, small-bore scanners that can fit small animals such as rats, and external contrast agents such as fine iron oxide. Fitting a human requires larger-bore scanners, which make higher field strengths harder to achieve, especially if the field has to be uniform; it also requires either internal contrast such as BOLD or a non-toxic external contrast agent, unlike iron oxide. Parallel imaging is another technique to improve spatial resolution. This uses multiple coils for excitation and reception. Spatial resolution improves as the square root of the number of coils used. This can be done either with a phased array, where the coils are combined in parallel and often sample overlapping areas with gaps in the sampling, or with massive coil arrays, which are a much denser set of receivers separate from the excitation coils.
These, however, pick up signals better from the brain surface, and less well from deeper structures such as the hippocampus. === Better temporal resolution === Temporal resolution of fMRI is limited by: (1) the feedback mechanism that raises the blood flow operating slowly; (2) having to wait until net magnetization recovers before sampling a slice again; and (3) having to acquire multiple slices to cover the whole brain or region of interest. Advanced techniques to improve temporal resolution address these issues. Using multiple coils speeds up acquisition time in direct proportion to the number of coils used. Another technique is to decide which parts of the signal matter less and drop those. This could be either those sections of the image that repeat often in a spatial map (that is, small clusters dotting the image periodically) or those sections repeating infrequently (larger clusters). The first, a high-pass filter in k-space, has been proposed by Gary H. Glover and colleagues at Stanford. These mechanisms assume the researcher has an idea of the expected shape of the activation image. Typical gradient-echo EPI uses two gradient coils within a slice, and turns on first one coil and then the other, tracing a set of lines in k-space. Turning on both gradient coils can generate angled lines, which cover the same grid space faster. Both gradient coils can also be turned on in a specific sequence to trace a spiral shape in k-space. This spiral imaging sequence acquires images faster than gradient-echo sequences, but needs more mathematical transformations (and consequent assumptions), since converting back to voxel space requires the data to be in grid form (a set of equally spaced points in both horizontal and vertical directions). === New contrast mechanisms === BOLD contrast depends on blood flow, which is both sluggish in response to stimulus and subject to noisy influences. Other biomarkers now being investigated to provide better contrast include temperature, acidity/alkalinity (pH), calcium-sensitive agents, the neuronal magnetic field, and the Lorentz effect. Temperature contrast depends on changes in brain temperature from its activity. The initial burning of glucose raises the temperature, and the subsequent inflow of fresh, cold blood lowers it. These changes alter the magnetic properties of tissue. Since the internal contrast is too difficult to measure, external agents such as thulium compounds are used to enhance the effect. Contrast based on pH depends on changes in the acid/alkaline balance of brain cells when they go active. This too often uses an external agent. Calcium-sensitive agents make MRI more sensitive to calcium concentrations, calcium ions often being the messengers for cellular signalling pathways in active neurons. Neuronal magnetic field contrast measures the magnetic and electric changes from neuronal firing directly. Lorentz-effect imaging tries to measure the physical displacement of active neurons carrying an electric current within the strong static field. Vascular-space occupancy (VASO) measures changes in cerebral blood volume (CBV), which is a less ambiguous marker of neural activity than BOLD. The signal intensity is inversely proportional to CBV, with typical signal changes around −1.5%. It relies on the T₁ difference between blood and the surrounding tissue.
Because it nulls the blood signal, VASO eliminates the contribution of large veins to the MR signal, reducing the risk of falsely localizing brain activity to nearby draining veins and achieving high spatial specificity, which makes the technique well suited for layer-specific fMRI. VASO allows for the reliable quantification of physiological parameters such as the cerebral metabolic rate of oxygen or the oxygen extraction fraction. However, the approach suffers from some limitations compared to techniques that rely on the BOLD signal. VASO generally has lower sensitivity, lower SNR and lower CNR, as well as reduced imaging efficiency and higher susceptibility to motion artifacts. === Emerging acquisition strategies === Balanced steady-state free precession (bSSFP) fMRI: bSSFP offers high SNR efficiency due to optimized magnetization usage, leading to high tSNR. It also features reduced distortion because of its short readouts, minimizing susceptibility-induced distortion and signal dropout compared to GRE-EPI. It is compatible with advanced short-readout strategies like multi-shot, spiral, and radial trajectories. However, bSSFP is prone to banding artifacts from off-resonance effects and has a complex contrast mechanism influenced by T₂/T₁, diffusion, and off-resonance. It also tends to have a higher SAR compared to GRE sequences. Pass-band bSSFP: This variant has a flat magnitude response over a broad frequency range (approximately 75% of 1/TR). Its contrast is T₂-based, driven by diffusion in susceptibility gradients and intravascular T₂ changes, with low sensitivity to frequency shifts. While offering lower contrast than long-TE GRE, it provides superior robustness to off-resonance and shim variations, making efficient whole-brain coverage achievable with minimal phase cycling. Larger flip angles can be used to operate in the pass-band region. Transition-band bSSFP: This type operates within a narrow frequency range (around 10–15% of 1/TR) with a steep magnitude and phase response. Its BOLD sensitivity is frequency-driven, where small frequency shifts cause large signal changes, potentially yielding high functional contrast (10–40%). However, this makes it highly dependent on the baseline resonance frequency of the voxel and critically dependent on shimming quality, making whole-brain coverage challenging without multiple acquisitions. Applications: bSSFP techniques can be beneficial for layer-specific imaging or studies requiring high SNR and excellent temporal resolution, especially in low-field or low-SNR brain regions. Multi-echo, multi-contrast (SAGE) fMRI: SAGE techniques aim to combine the high sensitivity of multi-gradient echo (MGE) with the microvascular specificity of spin-echo (SE). They simultaneously acquire multiple gradient echoes and spin echoes (e.g., 2 GRE, 2 asymmetric SE, 1 SE), allowing for optimized contrast-to-noise ratio (CNR) and temporal signal-to-noise ratio (tSNR) across varying T₂* values. Echoes can be weighted to emphasize either T₂*-BOLD contrast (wT₂*), similar to GRE, or T₂-BOLD contrast (wT₂), similar to SE. For instance, wT₂* weighting uses pre-refocusing GREs and asymmetric spin echoes (ASEs), while wT₂ weighting uses post-refocusing ASEs and the pure SE. This method is particularly useful for studies focused on vascular physiology or when interscan variability must be avoided. A limitation is its reliance on acceleration methods like multiband (MB) or SENSE.
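As a toy illustration of the role multiple echoes play in such sequences (a mono-exponential sketch with invented numbers, not the actual SAGE reconstruction): magnitude signals acquired at several echo times decay as S(TE) = S₀·e^(−TE/T₂*), so T₂* can be estimated from a handful of echoes with a log-linear fit.

```python
import numpy as np

# Invented echo times (seconds) and one voxel's assumed true parameters.
te = np.array([0.010, 0.025, 0.040, 0.055])
s0_true, t2s_true = 1000.0, 0.045   # arbitrary units; T2* = 45 ms

# Mono-exponential decay of the magnitude signal across echoes.
signal = s0_true * np.exp(-te / t2s_true)

# ln S = ln S0 - TE/T2*, so a straight-line fit recovers both parameters.
slope, intercept = np.polyfit(te, np.log(signal), 1)
t2s_est = -1.0 / slope        # ~0.045 s
s0_est = np.exp(intercept)    # ~1000.0
```

Per-voxel estimates of this kind are what allow echoes to be weighted toward T₂*- or T₂-dominated contrast in multi-echo acquisitions.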
Silent fMRI: Based on the 3D radial Rotating Ultra-Fast Imaging Sequence (RUFIS), Looping Star enhances RUFIS/zero echo time (ZTE) imaging by adding time-multiplexed gradient refocusing. This enables multi-echo acquisition beyond the initial FID (TE ≈ 0) to provide essential T₂* BOLD contrast. Its primary advantage is significantly reduced acoustic noise, approximately 25 dBA quieter than conventional GRE-EPI (a ~98% reduction). Limitations include potential echo-mixing artifacts requiring correction and possibly reduced tSNR and BOLD sensitivity compared to GRE-EPI. Studies have shown that while Looping Star identifies similar activation networks to GRE-EPI, it may exhibit reduced activation extent and lower T-values due to lower tSNR. Its BOLD sensitivity was found to be around 69.5% relative to GRE-EPI. Applications: Silent fMRI techniques are particularly valuable for auditory tasks (to avoid stimulus-correlated scanner noise), pediatric studies, or examining subjects sensitive to loud noise. == Commercial use == Some experiments have shown the neural correlates of people's brand preferences. Samuel M. McClure used fMRI to show that the dorsolateral prefrontal cortex, hippocampus and midbrain were more active when people knowingly drank Coca-Cola as opposed to when they drank unlabeled Coke. Other studies have shown the brain activity that characterizes men's preference for sports cars, and even differences between Democrats and Republicans in their reaction to campaign commercials with images of the 9/11 attacks. Neuromarketing companies have seized on these studies as a better tool to poll user preferences than the conventional survey technique. One such company was BrightHouse, now shut down. Another is Oxford, UK-based Neurosense, which advises clients how they could potentially use fMRI as part of their marketing business activity. A third is Sales Brain in California. At least two companies have been set up to use fMRI in lie detection: No Lie MRI and the Cephos Corporation. No Lie MRI charges close to $5000 for its services. These companies depend on evidence such as that from a study by Joshua Greene at Harvard University suggesting the prefrontal cortex is more active in those contemplating lying. However, there is still a fair amount of controversy over whether these techniques are reliable enough to be used in a legal setting. Some studies indicate that while there is an overall positive correlation, there is a great deal of variation between findings and, in some cases, considerable difficulty in replicating the findings. A federal magistrate judge in Tennessee prohibited fMRI evidence to back up a defendant's claim of telling the truth, on the grounds that such scans do not measure up to the legal standard of scientific evidence. Most researchers agree that the ability of fMRI to detect deception in a real-life setting has not been established. Use of fMRI has been left out of legal debates throughout its history; the technology has not been admitted because of holes in the evidence supporting it. First, most evidence supporting the accuracy of fMRI was obtained in a lab under controlled circumstances with solid facts. This type of testing does not pertain to real life. Real-life scenarios can be much more complicated, with many other affecting factors. It has been shown that many factors other than a typical lie affect BOLD. There have been tests done showing that drug use alters blood flow in the brain, which drastically affects the outcome of BOLD testing.
Furthermore, individuals with diseases or disorders such as schizophrenia or compulsive lying can produce abnormal results as well. Lastly, there is an ethical question relating to fMRI scanning. This testing of BOLD has led to controversy over whether fMRI scans are an invasion of privacy. Being able to scan and interpret what people are thinking may be thought of as immoral, and the controversy still continues. Because of these factors and more, fMRI evidence has been excluded from any form of legal system. The testing is too uncontrolled and unpredictable. Therefore, it has been stated that fMRI has much more testing to do before it can be considered viable in the eyes of the legal system. == Criticism == Some scholars have criticized fMRI studies for problematic statistical analyses, often based on low-power, small-sample studies. Other fMRI researchers have defended their work as valid. In 2018, Turner and colleagues suggested that small sample sizes affect the replicability of task-based fMRI studies and claimed that even datasets with at least 100 participants may not replicate well, although there are debates on this point. In one real but satirical fMRI study, a dead salmon was shown pictures of humans in different emotional states. The authors provided evidence, according to two different commonly used statistical tests, of areas in the salmon's brain suggesting meaningful activity. The study was used to highlight the need for more careful statistical analyses in fMRI research, given the large number of voxels in a typical fMRI scan and the multiple comparisons problem. Before the controversies were publicized in 2010, between 25 and 40% of published fMRI studies were not using the corrected comparisons. But by 2012, that number had dropped to 10%. Dr. Sally Satel, writing in Time, cautioned that while brain scans have scientific value, individual brain areas often serve multiple purposes and "reverse inferences" as commonly used in press reports carry a significant chance of drawing invalid conclusions. In 2015, a statistical bug was discovered in the fMRI computations that likely invalidated at least 40,000 fMRI studies preceding 2015, and researchers suggest that results prior to the bug fix cannot be relied upon. Furthermore, it was later shown that how one sets the parameters in the software determines the false positive rate; in other words, study outcome can be determined by changing software parameters. In 2020, professor Ahmad Hariri (Duke University), one of the first researchers to use fMRI, performed a large-scale experiment that sought to test the reliability of fMRI on individual people. In the study, he copied protocols from 56 published papers in psychology that used fMRI. The results suggest that fMRI has poor reliability when it comes to individual cases, but good reliability when it comes to general human thought patterns. == See also ==
Brain function
Brain mapping
Event-related fMRI
Functional neuroimaging
Functional ultrasound imaging
Linear transform model (MRI)
List of neuroscience databases
Signal enhancement by extravascular water protons (SEEP fMRI)
Functional MRI methods and findings in schizophrenia
== References ==
=== Textbooks ===
EMRF/TRTF (Peter A. Rinck, ed.), Magnetic Resonance: A peer-reviewed, critical introduction, 13th edition, 2023 (free offprint)
Joseph P. Hornak, The basics of MRI (online)
Richard B. Buxton, Introduction to Functional Magnetic Resonance Imaging: Principles and Techniques, Cambridge University Press, 2002, ISBN 0-521-58113-3
Roberto Cabeza and Alan Kingstone (eds.), Handbook of Functional Neuroimaging of Cognition, 2nd edition, MIT Press, 2006, ISBN 0-262-03344-5
Huettel, S. A.; Song, A. W.; McCarthy, G., Functional Magnetic Resonance Imaging, 2nd edition, 2009, Massachusetts: Sinauer, ISBN 978-0-87893-286-3
== Further reading ==
Langleben, D.D.; Schroeder, L.; Maldjian, J.A.; Gur, R.C.; McDonald, S.; Ragland, J.D.; O'Brien, C.P.; Childress, A.R. (March 2002). "Brain Activity during Simulated Deception: An Event-Related Functional Magnetic Resonance Study". NeuroImage. 15 (3): 727–732. doi:10.1006/nimg.2001.1003. PMID 11848716. S2CID 14676750.
Mehagnoul-Schipper, D. Jannet; van der Kallen, Bas F.W.; Colier, Willy N.J.M.; van der Sluijs, Marco C.; van Erning, Leon J.Th.O.; Thijssen, Henk O.M.; Oeseburg, Berend; Hoefnagels, Willibrord H.L.; Jansen, René W.M.M. (May 2002). "Simultaneous measurements of cerebral oxygenation changes during brain activation by near-infrared spectroscopy and functional magnetic resonance imaging in healthy young and elderly subjects". Human Brain Mapping. 16 (1): 14–23. doi:10.1002/hbm.10026. PMC 6871837. PMID 11870923.
Sally Satel; Scott O. Lilienfeld (2015). Brainwashed: The Seductive Appeal of Mindless Neuroscience. Basic Books. ISBN 978-0465062911.
== External links ==
www.mri-tutorial.com – A free learning repository about neuroimaging
BrainMapping.ORG project – Community web site for information on brain mapping and methods
fMRI videos at RadiologyTube.com (archived 2020-08-18 at the Wayback Machine) – A collection of fMRI videos
Columbia University Program for Imaging and Cognitive Sciences: fMRI
Wikipedia/Functional_magnetic_resonance_imaging
Neutron capture therapy (NCT) is a type of radiotherapy for treating locally invasive malignant tumors such as primary brain tumors, recurrent cancers of the head and neck region, and cutaneous and extracutaneous melanomas. It is a two-step process: first, the patient is injected with a tumor-localizing drug containing the stable isotope boron-10 (10B), which has a high propensity to capture low-energy "thermal" neutrons. The thermal neutron capture cross section of 10B (3,837 barns) is about 1,000 times greater than that of other elements, such as nitrogen, hydrogen, or oxygen, that occur in tissue. In the second step, the patient is irradiated with epithermal neutrons, the sources of which in the past have been nuclear reactors and now are accelerators that produce higher-energy epithermal neutrons. After losing energy as they penetrate tissue, the resultant low-energy "thermal" neutrons are captured by the 10B atoms. The resulting decay reaction yields high-energy alpha particles that kill the cancer cells that have taken up enough 10B. All clinical experience with NCT to date is with boron-10; hence this method is known as boron neutron capture therapy (BNCT). Use of another non-radioactive isotope, gadolinium-157, has been limited to experimental animal studies and has not been attempted clinically. BNCT has been evaluated as an alternative to conventional radiation therapy for malignant brain tumors such as glioblastomas, which presently are incurable, and more recently, locally advanced recurrent cancers of the head and neck region and, much less often, superficial melanomas mainly involving the skin and genital region. == Boron neutron capture therapy == === History === James Chadwick discovered the neutron in 1932. Shortly thereafter, H. J. Taylor reported that boron-10 nuclei had a high propensity to capture low-energy "thermal" neutrons. This capture reaction causes the boron-10 nucleus to split into a helium-4 nucleus (an alpha particle) and a lithium-7 ion. In 1936, G. L. Locher, a scientist at the Franklin Institute in Philadelphia, Pennsylvania, recognized the therapeutic potential of this discovery and suggested that this specific type of neutron capture reaction could be used to treat cancer. William Sweet, a neurosurgeon at the Massachusetts General Hospital, first suggested the possibility of using BNCT to treat malignant brain tumors, and in 1951 proposed evaluating it for treatment of the most malignant of all brain tumors, glioblastoma multiforme (GBM), using borax as the boron delivery agent. A clinical trial subsequently was initiated by Lee Farr using a specially constructed nuclear reactor at the Brookhaven National Laboratory in Long Island, New York, U.S.A. Another clinical trial was initiated in 1954 by Sweet at the Massachusetts General Hospital using the research reactor at the Massachusetts Institute of Technology (MIT) in Boston. A number of research groups worldwide have continued the early ground-breaking clinical studies of Sweet and Farr, and subsequently the pioneering clinical studies of Hiroshi Hatanaka (畠中洋) in the 1960s, to treat patients with brain tumors. Since then, clinical trials have been done in a number of countries including Japan, the United States, Sweden, Finland, the Czech Republic, Taiwan, and Argentina. After the nuclear accident at Fukushima (2011), the clinical program in Japan transitioned from reactor neutron sources to accelerators that produce higher-energy neutrons, which become thermalized as they penetrate tissue.
== Basic principles == Neutron capture therapy is a binary system, consisting of two separate components to achieve its therapeutic effect. Each component in itself is non-tumoricidal, but when combined they can be highly lethal to cancer cells. BNCT is based on the nuclear capture and decay reactions that occur when non-radioactive boron-10, which makes up approximately 20% of natural elemental boron, is irradiated with neutrons of the appropriate energy to yield excited boron-11 (11B*). This undergoes decay to produce high-energy alpha particles (4He nuclei) and high-energy lithium-7 (7Li) nuclei. The nuclear reaction is: 10B + nth → [11B]* → α + 7Li + 2.31 MeV Both the alpha particles and the lithium nuclei produce closely spaced ionizations in the immediate vicinity of the reaction, with a range of 5–9 μm. This is approximately the diameter of the target cell, and thus the lethality of the capture reaction is limited to boron-containing cells. BNCT, therefore, can be regarded as both a biologically and a physically targeted type of radiation therapy. The success of BNCT is dependent upon the selective delivery of sufficient amounts of 10B to the tumor with only small amounts localized in the surrounding normal tissues. Thus, normal tissues, if they have not taken up sufficient amounts of boron-10, can be spared from the neutron capture and decay reactions. Normal tissue tolerance, however, is determined by the nuclear capture reactions that occur with normal tissue hydrogen and nitrogen. A wide variety of boron delivery agents have been synthesized. The first, which has mainly been used in Japan, is a polyhedral borane anion, sodium borocaptate or BSH (Na2B12H11SH), and the second is a dihydroxyboryl derivative of phenylalanine, called boronophenylalanine or BPA. The latter has been used in many clinical trials. Following administration of either BPA or BSH by intravenous infusion, the tumor site is irradiated with neutrons, the source of which, until recently, has been specially designed nuclear reactors and now is neutron accelerators. Until 1994, low-energy (< 0.5 eV) thermal neutron beams were used in Japan and the United States, but since they have a limited depth of penetration in tissues, higher-energy (0.5 eV < En < 10 keV) epithermal neutron beams, which have a greater depth of penetration, were used in clinical trials in the United States, Europe, Japan, Argentina, Taiwan, and China until recently, when accelerators replaced the reactors. In theory, BNCT is a highly selective type of radiation therapy that can target tumor cells without causing radiation damage to the adjacent normal cells and tissues. Doses up to 60–70 grays (Gy) can be delivered to the tumor cells in one or two applications, compared to 6–7 weeks for conventional fractionated external-beam photon irradiation. However, the effectiveness of BNCT is dependent upon a relatively homogeneous cellular distribution of 10B within the tumor, and more specifically within the constituent tumor cells, and this is still one of the main unsolved problems that have limited its success. == Radiobiological considerations == The radiation doses to tumor and normal tissues in BNCT are due to energy deposition from three types of directly ionizing radiation that differ in their linear energy transfer (LET), which is the rate of energy loss along the path of an ionizing particle: 1. Low-LET gamma rays, resulting primarily from the capture of thermal neutrons by normal tissue hydrogen atoms [1H(n,γ)2H]; 2.
High-LET protons, produced by the scattering of fast neutrons and from the capture of thermal neutrons by nitrogen atoms [14N(n,p)14C]; and 3. High-LET, heavier charged alpha particles (stripped-down helium [4He] nuclei) and lithium-7 ions, released as products of the thermal neutron capture and decay reactions with 10B [10B(n,α)7Li]. Since both the tumor and surrounding normal tissues are present in the radiation field, even with an ideal epithermal neutron beam, there will be an unavoidable, non-specific background dose, consisting of both high- and low-LET radiation. However, a higher concentration of 10B in the tumor will result in the tumor receiving a higher total dose than adjacent normal tissues, which is the basis for the therapeutic gain in BNCT. The total radiation dose in Gy delivered to any tissue can be expressed in photon-equivalent units as the sum of each of the high-LET dose components multiplied by weighting factors (Gyw), which depend on the increased radiobiological effectiveness of each of these components (a schematic expression is given below). == Clinical dosimetry == Biological weighting factors have been used in all of the more recent clinical trials in patients with high-grade gliomas, using boronophenylalanine (BPA) in combination with an epithermal neutron beam. The 10B(n,α)7Li part of the radiation dose to the scalp has been based on the measured boron concentration in the blood at the time of BNCT, assuming a blood-to-scalp boron concentration ratio of 1.5:1 and a compound biological effectiveness (CBE) factor for BPA in skin of 2.5. A relative biological effectiveness (RBE) or CBE factor of 3.2 has been used in all tissues for the high-LET components of the beam, such as alpha particles. The RBE factor is used to compare the biologic effectiveness of different types of ionizing radiation. The high-LET components include protons resulting from the capture reaction with normal tissue nitrogen, and recoil protons resulting from the collision of fast neutrons with hydrogen. It must be emphasized that the tissue distribution of the boron delivery agent in humans should be similar to that in the experimental animal model in order to use the experimentally derived values for estimation of the radiation doses for clinical irradiations. For more detailed information relating to computational dosimetry and treatment planning, interested readers are referred to a comprehensive review on this subject. == Boron delivery agents == The development of boron delivery agents for BNCT began in the early 1960s and is an ongoing and difficult task. A number of boron-10-containing delivery agents have been synthesized for potential use in BNCT. The most important requirements for a successful boron delivery agent are: low systemic toxicity and low normal tissue uptake, with high tumor uptake and concomitantly high tumor-to-brain (T:Br) and tumor-to-blood (T:Bl) concentration ratios (> 3–4:1); tumor concentrations in the range of ~20–50 μg 10B/g tumor; rapid clearance from blood and normal tissues; and persistence in tumor during BNCT. However, as of 2021 no single boron delivery agent fulfills all of these criteria. With the development of new chemical synthetic techniques and increased knowledge of the biological and biochemical requirements needed for an effective agent and its modes of delivery, a wide variety of new boron agents has emerged (see examples in Table 1).
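Returning to the clinical dosimetry described above: the photon-equivalent dose delivered to a tissue is often written schematically as a weighted sum of the component doses. In illustrative notation (the symbols below are not taken from the source; the numerical factors are those quoted in the text),

{\displaystyle D_{w}=D_{\gamma }+\mathrm {RBE} \cdot \left(D_{p}+D_{n}\right)+\mathrm {CBE} \cdot D_{B}}

where Dγ is the low-LET gamma-ray dose, Dp and Dn are the high-LET proton doses from the nitrogen capture and fast-neutron recoil reactions, DB is the dose from the 10B(n,α)7Li reaction, RBE ≈ 3.2 applies to the high-LET beam components, and CBE is the agent- and tissue-specific compound factor (e.g., 2.5 for BPA in skin).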
Of the agents in Table 1, however, only one has ever been tested in large animals, and only boronophenylalanine (BPA) and sodium borocaptate (BSH) have been used clinically. Table 1 notes: (a) The delivery agents are not listed in any order indicating their potential usefulness for BNCT. None of these agents has been evaluated in animals larger than mice and rats, except for boronated porphyrin (BOPP), which also has been evaluated in dogs; however, due to the severe toxicity of BOPP in canines, no further studies were carried out. (b) See Barth, R.F., Mi, P., and Yang, W., "Boron delivery agents for neutron capture therapy of cancer", Cancer Communications, 38:35, 2018 (doi:10.1186/s40880-018-0299-7) for an updated review. (c) The abbreviations used in this table are defined as follows: BNCT, boron neutron capture therapy; DNA, deoxyribonucleic acid; EGF, epidermal growth factor; EGFR, epidermal growth factor receptor; MoAbs, monoclonal antibodies; VEGF, vascular endothelial growth factor. The major challenge in the development of boron delivery agents has been the requirement for selective tumor targeting in order to achieve boron concentrations (20–50 μg/g tumor) sufficient to produce therapeutic doses of radiation at the site of the tumor with minimal radiation delivered to normal tissues. The selective destruction of infiltrative tumor (glioma) cells in the presence of normal brain cells represents an even greater challenge compared to malignancies at other sites in the body. Malignant gliomas are highly infiltrative of normal brain, histologically diverse, and heterogeneous in their genomic profile, and it is therefore very difficult to kill all of them. == Gadolinium neutron capture therapy (Gd NCT) == There also has been some interest in the possible use of gadolinium-157 (157Gd) as a capture agent for NCT, for the following reasons: First, and foremost, has been its very high neutron capture cross section of 254,000 barns. Second, gadolinium compounds, such as Gd-DTPA (gadopentetate dimeglumine, Magnevist), have been used routinely as contrast agents for magnetic resonance imaging (MRI) of brain tumors and have shown high uptake by brain tumor cells in tissue culture (in vitro). Third, gamma rays and internal-conversion and Auger electrons are products of the 157Gd(n,γ)158Gd capture reaction (157Gd + nth (0.025 eV) → [158Gd]* → 158Gd + γ + 7.94 MeV). Though the gamma rays have longer pathlengths, with orders of magnitude greater depths of penetration compared with alpha particles, the other radiation products (internal-conversion and Auger electrons) have pathlengths of about one cell diameter and can directly damage DNA. Therefore, it would be highly advantageous for the production of DNA damage if the 157Gd were localized within the cell nucleus. However, the possibility of incorporating gadolinium into biologically active molecules is very limited, and only a small number of potential delivery agents for Gd NCT have been evaluated. Relatively few studies with Gd have been carried out in experimental animals, compared to the large number of boron-containing compounds (Table 1) that have been synthesized and evaluated in experimental animals (in vivo).
Although in vitro activity has been demonstrated using the Gd-containing MRI contrast agent Magnevist as the Gd delivery agent, there are very few studies demonstrating the efficacy of Gd NCT in experimental animal tumor models, and, as evidenced by a lack of citations in the literature, Gd NCT has not, as of 2019, been used clinically in humans. == Neutron sources == === Clinical studies using nuclear reactors as neutron sources === Until 2014, neutron sources for NCT were limited to nuclear reactors. Reactor-derived neutrons are classified according to their energies as thermal (En < 0.5 eV), epithermal (0.5 eV < En < 10 keV), or fast (En > 10 keV). Thermal neutrons are the most important for BNCT since they usually initiate the 10B(n,α)7Li capture reaction. However, because they have a limited depth of penetration, epithermal neutrons, which lose energy and fall into the thermal range as they penetrate tissues, are now preferred for clinical therapy, other than for skin tumors such as melanoma. A number of nuclear reactors with very good neutron beam quality have been developed and used clinically. These include: the Kyoto University Research Reactor Institute (KURRI) in Kumatori, Japan; the Massachusetts Institute of Technology Research Reactor (MITR); the FiR1 (Triga Mk II) research reactor at the VTT Technical Research Centre, Espoo, Finland; the RA-6 CNEA reactor in Bariloche, Argentina; the High Flux Reactor (HFR) at Petten in the Netherlands; the Tsing Hua Open-pool Reactor (THOR) at the National Tsing Hua University, Hsinchu, Taiwan; JRR-4 at the Japan Atomic Energy Agency, Tokai, Japan; and a compact In-Hospital Neutron Irradiator (IHNI) in a free-standing facility in Beijing, China. As of May 2021, only the reactors in Argentina, China, and Taiwan are still being used clinically. It is anticipated that, beginning some time in 2022, clinical studies in Finland will utilize an accelerator neutron source designed and fabricated in the United States by Neutron Therapeutics of Danvers, Massachusetts. == Clinical studies of BNCT for brain tumors == === Early studies in the US and Japan === It was not until the 1950s that the first clinical trials were initiated by Farr at the Brookhaven National Laboratory (BNL) in New York and by Sweet and Brownell at the Massachusetts General Hospital (MGH), using the Massachusetts Institute of Technology (MIT) nuclear reactor (MITR) and several different low-molecular-weight boron compounds as the boron delivery agents. However, the results of these studies were disappointing, and no further clinical trials were carried out in the United States until the 1990s. After a two-year Fulbright fellowship in Sweet's laboratory at the MGH, Hiroshi Hatanaka initiated clinical studies in Japan in 1967. He used a low-energy thermal neutron beam, which had low tissue-penetrating properties, and sodium borocaptate (BSH) as the boron delivery agent, which had been evaluated as a boron delivery agent by Albert Soloway at the MGH. In Hatanaka's procedure, as much as possible of the tumor was surgically resected ("debulking"), and at some time thereafter, BSH was administered by a slow infusion, usually intra-arterially, but later intravenously. Twelve to 14 hours later, BNCT was carried out at one or another of several different nuclear reactors using low-energy thermal neutron beams.
The poor tissue-penetrating properties of the thermal neutron beams necessitated reflecting the skin and raising a bone flap in order to directly irradiate the exposed brain, a procedure first used by Sweet and his collaborators. More than 200 patients were treated by Hatanaka, and subsequently by his associate, Nakagawa. Due to the heterogeneity of the patient population, in terms of the microscopic diagnosis of the tumor and its grade, size, and the ability of the patients to carry out normal daily activities (Karnofsky performance status), it was not possible to draw definitive conclusions about therapeutic efficacy. However, the survival data were no worse than those obtained by standard therapy at the time, and there were several patients who were long-term survivors, who most probably were cured of their brain tumors. === Further clinical studies in the United States and Japan === ==== USA (2003) ==== BNCT of patients with brain tumors was resumed in the United States in the mid-1990s by Chanana, Diaz, and Coderre and their co-workers at the Brookhaven National Laboratory, using the Brookhaven Medical Research Reactor (BMRR), and at Harvard/Massachusetts Institute of Technology (MIT), using the MIT Research Reactor (MITR). For the first time, BPA was used as the boron delivery agent, and patients were irradiated with a collimated beam of higher-energy epithermal neutrons, which had greater tissue-penetrating properties than thermal neutrons. A research group headed by Zamenhof at the Beth Israel Deaconess Medical Center/Harvard Medical School and MIT was the first to use an epithermal neutron beam for clinical trials. Initially, patients with cutaneous melanomas were treated, and this was expanded to include patients with brain tumors, specifically melanoma metastatic to the brain and primary glioblastomas (GBMs). Included in the research team were Otto Harling at MIT and the radiation oncologist Paul Busse at the Beth Israel Deaconess Medical Center in Boston. A total of 22 patients were treated by the Harvard-MIT research group. Five patients with cutaneous melanomas were also treated using an epithermal neutron beam at the MIT research reactor (MITR-II), and subsequently patients with brain tumors were treated using a redesigned beam at the MIT reactor, which possessed far superior characteristics to the original MITR-II beam, and BPA as the capture agent. The clinical outcome of the cases treated at Harvard-MIT has been summarized by Busse. Although the treatment was well tolerated, there were no significant differences in the mean survival times (MSTs) of patients who had received BNCT compared to those who received conventional external-beam X-irradiation. ==== Japan (2009) / Glioblastomas ==== Shin-ichi Miyatake and Shinji Kawabata at Osaka Medical College in Japan have carried out extensive clinical studies employing BPA (500 mg/kg), either alone or in combination with BSH (100 mg/kg), infused intravenously (i.v.) over 2 h, followed by neutron irradiation at the Kyoto University Research Reactor Institute (KURRI), in patients with newly diagnosed and recurrent glioblastomas. The mean survival time (MST) of 10 patients with recurrent high-grade gliomas in the first of their trials was 15.6 months, with one long-term survivor (> 5 years). Based on experimental animal data showing that BNCT in combination with X-irradiation produced enhanced survival compared to BNCT alone, in another study Miyatake and Kawabata combined BNCT, as described above, with an X-ray boost.
A total dose of 20 to 30 Gy was administered, divided into 2 Gy daily fractions. The MST of this group of patients (with newly diagnosed glioblastomas) was 23.5 months, and no significant toxicity was observed other than hair loss (alopecia). However, a significant subset of these patients, a high proportion of whom had small-cell variant glioblastomas, developed cerebrospinal fluid dissemination of their tumors. ==== Japan (2011) / Glioblastomas ==== In another Japanese trial in patients with newly diagnosed glioblastomas, carried out by Yamamoto et al., BPA and BSH were infused over 1 h, followed by BNCT at the Japan Research Reactor (JRR)-4. Patients subsequently received an X-ray boost after completion of BNCT. The overall median survival time (MeST) was 27.1 months, and the 1-year and 2-year survival rates were 87.5% and 62.5%, respectively. Based on the reports of Miyatake, Kawabata, and Yamamoto, combining BNCT with an X-ray boost can produce a significant therapeutic gain. However, further studies are needed to optimize this combined therapy, alone or in combination with other approaches including chemo- and immunotherapy, and to evaluate it using a larger patient population. ==== Japan (2021) / Meningiomas ==== Miyatake and his co-workers also have treated a cohort of 44 patients with recurrent high-grade meningiomas (HGMs) that were refractory to all other therapeutic approaches. The clinical regimen consisted of intravenous administration of boronophenylalanine two hours before neutron irradiation at the Kyoto University Research Reactor Institute in Kumatori, Japan. Effectiveness was determined using radiographic evidence of tumor shrinkage, overall survival (OS) after initial diagnosis, OS after BNCT, and radiographic patterns associated with treatment failure. The median OS after BNCT was 29.6 months, and 98.4 months after diagnosis. Better responses were seen in patients with lower-grade tumors. In 35 of 36 patients there was tumor shrinkage, and the median progression-free survival (PFS) was 13.7 months. There was good local control of the patients' tumors, as evidenced by the fact that only 22.2% of them experienced local recurrence. From these results, it was concluded that BNCT was effective in locally controlling tumor growth, shrinking tumors, and improving survival, with acceptable safety, in patients with therapeutically refractory HGMs. === Clinical studies in Finland === The technological and physical aspects of the Finnish BNCT program have been described in considerable detail by Savolainen et al. A team of clinicians led by Heikki Joensuu and Leena Kankaanranta and nuclear engineers led by Iro Auterinen and Hanna Koivunoro at the Helsinki University Central Hospital and the VTT Technical Research Centre of Finland have treated more than 200 patients with recurrent malignant gliomas (glioblastomas) and head and neck cancers who had undergone standard therapy, recurred, and subsequently received BNCT at the time of their recurrence using BPA as the boron delivery agent. The median time to progression in patients with gliomas was 3 months, and the overall MeST was 7 months. It is difficult to compare these results with other reported results in patients with recurrent malignant gliomas, but they are a starting point for future studies using BNCT as salvage therapy in patients with recurrent tumors. For a variety of reasons, including financial ones, no further studies have been carried out at this facility, which has been decommissioned.
However, a new facility for BNCT treatment has been installed, using an accelerator designed and fabricated by Neutron Therapeutics. This accelerator was specifically designed to be used in a hospital, and BNCT treatment and clinical studies will be carried out there after dosimetric studies have been completed in 2021. Both Finnish and foreign patients are expected to be treated at the facility. === Clinical studies in Sweden === To conclude this section on treating brain tumors with BNCT using reactor neutron sources: a clinical trial carried out by Stenstam, Sköld, Capala and their co-workers in Studsvik, Sweden, using an epithermal neutron beam produced by the Studsvik nuclear reactor, which had greater tissue-penetration properties than the thermal beams originally used in the United States and Japan, will be briefly summarized. This study differed significantly from all previous clinical trials in that the total amount of BPA administered was increased (900 mg/kg), and it was infused i.v. over 6 hours. This was based on experimental studies in glioma-bearing rats demonstrating enhanced uptake of BPA by infiltrating tumor cells following a 6-hour infusion. The longer infusion time of the BPA was well tolerated by the 30 patients who were enrolled in this study. All were treated with two fields, the average whole-brain dose was 3.2–6.1 Gy (weighted), and the minimum dose to the tumor ranged from 15.4 to 54.3 Gy (w). There has been some disagreement among the Swedish investigators regarding the evaluation of the results. Based on incomplete survival data, the MeST was reported as 14.2 months, and the time to tumor progression was 5.8 months. However, more careful examination of the complete survival data revealed that the MeST was 17.7 months, compared to the 15.5 months reported for patients who received standard therapy of surgery followed by radiotherapy (RT) and the drug temozolomide (TMZ). Furthermore, the frequency of adverse events was lower after BNCT (14%) than after RT alone (21%), and both of these were lower than those seen following RT in combination with TMZ. If this improved survival, obtained using the higher dose of BPA and a 6-hour infusion time, can be confirmed by others, preferably in a randomized clinical trial, it could represent a significant step forward in BNCT of brain tumors, especially if combined with a photon boost. == Clinical studies of BNCT for extracranial tumors == === Head and neck cancers === The single most important clinical advance over the past 15 years has been the application of BNCT to treat patients with recurrent tumors of the head and neck region who had failed all other therapies. These studies were first initiated by Kato et al. in Japan and subsequently followed by several other Japanese groups and by Kankaanranta, Joensuu, Auterinen, Koivunoro and their co-workers in Finland. All of these studies employed BPA as the boron delivery agent, usually alone but occasionally in combination with BSH. A very heterogeneous group of patients with a variety of histopathologic types of tumors has been treated, the largest number of whom had recurrent squamous cell carcinomas. Kato et al. have reported on a series of 26 patients with far-advanced cancer for whom there were no further treatment options. Either BPA + BSH or BPA alone was administered by a 1- or 2-hour i.v. infusion, and this was followed by BNCT using an epithermal beam.
In this series, there were complete regressions in 12 cases, partial regressions in 10, and progression in 3 cases. The MST was 13.6 months, and the 6-year survival was 24%. Significant treatment-related complications ("adverse events") included transient mucositis, alopecia and, rarely, brain necrosis and osteomyelitis. Kankaanranta et al. have reported their results in a prospective Phase I/II study of 30 patients with inoperable, locally recurrent squamous cell carcinomas of the head and neck region. Patients received either two or, in a few instances, one BNCT treatment using BPA (400 mg/kg), administered i.v. over 2 hours, followed by neutron irradiation. Of 29 evaluated patients, there were 13 complete and 9 partial remissions, with an overall response rate of 76%. The most common adverse events were oral mucositis, oral pain, and fatigue. Based on the clinical results, it was concluded that BNCT was effective for the treatment of inoperable, previously irradiated patients with head and neck cancer. Some responses were durable, but progression was common, usually at the site of the previously recurrent tumor. As previously indicated in the section on neutron sources, all clinical studies in Finland have ended, for a variety of reasons including the economic difficulties of the two companies directly involved, VTT and Boneca. However, an accelerator neutron source designed and fabricated by Neutron Therapeutics has been installed at the Helsinki University Hospital and should be fully functional by 2022, enabling clinical studies to resume. Finally, a group in Taiwan, led by Ling-Wei Wang and his co-workers at the Taipei Veterans General Hospital, has treated 17 patients with locally recurrent head and neck cancers at the Tsing Hua Open-pool Reactor (THOR) of the National Tsing Hua University. Two-year overall survival was 47%, and two-year loco-regional control was 28%. Studies are in progress to further optimize their treatment regimen. === Other types of tumor === ==== Melanoma and extramammary Paget's disease ==== Other extracranial tumors that have been treated with BNCT include malignant melanomas. The original studies were carried out in Japan by the late Yutaka Mishima and his clinical team in the Department of Dermatology at Kobe University using locally injected BPA and a thermal neutron beam. It was Mishima who first used BPA as a boron delivery agent, an approach subsequently extended to other types of tumors based on the experimental animal studies of Coderre et al. at the Brookhaven National Laboratory. Local control was achieved in almost all patients, and some were cured of their melanomas. Patients with melanoma of the head and neck region, vulva, and extramammary Paget's disease of the genital region have been treated by Hiratsuka et al. with promising clinical results. The first clinical trial of BNCT in Argentina for the treatment of melanomas was performed in October 2003, and since then several patients with cutaneous melanomas have been treated as part of a Phase II clinical trial at the RA-6 nuclear reactor in Bariloche. The neutron beam has a mixed thermal-hyperthermal neutron spectrum that can be used to treat superficial tumors. The In-Hospital Neutron Irradiator (IHNI) in Beijing has been used to treat a small number of patients with cutaneous melanomas, with a complete response of the primary lesion and no evidence of late radiation injury over a follow-up period of more than 24 months. 
==== Colorectal cancer ==== Two patients with colon cancer that had spread to the liver were treated by Zonta and his co-workers at the University of Pavia in Italy. The first was treated in 2001 and the second in mid-2003. The patients received an i.v. infusion of BPA, followed by removal of the liver (hepatectomy), which was irradiated outside of the body (extracorporeal BNCT) and then re-transplanted into the patient. The first patient did remarkably well and survived for over 4 years after treatment, but the second died of cardiac complications within a month. Clearly, this is a very challenging approach for the treatment of hepatic metastases, and it is unlikely that it will ever be widely used. Nevertheless, the good clinical results in the first patient established proof of principle. Finally, Yanagie and his colleagues at Meiji Pharmaceutical University in Japan have treated several patients with recurrent rectal cancer using BNCT. Although no long-term results have been reported, there was evidence of short-term clinical responses. == Accelerators as Neutron Sources == Accelerators now are the primary source of epithermal neutrons for clinical BNCT. The first papers relating to their possible use were published in the 1980s, and, as summarized by Blue and Yanch, this topic became an active area of research in the early 2000s. However, it was the Fukushima nuclear disaster in Japan in 2011 that gave impetus to their development for clinical use. Today several accelerator-based neutron sources (ABNS) are commercially available or under development. Most existing or planned systems use either the lithium-7 reaction, 7Li(p,n)7Be, or the beryllium-9 reaction, 9Be(p,n)9B, to generate neutrons, though other nuclear reactions also have been considered. The lithium-7 reaction requires a proton accelerator with energies between 1.9 and 3.0 MeV, while the beryllium-9 reaction typically uses accelerators with energies between 5 and 30 MeV. Aside from the lower proton energy that the lithium-7 reaction requires, its main benefit is the lower energy of the neutrons produced. This in turn allows the use of smaller moderators, "cleaner" neutron beams, and reduced neutron activation. Benefits of the beryllium-9 reaction include simplified target design and disposal, long target lifetime, and lower required proton beam current. Since the proton beams for BNCT are quite powerful (roughly 20–100 kW), the neutron-generating target must incorporate cooling systems capable of removing the heat safely and reliably to protect the target from damage. In the case of lithium-7, this requirement is especially important due to the low melting point and chemical volatility of the target material. Liquid jets, micro-channels and rotating targets have been employed to solve this problem. Several researchers have proposed the use of liquid lithium-7 targets in which the target material doubles as the coolant. In the case of beryllium-9, "thin" targets, in which the protons come to rest and deposit much of their energy in the cooling fluid, can be employed. Target degradation due to beam exposure ("blistering") is another problem to be solved, either by using layers of materials resistant to blistering or by spreading the protons over a large target area. A rough sketch of the kinematics and target heat load involved is given below. 
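To make the numbers above concrete, the following minimal sketch estimates the kinematic threshold of the 7Li(p,n)7Be reaction and the heat load a proton beam puts into the target. It assumes the commonly quoted Q-value of about −1.644 MeV for this reaction, and the beam current used is a purely illustrative value, not a specification of any particular machine.

```python
# Minimal sketch (illustrative, not a design calculation): estimate the
# kinematic threshold of 7Li(p,n)7Be and the heat load on the target.
Q_MEV = -1.644   # assumed Q-value of 7Li(p,n)7Be in MeV (endothermic reaction)
M_P = 1.00728    # proton mass, atomic mass units
M_LI7 = 7.01600  # lithium-7 mass, atomic mass units

# Non-relativistic two-body threshold: part of the beam energy is tied up
# in center-of-mass motion, so the proton needs more than |Q|.
e_threshold_mev = -Q_MEV * (M_P + M_LI7) / M_LI7
print(f"threshold proton energy ~ {e_threshold_mev:.2f} MeV")  # ~1.88 MeV

# Target heat load: beam power = proton energy x beam current.
# A MeV-scale energy times a mA-scale current is numerically kilowatts.
e_proton_mev = 2.5  # MeV, inside the 1.9-3.0 MeV range quoted above
current_ma = 30.0   # mA, an illustrative (assumed) accelerator current
print(f"beam power ~ {e_proton_mev * current_ma:.0f} kW")      # 75 kW
```

The threshold comes out near 1.88 MeV, consistent with the 1.9–3.0 MeV operating range quoted above, and a few tens of milliamps at these energies immediately deposits tens of kilowatts in the target, which is why cooling dominates the target engineering discussion.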
Since the nuclear reactions yield neutrons with energies ranging from less than 100 keV to tens of MeV, a Beam Shaping Assembly (BSA) must be used to moderate, filter, reflect and collimate the neutron beam to achieve the desired epithermal energy range, neutron beam size and direction. BSAs are typically composed of a range of materials with desirable nuclear properties for each function. A well-designed BSA should maximize neutron yield per proton while minimizing fast neutron, thermal neutron and gamma contamination. It should also produce a sharply delimited and generally forward-directed beam enabling flexible positioning of the patient relative to the aperture. One key challenge for an ABNS is the duration of treatment: depending on the neutron beam intensity, treatments can take up to an hour or more. Therefore, it is desirable to reduce the treatment time, both for patient comfort during immobilization and to increase the number of patients that can be treated in a 24-hour period. Increasing the neutron beam intensity for the same proton current by adjusting the BSA is often achieved at the cost of reduced beam quality (higher levels of unwanted fast neutrons or gamma rays in the beam, or poor beam collimation). Therefore, increasing the proton current delivered by ABNS BNCT systems remains a key goal of technology development programs. The table below summarizes the existing or planned ABNS installations for clinical use (updated November 2024). == Clinical Studies Using Accelerator Neutron Sources == === Treatment of Recurrent Malignant Gliomas === The single greatest advance in moving BNCT forward clinically has been the introduction of cyclotron-based neutron sources (c-BNS) in Japan. Shin-ichi Miyatake and Shinji Kawabata have led the way with the treatment of patients with recurrent glioblastomas (GBMs). In their Phase II clinical trial, they used the Sumitomo Heavy Industries accelerator at the Osaka Medical College, Kansai BNCT Medical Center to treat a total of 24 patients. These patients ranged in age from 20 to 75 years, and all previously had received standard treatment consisting of surgery followed by chemotherapy with temozolomide (TMZ) and conventional radiation therapy. They were candidates for treatment with BNCT because their tumors had recurred and were progressing in size. They received an intravenous infusion of a proprietary formulation of 10B-enriched boronophenylalanine ("Borofalan," StellaPharma Corporation, Osaka, Japan) prior to neutron irradiation. The primary endpoint of this study was the 1-year survival rate after BNCT, which was 79.2%, and the median overall survival was 18.9 months. Based on these results, it was concluded that c-BNS BNCT was safe and resulted in increased survival of patients with recurrent gliomas. Although there was an increased risk of brain edema due to re-irradiation, this was easily controlled. As a result of this trial, the Sumitomo accelerator was approved by the Japanese regulatory authority having jurisdiction over medical devices, and further studies are being carried out with patients who have recurrent, high-grade (malignant) meningiomas. However, further studies for the treatment of patients with GBMs have been put on hold pending additional analysis of the results. 
=== Treatment of Recurrent or Locally Advanced Cancers of the Head and Neck === Katsumi Hirose and his co-workers at the Southern Tohoku BNCT Research Center in Koriyama, Japan, have recently reported their results after treating 21 patients with recurrent tumors of the head and neck region. All of these patients had received surgery, chemotherapy, and conventional radiation therapy. Eight of them had recurrent squamous cell carcinomas (R-SCC), and 13 had either recurrent (R) or locally advanced (LA) non-squamous cell carcinomas (nSCC). The overall response rate was 71%; the complete response and partial response rates were 50% and 25%, respectively, for patients with R-SCC, and 80% and 62%, respectively, for those with R or LA nSCC. The overall 2-year survival rates for patients with R-SCC or R/LA nSCC were 58% and 100%, respectively. The treatment was well tolerated, and adverse events were those usually associated with conventional radiation treatment of these tumors. These patients had received a proprietary formulation of 10B-enriched boronophenylalanine (Borofalan), which was administered intravenously. Although the manufacturer of the accelerator was not identified, it presumably was the one manufactured by Sumitomo Heavy Industries, Ltd., which was indicated in the Acknowledgements of their report. Based on this Phase II clinical trial, the authors suggested that BNCT using Borofalan and c-BNS was a promising treatment for recurrent head and neck cancers, although further studies would be required to firmly establish this. == The Future == Clinical BNCT was first used to treat highly malignant brain tumors, and subsequently melanomas of the skin that were difficult to treat by surgery. Later, it was used as a type of "salvage" therapy for patients with recurrent tumors of the head and neck region. The clinical results were sufficiently promising to lead to the development of accelerator neutron sources, which will be used almost exclusively in the future. Challenges that need to be met for the future clinical success of BNCT include: optimizing the dosing, delivery paradigms and administration of BPA and BSH; developing more tumor-selective boron delivery agents and evaluating them in large animals and ultimately in humans; accurate, real-time dosimetry to better estimate the radiation doses delivered to the tumor and normal tissues in patients with brain tumors and head and neck cancer; further clinical evaluation of accelerator-based neutron sources for the treatment of brain tumors, head and neck cancer, and other malignancies; and reducing the cost. == See also == Particle therapy (neutrons, protons, or heavy ions such as carbon) Fast neutron therapy Proton therapy
Wikipedia/Boron_neutron_capture_therapy
Brachytherapy is a form of radiation therapy where a sealed radiation source is placed inside or next to the area requiring treatment. The word "brachytherapy" comes from the Greek word βραχύς, brachys, meaning "short-distance" or "short". Brachytherapy is commonly used as an effective treatment for cervical, prostate, breast, esophageal and skin cancer and can also be used to treat tumours in many other body sites. Treatment results have demonstrated that the cancer-cure rates of brachytherapy are either comparable to surgery and external beam radiotherapy (EBRT) or are improved when used in combination with these techniques. Brachytherapy can be used alone or in combination with other therapies such as surgery, EBRT and chemotherapy. Brachytherapy contrasts with unsealed source radiotherapy, in which a therapeutic radionuclide (radioisotope) is injected into the body to chemically localize to the tissue requiring destruction. It also contrasts with EBRT, in which high-energy x-rays (or occasionally gamma-rays from a radioisotope like cobalt-60) are directed at the tumour from outside the body. Brachytherapy instead involves the precise placement of short-range radiation sources (radioisotopes, iodine-125 or caesium-131 for instance) directly at the site of the cancerous tumour. These are enclosed in a protective capsule or wire, which allows the ionizing radiation to escape to treat and kill surrounding tissue but prevents the charge of radioisotope from moving or dissolving in body fluids. The capsule may be removed later, or (with some radioisotopes) it may be allowed to remain in place.: Ch. 1  A feature of brachytherapy is that the irradiation affects only a very localized area around the radiation sources. Exposure to radiation of healthy tissues farther away from the sources is therefore reduced. In addition, if the patient moves or if there is any movement of the tumour within the body during treatment, the radiation sources retain their correct position in relation to the tumour. These characteristics of brachytherapy provide advantages over EBRT – the tumour can be treated with very high doses of localised radiation whilst reducing the probability of unnecessary damage to surrounding healthy tissues.: Ch. 1  A course of brachytherapy can be completed in less time than other radiotherapy techniques. This can help reduce the chance that surviving cancer cells will divide and grow in the intervals between each radiotherapy dose. Patients typically have to make fewer visits to the radiotherapy clinic compared with EBRT, and may receive the treatment as outpatients. This makes treatment accessible and convenient for many patients. These features of brachytherapy mean that most patients are able to tolerate the brachytherapy procedure very well. The global market for brachytherapy reached US$680 million in 2013, of which the high-dose-rate (HDR) and low-dose-rate (LDR) segments accounted for 70%. Microspheres and electronic brachytherapy comprised the remaining 30%. One analysis predicts that the brachytherapy market may reach over US$2.4 billion in 2030, growing by 8% annually, mainly driven by the microspheres market as well as electronic brachytherapy, which is gaining significant interest worldwide as a user-friendly technology. == Medical uses == Brachytherapy is commonly used to treat cancers of the cervix, prostate, breast, and skin. 
Brachytherapy can also be used in the treatment of tumours of the brain, eye, head and neck region (lip, floor of mouth, tongue, nasopharynx and oropharynx), respiratory tract (trachea and bronchi), digestive tract (oesophagus, gall bladder, bile-ducts, rectum, anus), urinary tract (bladder, urethra, penis), female reproductive tract (uterus, vagina, vulva), and soft tissues. As the radiation sources can be precisely positioned at the tumour treatment site, brachytherapy enables a high dose of radiation to be applied to a small area. Furthermore, because the radiation sources are placed in or next to the target tumour, the sources maintain their position in relation to the tumour when the patient moves or if there is any movement of the tumour within the body. Therefore, the radiation sources remain accurately targeted. This enables clinicians to achieve a high level of dose conformity – i.e. ensuring the whole of the tumour receives an optimal level of radiation. It also reduces the risk of damage to healthy tissue, organs or structures around the tumour, thus enhancing the chance of cure and preservation of organ function. The use of HDR brachytherapy enables overall treatment times to be reduced compared with EBRT. Patients receiving brachytherapy generally have to make fewer visits for radiotherapy compared with EBRT, and overall radiotherapy treatment plans can be completed in less time. Many brachytherapy procedures are performed on an outpatient basis. This convenience may be particularly relevant for patients who have to work, older patients, or patients who live some distance from treatment centres, to ensure that they have access to radiotherapy treatment and adhere to treatment plans. Shorter treatment times and outpatient procedures can also help improve the efficiency of radiotherapy clinics. Brachytherapy can be used with the aim of curing the cancer in cases of small or locally advanced tumours, provided the cancer has not metastasized (spread to other parts of the body). In appropriately selected cases, brachytherapy for primary tumours often represents a comparable approach to surgery, achieving the same probability of cure and with similar side effects. However, in locally advanced tumours, surgery may not routinely provide the best chance of cure and is often not technically feasible. In these cases radiotherapy, including brachytherapy, offers the only chance of cure. In more advanced disease stages, brachytherapy can be used as palliative treatment for symptom relief from pain and bleeding. In cases where the tumour is not easily accessible or is too large to ensure an optimal distribution of irradiation to the treatment area, brachytherapy can be combined with other treatments, such as EBRT and/or surgery.: Ch. 1  Combining brachytherapy exclusively with chemotherapy is rare. === Cervical cancer === Brachytherapy is commonly used in the treatment of early or locally confined cervical cancer and is a standard of care in many countries.: Ch. 14  Cervical cancer can be treated with either LDR, pulsed-dose-rate (PDR) or HDR brachytherapy. Used in combination with EBRT, brachytherapy can provide better outcomes than EBRT alone. The precision of brachytherapy enables a high dose of targeted radiation to be delivered to the cervix, while minimising radiation exposure to adjacent tissues and organs. The chances of staying free of disease (disease-free survival) and of staying alive (overall survival) are similar for LDR, PDR and HDR treatments. 
However, a key advantage of HDR treatment is that each dose can be delivered on an outpatient basis with a short administration time, providing greater convenience for many patients. Research shows that locally advanced carcinoma of the cervix is best treated with a combination of EBRT and intracavitary brachytherapy (ICBT). === Prostate cancer === Brachytherapy to treat prostate cancer can be given either as permanent LDR seed implantation or as temporary HDR brachytherapy.: Ch. 20  Permanent seed implantation is suitable for patients with a localised tumour and good prognosis and has been shown to be a highly effective treatment to prevent the cancer from returning. The survival rate is similar to that found with EBRT or surgery (radical prostatectomy), but with fewer side effects such as impotence and incontinence. The procedure can be completed quickly and patients are usually able to go home on the same day as treatment and return to normal activities after one to two days. Permanent seed implantation is often a less invasive treatment option compared to the surgical removal of the prostate. Temporary HDR brachytherapy is a newer approach to treating prostate cancer, but is currently less common than seed implantation. It is predominantly used to provide an extra dose in addition to EBRT (known as "boost" therapy), as it offers an alternative method to deliver a high dose of radiation therapy that conforms to the shape of the tumour within the prostate, while sparing radiation exposure to surrounding tissues. HDR brachytherapy as a boost for prostate cancer also means that the EBRT course can be shorter than when EBRT is used alone. === Breast cancer === Radiation therapy is the standard of care for women who have undergone lumpectomy or mastectomy surgery, and is an integral component of breast-conserving therapy.: Ch. 18  Brachytherapy can be used after surgery, before chemotherapy or palliatively in the case of advanced disease. Brachytherapy to treat breast cancer is usually performed with HDR temporary brachytherapy. Post-surgery, breast brachytherapy can be used as a "boost" following whole breast irradiation (WBI) using EBRT. More recently, brachytherapy alone has been used to deliver APBI (accelerated partial breast irradiation), in which radiation is delivered only to the immediate region surrounding the original tumour. The main benefit of breast brachytherapy compared to whole breast irradiation is that a high dose of radiation can be precisely applied to the tumour while sparing radiation to healthy breast tissues and underlying structures such as the ribs and lungs. APBI can typically be completed over the course of a week. The option of brachytherapy may be particularly important in ensuring that working women, the elderly or women without easy access to a treatment centre are able to benefit from breast-conserving therapy, due to the short treatment course compared with WBI (which often requires more visits over the course of 1–2 months). There are five methods that can be used to deliver breast brachytherapy: interstitial breast brachytherapy, intracavitary breast brachytherapy, intraoperative radiation therapy, permanent breast seed implantation, and non-invasive breast brachytherapy using mammography for target localization and an HDR source. ==== Interstitial breast brachytherapy ==== Interstitial breast brachytherapy involves the temporary placement of several flexible plastic catheters in the breast tissue. 
These are carefully positioned to allow optimal targeting of radiation to the treatment area while sparing the surrounding breast tissue. The catheters are connected to an afterloader, which delivers the planned radiation dose to the treatment area. Interstitial breast brachytherapy can be used as a "boost" after EBRT, or as APBI. ==== Intraoperative radiation therapy ==== Intraoperative radiation therapy (IORT) delivers radiation at the same time as the surgery to remove the tumour (lumpectomy). An applicator is placed in the cavity left after tumour removal and a mobile electronic device generates radiation (either x-rays or electrons) and delivers it via the applicator. Radiation is delivered all at once and the applicator removed before closing the incision. ==== Intracavitary breast brachytherapy ==== Intracavitary breast brachytherapy (also known as "balloon brachytherapy") involves the placement of a single catheter into the breast cavity left after the removal of the tumour (lumpectomy). The catheter can be placed at the time of the lumpectomy or postoperatively. Via the catheter, a balloon is then inflated in the cavity. The catheter is then connected to an afterloader, which delivers the radiation dose through the catheter and into the balloon. Currently, intracavitary breast brachytherapy is only routinely used for APBI. There are also devices that combine the features of interstitial and intracavitary breast brachytherapy (e.g. SAVI). These devices use multiple catheters but are inserted through a single-entry point in the breast. Studies suggest the use of multiple catheters enables physicians to target the radiation more precisely. ==== Permanent breast seed implantation ==== Permanent breast seed implantation (PBSI) implants many radioactive "seeds" (small pellets) into the breast in the area surrounding the site of the tumour, similar to permanent seed prostate brachytherapy. The seeds are implanted in a single 1–2 hour procedure and deliver radiation over the following months as the radioactive material inside them decays. The radiation risk from the implants to others (e.g. a partner or spouse) has been studied and shown to be minimal. === Brain tumors === Surgically Targeted Radiation Therapy (STaRT), branded as GammaTile Therapy, is a type of brachytherapy implant specifically designed for use inside the brain. GammaTile is FDA-cleared to treat newly diagnosed, operable malignant intracranial neoplasms (i.e., brain tumors) and operable recurrent intracranial neoplasms, including meningiomas, metastases, high-grade gliomas, and glioblastomas. In a clinical study, GammaTile Therapy improved local tumor control compared to previous same-site treatments without an increased risk of side effects. === Esophageal cancer === For esophageal cancer, brachytherapy is an effective treatment option, used either for definitive radiotherapy (as a boost) or for palliation. Definitive radiotherapy (boost) can deliver the dose precisely, and palliative treatment can be given to relieve dysphagia. Large-diameter applicators or balloon-type catheters are used with the afterloader to expand the esophagus and facilitate delivery of the radiation dose to the tumor while sparing nearby normal tissue. Brachytherapy following EBRT or surgery has been shown to improve survival and local recurrence rates compared with EBRT or surgery alone in esophageal cancer patients. 
=== Skin cancer === HDR brachytherapy for nonmelanomatous skin cancer, such as basal cell carcinoma and squamous cell carcinoma, provides an alternative treatment option to surgery. This is especially relevant for cancers on the nose, ears, eyelids or lips, where surgery may cause disfigurement or require extensive reconstruction.: Ch. 28  Various applicators can be used to ensure close contact between the radiation source(s) and the skin, which conform to the curvature of the skin and help ensure precision delivery of the optimal irradiation dose.: Ch. 28  Another type of brachytherapy, which has similar advantages to HDR, is the Rhenium-SCT (Skin Cancer Therapy). It makes use of the beta emissions of rhenium-188 to treat basal cell or squamous cell carcinomas. The radiation source is enclosed in a compound which is applied to a thin protective foil directly over the lesion. This way the radiation source can be applied to complex locations while minimizing radiation to healthy tissue. Brachytherapy for skin cancer provides good cosmetic results and clinical efficacy; studies with up to five years of follow-up have shown that brachytherapy is highly effective in terms of local control, and is comparable to EBRT. Treatment times are typically short, providing convenience for patients. It has been suggested that brachytherapy may become a standard of treatment for skin cancer in the near future. === Blood vessels === Brachytherapy can be used in the treatment of coronary in-stent restenosis, in which a catheter is placed inside blood vessels, through which sources are inserted and removed. In treating in-stent restenosis (ISR), drug-eluting stents (DES) have been found to be superior to intracoronary brachytherapy (ICBT). However, there is continued interest in vascular brachytherapy for persistent restenosis in failed stents and vein grafts. The therapy has also been investigated for use in the treatment of peripheral vasculature stenosis and considered for the treatment of atrial fibrillation. == Side effects == The likelihood and nature of potential acute, sub-acute or long-term side-effects associated with brachytherapy depends on the location of the tumour being treated and the type of brachytherapy being used. === Acute === Acute side effects associated with brachytherapy include localised bruising, swelling, bleeding, discharge or discomfort within the implanted region. These usually resolve within a few days following completion of treatment. Patients may also feel fatigued for a short period following treatment. Brachytherapy treatment for cervical or prostate cancer can cause acute and transient urinary symptoms such as urinary retention, urinary incontinence or painful urination (dysuria). Transient increased bowel frequency, diarrhoea, constipation or minor rectal bleeding may also occur. Acute and subacute side effects usually resolve over a matter of days or a few weeks. In the case of permanent (seed) brachytherapy for prostate cancer, there is a small chance that some seeds may migrate out of the treatment region into the bladder or urethra and be passed in the urine. Brachytherapy for skin cancer may result in a shedding of the outer layers of skin (desquamation) around the area of treatment in the weeks following therapy, which typically heals in 5–8 weeks.: Ch. 28  If the cancer is located on the lip, ulceration may occur as a result of brachytherapy, but usually resolves after 4–6 weeks. 
Most of the acute side effects associated with brachytherapy can be treated with medication or through dietary changes, and usually disappear over time (typically a matter of weeks), once the treatment is completed. The acute side effects of HDR brachytherapy are broadly similar to those of EBRT. === Long-term === In a small number of people, brachytherapy may cause long-term side effects due to damage or disruption of adjacent tissues or organs. Long-term side effects are usually mild or moderate in nature. For example, urinary and digestive problems may persist as a result of brachytherapy for cervical or prostate cancer, and may require ongoing management. Brachytherapy for prostate cancer may cause erectile dysfunction in approximately 15–30% of patients.: Ch. 20  However, the risk of erectile dysfunction is related to age (older men are at a greater risk than younger men) and also to the level of erectile function prior to receiving brachytherapy. In patients who do experience erectile dysfunction, the majority of cases can successfully be treated with drugs such as Viagra.: Ch. 20  Importantly, the risk of erectile dysfunction after brachytherapy is less than after radical prostatectomy. Brachytherapy for breast or skin cancer may cause scar tissue to form around the treatment area. In the case of breast brachytherapy, fat necrosis may occur as a result of fatty acids entering the breast tissues. This can cause the breast tissue to become swollen and tender. Fat necrosis is a benign condition and typically occurs 4–12 months after treatment and affects about 2% of patients. == Safety around others == Patients often ask if they need to take special safety precautions around family and friends after receiving brachytherapy. If temporary brachytherapy is used, no radioactive sources remain in the body after treatment. Therefore, there is no radiation risk to friends or family from being in close proximity with them. If permanent brachytherapy is used, low dose radioactive sources (seeds) are left in the body after treatment – the radiation levels are very low and decrease over time. In addition, the irradiation only affects tissues within a few millimetres of the radioactive sources (i.e. the tumour being treated). As a precaution, some people receiving permanent brachytherapy may be advised not to hold any small children or be too close to pregnant women for a short time after treatment. Radiation oncologists or nurses can provide specific instructions to patients and advise on how long they need to be careful. == Types == Different types of brachytherapy can be defined according to (1) the placement of the radiation sources in the target treatment area, (2) the rate or 'intensity' of the irradiation dose delivered to the tumour, and (3) the duration of dose delivery. === Source placement === The two main types of brachytherapy treatment in terms of the placement of the radioactive source are interstitial and contact. In the case of interstitial brachytherapy, the sources are placed directly in the target tissue of the affected site, such as the prostate or breast.: Ch. 1  Contact brachytherapy involves placement of the radiation source in a space next to the target tissue.: Ch. 1  This space may be a body cavity (intracavitary brachytherapy) such as the cervix, uterus or vagina; a body lumen (intraluminal brachytherapy) such as the trachea or oesophagus; or externally (surface brachytherapy) such as the skin.: Ch. 1 
A radiation source can also be placed in blood vessels (intravascular brachytherapy) for the treatment of coronary in-stent restenosis. === Dose rate === The dose rate of brachytherapy refers to the level or 'intensity' with which the radiation is delivered to the surrounding medium and is expressed in grays per hour (Gy/h). Low-dose rate (LDR) brachytherapy involves implanting radiation sources that emit radiation at a rate of up to 2 Gy·h−1. LDR brachytherapy is commonly used for cancers of the oral cavity, oropharynx, sarcomas: Ch. 27  and prostate cancer.: Ch. 20  Medium-dose rate (MDR) brachytherapy is characterized by a medium rate of dose delivery, ranging between 2 Gy·h−1 and 12 Gy·h−1. High-dose rate (HDR) brachytherapy is when the rate of dose delivery exceeds 12 Gy·h−1. The most common applications of HDR brachytherapy are in tumours of the cervix, esophagus, lungs, breasts and prostate. Most HDR treatments are performed on an outpatient basis, but this is dependent on the treatment site. Pulsed-dose rate (PDR) brachytherapy involves short pulses of radiation, typically once an hour, to simulate the overall rate and effectiveness of LDR treatment. Typical tumour sites treated by PDR brachytherapy are gynaecological: Ch. 14  and head and neck cancers. The calculation of radiation dose from radioactive seeds is crucial in the planning and administration of brachytherapy treatments. Most modern calculations are done using the formalism published by the American Association of Physicists in Medicine (the TG-43 formalism). For a calculation point at radial distance r and angle θ from the long axis of the seed, this formalism uses five parameters. Strength of the source: how much radiation is being emitted by the seed, expressed as air-kerma strength and denoted by {\displaystyle S_{k}}. Dose rate constant of the source: how much dose the seed will deliver to the reference point over a certain period of time, denoted by {\displaystyle \Lambda }. Geometry factor: how the shape of the seed will affect the dose at points away from the reference point, denoted by {\displaystyle G(r,\theta )}. Anisotropy function: how much radiation will be stopped before passing out of the seed, denoted by {\displaystyle F(r,\theta )}. Radial dose function: how the radiation will interact with the material surrounding the seed, denoted by {\displaystyle g(r)}. The equation which links these parameters is {\displaystyle D(r,\theta )=S_{k}\Lambda {\frac {G(r,\theta )}{G(r_{0},\theta _{0})}}g(r)F(r,\theta )} A minimal computational sketch of this formula is given at the end of this section. === Duration of dose delivery === The placement of radiation sources in the target area can be temporary or permanent. Temporary brachytherapy involves placement of radiation sources for a set duration (usually a number of minutes or hours) before being withdrawn.: Ch. 1  The specific treatment duration will depend on many different factors, including the required rate of dose delivery and the type, size and location of the cancer. In LDR and PDR brachytherapy, the source typically stays in place up to 24 hours before being removed, while in HDR brachytherapy this time is typically a few minutes. Permanent brachytherapy, also known as seed implantation, involves placing small LDR radioactive seeds or pellets (about the size of a grain of rice) in the tumour or treatment site and leaving them there permanently to gradually decay. Over a period of weeks or months, the level of radiation emitted by the sources will decline to almost zero. The inactive seeds then remain in the treatment site with no lasting effect. Permanent brachytherapy is most commonly used in the treatment of prostate cancer. 
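As an illustration of how the equation above is evaluated in practice, the following minimal sketch implements the line-source form of the formalism for a single seed. All numerical values here (air-kerma strength, dose-rate constant, and the radial-dose and anisotropy tables) are illustrative placeholders, not consensus data for any real seed model, and the anisotropy function is simplified to depend on angle only rather than on both r and θ.

```python
import math
from bisect import bisect_left

def interp(x, xs, ys):
    """Piecewise-linear interpolation, clamped at the table ends."""
    if x <= xs[0]:
        return ys[0]
    if x >= xs[-1]:
        return ys[-1]
    i = bisect_left(xs, x)
    t = (x - xs[i - 1]) / (xs[i] - xs[i - 1])
    return ys[i - 1] + t * (ys[i] - ys[i - 1])

def geometry_factor(r, theta, L):
    """Line-source geometry factor G(r, theta) for active length L (cm);
    theta is measured from the seed's long axis."""
    h = r * math.sin(theta)   # perpendicular distance to the seed axis
    z = r * math.cos(theta)   # coordinate along the seed axis
    if h < 1e-9:              # on-axis limit of the line-source formula
        return 1.0 / (r * r - L * L / 4.0)
    # beta: angle subtended at the point by the seed's active length
    beta = abs(math.atan2(h, z - L / 2.0) - math.atan2(h, z + L / 2.0))
    return beta / (L * h)

def dose_rate(r, theta, S_k, Lam, g_tab, F_tab, L=0.3):
    """Dose rate (cGy/h) at radius r (cm), angle theta (rad) from one seed."""
    G_ref = geometry_factor(1.0, math.pi / 2.0, L)  # r0 = 1 cm, theta0 = 90 deg
    g = interp(r, *g_tab)      # radial dose function g(r)
    F = interp(theta, *F_tab)  # anisotropy, simplified to angle only
    return S_k * Lam * geometry_factor(r, theta, L) / G_ref * g * F

# Illustrative, made-up tables -- NOT consensus data for any real seed model.
g_tab = ([0.5, 1.0, 2.0, 3.0, 5.0], [1.04, 1.00, 0.87, 0.73, 0.45])
F_tab = ([0.0, math.pi / 6, math.pi / 3, math.pi / 2], [0.60, 0.85, 0.97, 1.00])

S_k = 0.5    # air-kerma strength in U (illustrative)
Lam = 0.965  # dose-rate constant in cGy/(h*U) (illustrative)
print(f"{dose_rate(2.0, math.pi / 2, S_k, Lam, g_tab, F_tab):.4f} cGy/h at r = 2 cm")
```

A real planning system would interpolate F(r, θ) in both variables from published consensus tables and sum this single-seed calculation over every implanted seed or dwell position.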
== Procedure == === Initial planning === To accurately plan the brachytherapy procedure, a thorough clinical examination is performed to understand the characteristics of the tumour. In addition, a range of imaging modalities can be used to visualise the shape and size of the tumour and its relation to surrounding tissues and organs. These include x-ray radiography, ultrasound, computed axial tomography (CT or CAT) scans, and magnetic resonance imaging (MRI).: Ch. 5  The data from many of these sources can be used to create a 3D visualisation of the tumour and the surrounding tissues.: Ch. 5  Using this information, a plan of the optimal distribution of the radiation sources can be developed. This includes consideration of how the source carriers (applicators), which are used to deliver the radiation to the treatment site, should be placed and positioned.: Ch. 5  Applicators are non-radioactive and are typically needles or plastic catheters. The specific type of applicator used will depend on the type of cancer being treated and the characteristics of the target tumour.: Ch. 5  This initial planning helps to ensure that 'cold spots' (too little irradiation) and 'hot spots' (too much irradiation) are avoided during treatment, as these can respectively result in treatment failure and side-effects. === Insertion === Before radioactive sources can be delivered to the tumour site, the applicators have to be inserted and correctly positioned in line with the initial planning. Imaging techniques, such as x-ray, fluoroscopy and ultrasound are typically used to help guide the placement of the applicators to their correct positions and to further refine the treatment plan.: Ch. 5  CT scans and MRI can also be used.: Ch. 5  Once the applicators are inserted, they are held in place against the skin using sutures or adhesive tape to prevent them from moving. Once the applicators are confirmed as being in the correct position, further imaging can be performed to guide detailed treatment planning.: Ch. 5  === Creation of a virtual patient === The images of the patient with the applicators in situ are imported into treatment planning software and the patient is brought into a dedicated shielded room for treatment. The treatment planning software enables multiple 2D images of the treatment site to be translated into a 3D 'virtual patient', within which the position of the applicators can be defined.: Ch. 5  The spatial relationships between the applicators, the treatment site and the surrounding healthy tissues within this 'virtual patient' are a copy of the relationships in the actual patient. === Optimizing the irradiation plan === To identify the optimal spatial and temporal distribution of radiation sources within the applicators of the implanted tissue or cavity, the treatment planning software allows virtual radiation sources to be placed within the virtual patient. The software shows a graphical representation of the distribution of the irradiation. This serves as a guide for the brachytherapy team to refine the distribution of the sources and provide a treatment plan that is optimally tailored to the anatomy of each patient before actual delivery of the irradiation begins. This approach is sometimes called 'dose-painting'. 
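The optimization step described above can be thought of as fitting source dwell times to a prescribed dose distribution. The sketch below shows one simple way such a fit could work, as non-negative least squares solved by projected gradient descent; it is a toy illustration with made-up numbers, not the algorithm of any particular planning system.

```python
import numpy as np

def optimize_dwell_times(A, d_target, iters=5000):
    """Fit non-negative dwell times t so the delivered dose A @ t approximates
    the prescription d_target, via projected gradient descent on ||A t - d||^2.
    A[i, j] = dose to calculation point i per unit dwell time at position j."""
    lr = 1.0 / np.linalg.norm(A, 2) ** 2  # step size below 1/L for this objective
    t = np.zeros(A.shape[1])
    for _ in range(iters):
        grad = A.T @ (A @ t - d_target)
        t = np.maximum(t - lr * grad, 0.0)  # dwell times cannot be negative
    return t

# Toy geometry: 3 candidate dwell positions, 4 dose points (made-up kernel).
A = np.array([[1.00, 0.20, 0.05],
              [0.30, 1.00, 0.30],
              [0.05, 0.30, 1.00],
              [0.02, 0.10, 0.40]])
d = np.array([2.0, 2.0, 2.0, 0.5])  # prescription; last point is healthy tissue
t = optimize_dwell_times(A, d)
print("dwell times:", np.round(t, 3))
print("delivered dose:", np.round(A @ t, 2))
```

Clinical optimizers add many refinements (hard dose-volume constraints on organs at risk, dwell-time smoothing, and faster solvers), but the underlying idea of balancing target coverage against healthy-tissue dose is the same.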
=== Treatment delivery === The radiation sources used for brachytherapy are always enclosed within a non-radioactive capsule. The sources can be delivered manually, but are more commonly delivered through a technique known as 'afterloading'. Manual delivery of brachytherapy is limited to a few LDR applications, due to the risk of radiation exposure to clinical staff. In contrast, afterloading involves the accurate positioning of non-radioactive applicators in the treatment site, which are subsequently loaded with the radiation sources. In manual afterloading, the source is delivered into the applicator by the operator. Remote afterloading systems provide protection from radiation exposure to healthcare professionals by securing the radiation source in a shielded safe. Once the applicators are correctly positioned in the patient, they are connected to an 'afterloader' machine (containing the radioactive sources) through a series of connecting guide tubes. The treatment plan is sent to the afterloader, which then controls the delivery of the sources along the guide tubes into the pre-specified positions within the applicator. This process is initiated only once staff have left the treatment room. The sources remain in place for a pre-specified length of time, again following the treatment plan, following which they are returned along the tubes to the afterloader. On completion of delivery of the radioactive sources, the applicators are carefully removed from the body. Patients typically recover quickly from the brachytherapy procedure, enabling it to often be performed on an outpatient basis. Between 2003 and 2012 in United States community hospitals, the rate of hospital stays with brachytherapy had a 24.4 percent average annual decrease among adults aged 45–64 years and a 27.3 percent average annual decrease among adults aged 65–84 years. Brachytherapy was the operating room (OR) procedure with the greatest change in occurrence among hospital stays paid by Medicare and private insurance. == Radiation sources == Commonly used radiation sources (radionuclides) for brachytherapy include caesium-131, cobalt-60, iodine-125, iridium-192, and palladium-103. == History == Brachytherapy dates back to 1901 (shortly after the discovery of radioactivity by Henri Becquerel in 1896) when Pierre Curie suggested to Henri-Alexandre Danlos that a radioactive source could be inserted into a tumour. It was found that the radiation caused the tumour to shrink. Independently, Alexander Graham Bell also suggested the use of radiation in this way. In the early twentieth century, techniques for the application of brachytherapy were pioneered at the Curie Institute in Paris by Danlos and at St Luke's and Memorial Hospital in New York by Robert Abbe.: Ch. 1  Working with the Curies in their radium research laboratory at the University of Paris, American physicist William Duane refined a technique for extracting radon-222 gas from radium sulfate solutions. Solutions containing 1 gram of radium were "milked" to create radon "seeds" of about 20 millicuries each. These "seeds" were distributed throughout Paris for use in an early form of brachytherapy named endocurietherapy. Duane perfected this "milking" technique during his time in Paris and referred to the device as a "radium cow". Duane returned to the United States in 1913 and worked in a joint role as assistant professor of physics at Harvard and Research Fellow in Physics of the Harvard Cancer Commission. The Cancer Commission was founded in 1901 and hired Duane to investigate the usage of radium emanations in the treatment of cancer. 
In 1915 he built Boston's first "radium cow" and thousands of patients were treated with the radon-222 generated from it. Interstitial radium therapy was common in the 1930s.: Ch. 1  Gold seeds filled with radon were used from as early as 1942 until at least 1958. Gold shells were selected by Gino Failla around 1920 to shield beta rays while passing gamma rays. Cobalt needles were also used briefly after World War II.: Ch. 1  Radon and cobalt were replaced by radioactive tantalum and gold, before iridium rose to prominence.: Ch. 1  First used in 1958, iridium is the most commonly used artificial source for brachytherapy today.: Ch. 1  Following initial interest in brachytherapy in Europe and the US, its use declined in the middle of the twentieth century due to the problem of radiation exposure to operators from the manual application of the radioactive sources. However, the development of remote afterloading systems, which allow the radiation to be delivered from a shielded safe, and the use of new radioactive sources in the 1950s and 1960s, reduced the risk of unnecessary radiation exposure to the operator and patients. This, together with more recent advancements in three-dimensional imaging modalities, computerised treatment planning systems and delivery equipment, has made brachytherapy a safe and effective treatment for many types of cancer today.: Ch. 1  == Environmental hazards and orphanhood == Due to their small size and the poor controls over their fate during the early decades of their use, a significant risk of loss exists for brachytherapy seeds. At least two instances have been recorded of seeds escaping from the tight radiological control under which they are held in hospitals, becoming what are known as orphaned sources. The first occurred sometime in the 1930s or 1940s, when a number of gold seeds filled with highly radioactive radon-222 were illicitly melted down and mixed with other gold scrap. This scrap gold was contaminated with that isotope's long-lived daughter nuclides lead-210, bismuth-210 and polonium-210. It was then allowed to enter jewelry production in Upstate New York, where at least 144 pieces of jewelry were fabricated using this radioactive gold. Over the following decades there were several instances of fingers that required amputation to save patients from cancers caused by radioactive wedding, engagement and class rings. One Pennsylvania man was reported to have died from metastatic cancer caused by radioactive gold jewelry. Although the phenomenon of gold jewelry made from recycled radon seeds retaining their radioactivity had been noted as early as the mid-1930s by Edith Quimby, one of the founders of nuclear medicine, and the distribution and serious medical consequences of radioactive gold jewelry reported in medical journals as early as April of 1967, it was not until February of 1981 that New York State's Department of Health took concerted action to identify items made from the contaminated gold. The second occurred in Prague, sometime before 2011. In September of that year a computer scientist named Pavel Bykov, who happened to have a small Geiger counter built into his wristwatch, noticed abnormal readings in the Podolí neighborhood playground where he had taken his children. After retrieving a larger counter from his home, Bykov determined that a highly radioactive source was buried in the playground. The park was evacuated by authorities soon after. Specialized excavation equipment recovered a 20 × 2 mm radium-226 needle from the playground's dirt. 
The source retained a high degree of activity and was found to emit 500 μSv per hour at a distance of one meter, so that an hour's exposure at that distance delivered a dose equivalent to roughly one year of background exposure. == See also == External beam radiotherapy Prostate brachytherapy Targeted intra-operative radiotherapy Unsealed source radiotherapy Nuclear medicine Intraoperative radiation therapy Contact X-ray brachytherapy (also called "electronic brachytherapy")
Wikipedia/Brachytherapy
Molecular machines are a class of molecules typically described as an assembly of a discrete number of molecular components intended to produce mechanical movements in response to specific stimuli, mimicking macromolecular devices such as switches and motors. Naturally occurring or biological molecular machines are responsible for vital living processes such as DNA replication and ATP synthesis. Kinesins and ribosomes are examples of molecular machines, and they often take the form of multi-protein complexes. For the last several decades, scientists have attempted, with varying degrees of success, to miniaturize machines found in the macroscopic world. The first example of an artificial molecular machine (AMM) was reported in 1994, featuring a rotaxane with a ring and two different possible binding sites. In 2016 the Nobel Prize in Chemistry was awarded to Jean-Pierre Sauvage, Sir J. Fraser Stoddart, and Bernard L. Feringa for the design and synthesis of molecular machines. AMMs have diversified rapidly over the past few decades and their design principles, properties, and characterization methods have been outlined more clearly. A major starting point for the design of AMMs is to exploit the existing modes of motion in molecules, such as rotation about single bonds or cis-trans isomerization. Different AMMs are produced by introducing various functionalities, such as bistability to create switches. A broad range of AMMs has been designed, featuring different properties and applications; some of these include molecular motors, switches, and logic gates. A wide range of applications have been demonstrated for AMMs, including those integrated into polymeric, liquid crystal, and crystalline systems for varied functions (such as materials research, homogeneous catalysis and surface chemistry). == Terminology == Several definitions describe a "molecular machine" as a class of molecules typically described as an assembly of a discrete number of molecular components intended to produce mechanical movements in response to specific stimuli. The expression is often more generally applied to molecules that simply mimic functions that occur at the macroscopic level. A few prime requirements for a molecule to be considered a "molecular machine" are: the presence of moving parts, the ability to consume energy, and the ability to perform a task. Molecular machines differ from other stimuli-responsive compounds that can produce motion (such as cis-trans isomers) in their relatively larger amplitude of movement (potentially due to chemical reactions) and the presence of a clear external stimulus to regulate the movements (as compared to random thermal motion). Piezoelectric, magnetostrictive, and other materials that produce a movement due to external stimuli on a macro-scale are generally not included, since despite the molecular origin of the motion the effects are not usable on the molecular scale. This definition generally applies to synthetic molecular machines, which have historically gained inspiration from the naturally occurring biological molecular machines (also referred to as "nanomachines"). Biological machines are considered to be nanoscale devices (such as molecular proteins) in a living system that convert various forms of energy to mechanical work in order to drive crucial biological processes such as intracellular transport, muscle contractions, ATP generation and cell division. == History == "What would be the utility of such machines? Who knows? 
I cannot see exactly what would happen, but I can hardly doubt that when we have some control of the arrangement of things on a molecular scale we will get an enormously greater range of possible properties that substances can have, and of the different things we can do." (Richard Feynman, There's Plenty of Room at the Bottom, 1959) Biological molecular machines have been known and studied for years given their vital role in sustaining life, and have served as inspiration for synthetically designed systems with similar useful functionality. The advent of conformational analysis, or the study of conformers to analyze complex chemical structures, in the 1950s gave rise to the idea of understanding and controlling relative motion within molecular components for further applications. This led to the design of "proto-molecular machines" featuring conformational changes such as the cog-wheeling of the aromatic rings in triptycenes. By 1980, scientists could achieve desired conformations using external stimuli and utilize this for different applications. A major example is the design of a photoresponsive crown ether containing an azobenzene unit, which could switch between cis and trans isomers on exposure to light and hence tune the cation-binding properties of the ether. In his seminal 1959 lecture There's Plenty of Room at the Bottom, Richard Feynman alluded to the idea and applications of molecular devices designed artificially by manipulating matter at the atomic level. These ideas were developed further by Eric Drexler during the 1970s, who advanced concepts based on molecular nanotechnology such as nanoscale "assemblers", though their feasibility was disputed. Though these events served as inspiration for the field, the actual breakthrough in practical approaches to synthesize artificial molecular machines (AMMs) took place in 1991 with the invention of a "molecular shuttle" by Sir Fraser Stoddart. Building upon the assembly of mechanically linked molecules such as catenanes and rotaxanes as developed by Jean-Pierre Sauvage in the early 1980s, this shuttle features a rotaxane with a ring that can move across an "axle" between two ends or possible binding sites (hydroquinone units). This design realized the well-defined motion of a molecular unit across the length of the molecule for the first time. In 1994, an improved design allowed control over the motion of the ring by pH variation or electrochemical methods, making it the first example of an AMM. Here the two binding sites are a benzidine and a biphenol unit; the cationic ring typically prefers to stay over the benzidine ring, but moves over to the biphenol group when the benzidine gets protonated at low pH or electrochemically oxidized. In 1998, a study captured the rotary motion of a decacyclene molecule on a copper surface using a scanning tunneling microscope. Over the following decade, a broad variety of AMMs responding to various stimuli were invented for different applications. In 2016, the Nobel Prize in Chemistry was awarded to Sauvage, Stoddart, and Bernard L. Feringa for the design and synthesis of molecular machines. == Artificial molecular machines == Over the past few decades, AMMs have diversified rapidly and their design principles, properties, and characterization methods have been outlined more clearly. A major starting point for the design of AMMs is to exploit the existing modes of motion in molecules. For instance, single bonds can be visualized as axes of rotation, as can metallocene complexes. 
Bending or V-like shapes can be achieved by incorporating double bonds, which can undergo cis-trans isomerization in response to certain stimuli (typically irradiation with a suitable wavelength), as seen in numerous designs consisting of stilbene and azobenzene units. Similarly, ring-opening and -closing reactions such as those seen for spiropyran and diarylethene can also produce curved shapes. Another common mode of movement is the circumrotation of rings relative to one another, as observed in mechanically interlocked molecules (primarily catenanes). While this type of rotation cannot be accessed beyond the molecule itself (because the rings are confined within one another), rotaxanes can overcome this as the rings can undergo translational movements along a dumbbell-like axis. Another line of AMMs consists of biomolecules such as DNA and proteins as part of their design, making use of phenomena like protein folding and unfolding. AMM designs have diversified significantly since the early days of the field. A major route is the introduction of bistability to produce molecular switches, featuring two distinct configurations for the molecule to convert between. This has been perceived as a step forward from the original molecular shuttle, which consisted of two identical sites for the ring to move between without any preference, in a manner analogous to the ring flip in an unsubstituted cyclohexane. If these two sites are different from each other in terms of features like electron density, this can give rise to weak or strong recognition sites as in biological systems — such AMMs have found applications in catalysis and drug delivery. This switching behavior has been further optimized to harness useful work that is otherwise lost when a typical switch returns to its original state. Inspired by the use of kinetic control to produce work in natural processes, molecular motors are designed to have a continuous energy influx to keep them away from equilibrium to deliver work. Various energy sources are employed to drive molecular machines today, but this was not the case during the early years of AMM development. Though the movements in AMMs were regulated relative to the random thermal motion generally seen in molecules, they could not at first be controlled or manipulated as desired. This led to the addition of stimuli-responsive moieties in AMM design, so that externally applied non-thermal sources of energy could drive molecular motion and hence allow control over the properties. Chemical energy (or "chemical fuels") was an attractive option at the beginning, given the broad array of reversible chemical reactions (heavily based on acid-base chemistry) to switch molecules between different states. However, this comes with the issue of practically regulating the delivery of the chemical fuel and the removal of the waste generated, so as to maintain the efficiency of the machine as in biological systems. Though some AMMs have found ways to circumvent this, more recently waste-free reactions such as those based on electron transfer or isomerization have gained attention (such as redox-responsive viologens). Eventually, several different forms of energy (electric, magnetic, optical and so on) have become the primary energy sources used to power AMMs, even producing autonomous systems such as light-driven motors. === Types === A broad range of AMM types has been reported, including molecular motors, switches, and logic gates. == Biological molecular machines == Many macromolecular machines are found within cells, often in the form of multi-protein complexes. 
Examples of biological machines include motor proteins such as myosin, which is responsible for muscle contraction, kinesin, which moves cargo inside cells away from the nucleus along microtubules, and dynein, which moves cargo inside cells towards the nucleus and produces the axonemal beating of motile cilia and flagella. "[I]n effect, the [motile cilium] is a nanomachine composed of perhaps over 600 proteins in molecular complexes, many of which also function independently as nanomachines ... Flexible linkers allow the mobile protein domains connected by them to recruit their binding partners and induce long-range allostery via protein domain dynamics." Other biological machines are responsible for energy production, for example ATP synthase, which harnesses energy from proton gradients across membranes to drive a turbine-like motion used to synthesise ATP, the energy currency of a cell. Still other machines are responsible for gene expression, including DNA polymerases for replicating DNA, RNA polymerases for producing mRNA, the spliceosome for removing introns, and the ribosome for synthesising proteins. These machines and their nanoscale dynamics are far more complex than any molecular machines that have yet been artificially constructed. Biological machines have potential applications in nanomedicine. For example, they could be used to identify and destroy cancer cells. Molecular nanotechnology is a speculative subfield of nanotechnology regarding the possibility of engineering molecular assemblers, machines which could re-order matter at a molecular or atomic scale. Nanomedicine would make use of these nanorobots, introduced into the body, to repair or detect damage and infections, but these are considered to be far beyond current capabilities. == Research and applications == Advances in this area are inhibited by the lack of synthetic methods. In this context, theoretical modeling has emerged as a pivotal tool to understand the self-assembly or -disassembly processes in these systems. Possible applications have been demonstrated for AMMs, including those integrated into polymeric, liquid crystal, and crystalline systems for varied functions. Homogeneous catalysis is a prominent example, especially in areas like asymmetric synthesis, utilizing noncovalent interactions and biomimetic allosteric catalysis. AMMs have been pivotal in the design of several stimuli-responsive smart materials, such as 2D and 3D self-assembled materials and nanoparticle-based systems, for versatile applications ranging from 3D printing to drug delivery. AMMs are gradually moving from conventional solution-phase chemistry to surfaces and interfaces. For instance, AMM-immobilized surfaces (AMMISs) are a novel class of functional materials consisting of AMMs attached to inorganic surfaces, forming features like self-assembled monolayers; this gives rise to tunable properties such as fluorescence, aggregation and drug-release activity. Most of these "applications" remain at the proof-of-concept level. Challenges in streamlining macroscale applications include autonomous operation, the complexity of the machines, stability during the synthesis of the machines, and the working conditions. == See also == Supramolecular chemistry Technorganic == References ==
Wikipedia/Nanophysics
Radiation therapy or radiotherapy (RT, RTx, or XRT) is a treatment using ionizing radiation, generally provided as part of cancer therapy to either kill or control the growth of malignant cells. It is normally delivered by a linear particle accelerator. Radiation therapy may be curative in a number of types of cancer if they are localized to one area of the body, and have not spread to other parts. It may also be used as part of adjuvant therapy, to prevent tumor recurrence after surgery to remove a primary malignant tumor (for example, early stages of breast cancer). Radiation therapy is synergistic with chemotherapy, and has been used before, during, and after chemotherapy in susceptible cancers. The subspecialty of oncology concerned with radiotherapy is called radiation oncology. A physician who practices in this subspecialty is a radiation oncologist. Radiation therapy is commonly applied to the cancerous tumor because of its ability to control cell growth. Ionizing radiation works by damaging the DNA of cancerous tissue leading to cellular death. To spare normal tissues (such as skin or organs which radiation must pass through to treat the tumor), shaped radiation beams are aimed from several angles of exposure to intersect at the tumor, providing a much larger absorbed dose there than in the surrounding healthy tissue. Besides the tumor itself, the radiation fields may also include the draining lymph nodes if they are clinically or radiologically involved with the tumor, or if there is thought to be a risk of subclinical malignant spread. It is necessary to include a margin of normal tissue around the tumor to allow for uncertainties in daily set-up and internal tumor motion. These uncertainties can be caused by internal movement (for example, respiration and bladder filling) and movement of external skin marks relative to the tumor position. Radiation oncology is the medical specialty concerned with prescribing radiation, and is distinct from radiology, the use of radiation in medical imaging and diagnosis. Radiation may be prescribed by a radiation oncologist with intent to cure or for adjuvant therapy. It may also be used as palliative treatment (where cure is not possible and the aim is for local disease control or symptomatic relief) or as therapeutic treatment (where the therapy has survival benefit and can be curative). It is also common to combine radiation therapy with surgery, chemotherapy, hormone therapy, immunotherapy or some mixture of the four. Most common cancer types can be treated with radiation therapy in some way. The precise treatment intent (curative, adjuvant, neoadjuvant therapeutic, or palliative) will depend on the tumor type, location, and stage, as well as the general health of the patient. Total body irradiation (TBI) is a radiation therapy technique used to prepare the body to receive a bone marrow transplant. Brachytherapy, in which a radioactive source is placed inside or next to the area requiring treatment, is another form of radiation therapy that minimizes exposure to healthy tissue during procedures to treat cancers of the breast, prostate, and other organs. Radiation therapy has several applications in non-malignant conditions, such as the treatment of trigeminal neuralgia, acoustic neuromas, severe thyroid eye disease, pterygium, pigmented villonodular synovitis, and prevention of keloid scar growth, vascular restenosis, and heterotopic ossification. 
The use of radiation therapy in non-malignant conditions is limited partly by worries about the risk of radiation-induced cancers. == Medical uses == An estimated half of the roughly 1.2 million invasive cancer cases diagnosed in the United States in 2022 included radiation therapy in the treatment program. Different cancers respond to radiation therapy in different ways. The response of a cancer to radiation is described by its radiosensitivity. Highly radiosensitive cancer cells are rapidly killed by modest doses of radiation. These include leukemias, most lymphomas, and germ cell tumors. The majority of epithelial cancers are only moderately radiosensitive, and require a significantly higher dose of radiation (60–70 Gy) to achieve a radical cure. Some types of cancer are notably radioresistant, that is, much higher doses are required to produce a radical cure than may be safe in clinical practice. Renal cell cancer and melanoma are generally considered to be radioresistant, but radiation therapy is still a palliative option for many patients with metastatic melanoma. Combining radiation therapy with immunotherapy is an active area of investigation and has shown some promise for melanoma and other cancers. It is important to distinguish the radiosensitivity of a particular tumor, which to some extent is a laboratory measure, from the radiation "curability" of a cancer in actual clinical practice. For example, leukemias are not generally curable with radiation therapy, because they are disseminated through the body. Lymphoma may be radically curable if it is localized to one area of the body. Similarly, many of the common, moderately radioresponsive tumors are routinely treated with curative doses of radiation therapy if they are at an early stage: for example, non-melanoma skin cancer, head and neck cancer, breast cancer, non-small cell lung cancer, cervical cancer, anal cancer, and prostate cancer. With the exception of oligometastatic disease, metastatic cancers are incurable with radiation therapy because it is not possible to treat the whole body. Modern radiation therapy relies on a CT scan to identify the tumor and surrounding normal structures and to perform dose calculations for the creation of a complex radiation treatment plan. The patient receives small skin marks to guide the placement of treatment fields. Patient positioning is crucial at this stage, as the patient will have to be placed in an identical position during each treatment. Many patient positioning devices have been developed for this purpose, including masks and cushions which can be molded to the patient. Image-guided radiation therapy is a method that uses imaging to correct for positional errors of each treatment session. Building on the principles of image-guided radiation therapy, daily MR-guided adaptive radiation therapy (MRgART) offers many dosimetric advantages over the traditional single-plan RT workflow, including the ability to conform the high-dose region to the tumor as the anatomy changes throughout the course of RT. The response of a tumor to radiation therapy is also related to its size. Due to complex radiobiology, very large tumors respond less to radiation than smaller tumors or microscopic disease. Various strategies are used to overcome this effect. The most common technique is surgical resection prior to radiation therapy. This is most commonly seen in the treatment of breast cancer with wide local excision or mastectomy followed by adjuvant radiation therapy.
Another method is to shrink the tumor with neoadjuvant chemotherapy prior to radical radiation therapy. A third technique is to enhance the radiosensitivity of the cancer by giving certain drugs during a course of radiation therapy. Examples of radiosensitizing drugs include cisplatin, nimorazole, and cetuximab. The impact of radiotherapy varies between different types of cancer and different groups. For example, for breast cancer after breast-conserving surgery, radiotherapy has been found to halve the rate at which the disease recurs. In pancreatic cancer, radiotherapy has increased survival times for inoperable tumors. == Side effects == Radiation therapy (RT) is itself painless, but carries risks of iatrogenic side effects. Many low-dose palliative treatments (for example, radiation therapy to bony metastases) cause minimal or no side effects, although short-term pain flare-up can be experienced in the days following treatment due to oedema compressing nerves in the treated area. Higher doses can cause varying side effects during treatment (acute side effects), in the months or years following treatment (long-term side effects), or after re-treatment (cumulative side effects). The nature, severity, and longevity of side effects depend on the organs that receive the radiation, the treatment itself (type of radiation, dose, fractionation, concurrent chemotherapy), and the patient. Serious radiation complications may occur in 5% of RT cases. Acute (near immediate) or sub-acute (2 to 3 months post RT) radiation side effects may develop after 50 Gy RT dosing. Late or delayed radiation injury (6 months to decades) may develop after 65 Gy. Side effects from radiation are usually limited to the area of the patient's body that is under treatment. Side effects are dose-dependent; for example, higher doses of head and neck radiation can be associated with cardiovascular complications, thyroid dysfunction, and pituitary axis dysfunction. Modern radiation therapy aims to reduce side effects to a minimum and to help the patient understand and deal with side effects that are unavoidable. The main side effects reported are fatigue and skin irritation, similar to a mild to moderate sunburn. The fatigue often sets in during the middle of a course of treatment and can last for weeks after treatment ends. The irritated skin will heal, but may not be as elastic as it was before. === Acute side effects === Nausea and vomiting This is not a general side effect of radiation therapy, and mechanistically is associated only with treatment of the stomach or abdomen (which commonly react a few hours after treatment), or with radiation therapy to certain nausea-producing structures in the head during treatment of certain head and neck tumors, most commonly the vestibules of the inner ears. As with any distressing treatment, some patients vomit immediately during radiotherapy, or even in anticipation of it, but this is considered a psychological response. Nausea for any reason can be treated with antiemetics. Damage to the epithelial surfaces Epithelial surfaces may sustain damage from radiation therapy. Depending on the area being treated, this may include the skin, oral mucosa, pharyngeal mucosa, bowel mucosa, and ureter. The rates of onset of damage and recovery from it depend upon the turnover rate of epithelial cells. Typically the skin starts to become pink and sore several weeks into treatment.
The reaction may become more severe during the treatment and for up to about one week following the end of radiation therapy, and the skin may break down. Although this moist desquamation is uncomfortable, recovery is usually quick. Skin reactions tend to be worse in areas where there are natural folds in the skin, such as underneath the female breast, behind the ear, and in the groin. Mouth, throat and stomach sores If the head and neck area is treated, temporary soreness and ulceration commonly occur in the mouth and throat. If severe, this can affect swallowing, and the patient may need painkillers and nutritional support/food supplements. The esophagus can also become sore if it is treated directly, or if, as commonly occurs, it receives a dose of collateral radiation during treatment of lung cancer. When treating liver malignancies and metastases, it is possible for collateral radiation to cause gastric or duodenal ulcers. This collateral radiation is commonly caused by non-targeted delivery (reflux) of the radioactive agents being infused. Methods, techniques and devices are available to lower the occurrence of this type of adverse side effect. Intestinal discomfort The lower bowel may be treated directly with radiation (treatment of rectal or anal cancer) or be exposed by radiation therapy to other pelvic structures (prostate, bladder, female genital tract). Typical symptoms are soreness, diarrhoea, and nausea. Nutritional interventions may be able to help with diarrhoea associated with radiotherapy. Studies in people having pelvic radiotherapy as part of anticancer treatment for a primary pelvic cancer found that changes in dietary fat, fibre and lactose during radiotherapy reduced diarrhoea at the end of treatment. Swelling As part of the general inflammation that occurs, swelling of soft tissues may cause problems during radiation therapy. This is a concern during treatment of brain tumors and brain metastases, especially where there is pre-existing raised intracranial pressure or where the tumor is causing near-total obstruction of a lumen (e.g., trachea or main bronchus). Surgical intervention may be considered prior to treatment with radiation. If surgery is deemed unnecessary or inappropriate, the patient may receive steroids during radiation therapy to reduce swelling. Infertility The gonads (ovaries and testicles) are very sensitive to radiation. They may be unable to produce gametes following direct exposure to most normal treatment doses of radiation. Treatment planning for all body sites is designed to minimize, if not completely exclude, the dose to the gonads if they are not the primary area of treatment. === Late side effects === Late side effects occur months to years after treatment and are generally limited to the area that has been treated. They are often due to damage of blood vessels and connective tissue cells. Many late effects are reduced by fractionating treatment into smaller parts. Fibrosis Tissues which have been irradiated tend to become less elastic over time due to a diffuse scarring process. Epilation Epilation (hair loss) may occur on any hair-bearing skin with doses above 1 Gy. It only occurs within the radiation field(s). Hair loss may be permanent with a single dose of 10 Gy, but if the dose is fractionated, permanent hair loss may not occur until the dose exceeds 45 Gy. Dryness The salivary glands and tear glands have a radiation tolerance of about 30 Gy in 2 Gy fractions, a dose which is exceeded by most radical head and neck cancer treatments.
Dry mouth (xerostomia) and dry eyes (xerophthalmia) can become irritating long-term problems and severely reduce the patient's quality of life. Similarly, sweat glands in treated skin (such as the armpit) tend to stop working, and the naturally moist vaginal mucosa is often dry following pelvic irradiation. Chronic sinus drainage Radiation therapy treatments to the head and neck regions for soft tissue, palate or bone cancer can cause chronic sinus tract draining and fistulae from the bone. Lymphedema Lymphedema, a condition of localized fluid retention and tissue swelling, can result from damage to the lymphatic system sustained during radiation therapy. It is the most commonly reported complication in breast radiation therapy patients who receive adjuvant axillary radiotherapy following surgery to clear the axillary lymph nodes. Cancer Radiation is a potential cause of cancer, and secondary malignancies are seen in some patients. Cancer survivors are already more likely than the general population to develop malignancies, due to a number of factors including lifestyle choices, genetics, and previous radiation treatment. It is difficult to directly quantify the rates of these secondary cancers from any single cause. Studies have found radiation therapy to be the cause of secondary malignancies for only a small minority of patients; e.g., exposure to ionizing radiation is an identified risk factor for subsequent glioma; see main topic Glioma#Causes. The combined risk of a radiation-induced glioblastoma or astrocytoma within 15 years of the initial radiotherapy is 0.5–2.7%. New techniques, such as proton beam therapy and carbon ion radiotherapy, which aim to reduce the dose to healthy tissues, will lower these risks. Secondary malignancy typically starts to occur 4–6 years following treatment, although some haematological malignancies may develop within 3 years. In the vast majority of cases, this risk is greatly outweighed by the reduction in risk conferred by treating the primary cancer, even in pediatric malignancies, which carry a higher burden of secondary malignancies. Cardiovascular disease Radiation can increase the risk of heart disease and death, as observed in previous breast cancer RT regimens. Therapeutic radiation increases the risk of a subsequent cardiovascular event (i.e., heart attack or stroke) by 1.5 to 4 times a person's normal rate, depending on aggravating factors. The increase is dose-dependent, related to the RT's dose strength, volume and location. Use of concomitant chemotherapy, e.g. anthracyclines, is an aggravating risk factor. The occurrence rate of RT-induced cardiovascular disease is estimated between 10 and 30%. Cardiovascular late side effects have been termed radiation-induced heart disease (RIHD) and radiation-induced cardiovascular disease (RIVD). Symptoms are dose-dependent and include cardiomyopathy, myocardial fibrosis, valvular heart disease, coronary artery disease, heart arrhythmia and peripheral artery disease. Radiation-induced fibrosis, vascular cell damage and oxidative stress can lead to these and other late side effect symptoms. Most radiation-induced cardiovascular diseases occur 10 or more years post treatment, making causality determinations more difficult. Cognitive decline When radiation is applied to the head, radiation therapy may cause cognitive decline. Cognitive decline was especially apparent in young children, between the ages of 5 and 11. Studies found, for example, that the IQ of 5-year-old children declined each year after treatment by several IQ points.
Radiation enteropathy The gastrointestinal tract can be damaged following abdominal and pelvic radiotherapy. Atrophy, fibrosis and vascular changes produce malabsorption, diarrhea, steatorrhea and bleeding, with bile acid diarrhea and vitamin B12 malabsorption commonly found due to ileal involvement. Pelvic radiation disease includes radiation proctitis, producing bleeding, diarrhoea and urgency, and can also cause radiation cystitis when the bladder is affected. Lung injury Radiation-induced lung injury (RILI) encompasses radiation pneumonitis and pulmonary fibrosis. Lung tissue is sensitive to ionizing radiation, tolerating only 18–20 Gy, a fraction of typical therapeutic dosage levels. The lung's terminal airways and associated alveoli can become damaged, preventing effective respiratory gas exchange. Radiation-induced lung changes are often asymptomatic, with clinically significant RILI occurrence rates varying widely in the literature, affecting 5–25% of those treated for thoracic and mediastinal malignancies and 1–5% of those treated for breast cancer. Neurogenic lower urinary tract dysfunction Pelvic radiation therapy has been associated with acquired neurogenic bladder dysfunction, the radiation impacting the nerves or structures involved with urinary continence and voiding. The voluntary micturition process is controlled by the central nervous system, synchronizing the bladder's detrusor muscle and the internal and external urethral sphincters. Trauma to the process's components, e.g., the spinal cord, peripheral motor and sensory nerves, and the bladder and urethral sphincters, can cause dysfunction, e.g., dysuria, frequency and incontinence. Radiation-induced polyneuropathy Radiation treatments may damage nerves near the target area or within the delivery path, as nerve tissue is also radiosensitive. Nerve damage from ionizing radiation occurs in phases, the initial phase arising from microvascular injury, capillary damage and nerve demyelination. Subsequent damage occurs from vascular constriction and nerve compression due to uncontrolled fibrous tissue growth caused by radiation. Radiation-induced polyneuropathy, ICD-10-CM code G62.82, occurs in approximately 1–5% of those receiving radiation therapy. Depending upon the irradiated zone, late effect neuropathy may occur in either the central nervous system (CNS) or the peripheral nervous system (PNS). In the CNS, for example, cranial nerve injury typically presents as a visual acuity loss 1–14 years post treatment. In the PNS, injury to the plexus nerves presents as radiation-induced brachial plexopathy or radiation-induced lumbosacral plexopathy appearing up to 3 decades post treatment. Myokymia (muscle cramping, spasms or twitching) may develop. Radiation-induced nerve injury, chronic compressive neuropathies and polyradiculopathies are the most common cause of myokymic discharges. Clinically, the majority of patients receiving radiation therapy have measurable myokymic discharges within their field of radiation, which present as focal or segmental myokymia. Common areas affected include the arms, legs or face, depending upon the location of nerve injury. Myokymia is more frequent when radiation doses exceed 10 gray (Gy). Radiation necrosis Radiation necrosis is the death of healthy tissue near the irradiated site.
It is a type of coagulative necrosis that occurs because the radiation directly or indirectly damages blood vessels in the area, which reduces the blood supply to the remaining healthy tissue, causing it to die by ischemia, similar to what happens in an ischemic stroke. Because it is an indirect effect of the treatment, it occurs months to decades after radiation exposure. Radiation necrosis most commonly presents as osteoradionecrosis, vaginal radionecrosis, soft tissue radionecrosis, or laryngeal radionecrosis. === Cumulative side effects === Cumulative effects of repeated courses of radiation should not be confused with long-term effects – when short-term effects have disappeared and long-term effects are subclinical, reirradiation can still be problematic. These doses are calculated by the radiation oncologist and many factors are taken into account before the subsequent radiation takes place. === Effects on reproduction === During the first two weeks after fertilization, radiation therapy is lethal but not teratogenic. High doses of radiation during pregnancy induce anomalies, impaired growth and intellectual disability, and there may be an increased risk of childhood leukemia and other tumors in the offspring. In males who have previously undergone radiotherapy, there appears to be no increase in genetic defects or congenital malformations in their children conceived after therapy. However, the use of assisted reproductive technologies and micromanipulation techniques might increase this risk. === Effects on pituitary system === Hypopituitarism commonly develops after radiation therapy for sellar and parasellar neoplasms, extrasellar brain tumors, head and neck tumors, and following whole body irradiation for systemic malignancies. 40–50% of children treated for childhood cancer develop some endocrine side effect. Radiation-induced hypopituitarism mainly affects growth hormone and gonadal hormones. In contrast, adrenocorticotrophic hormone (ACTH) and thyroid stimulating hormone (TSH) deficiencies are the least common among people with radiation-induced hypopituitarism. Changes in prolactin secretion are usually mild, and vasopressin deficiency appears to be very rare as a consequence of radiation. === Effects on subsequent surgery === Delayed tissue injury with impaired wound healing capability often develops after receiving doses in excess of 65 Gy. External beam radiotherapy produces a diffuse injury pattern: while the targeted tumor receives the majority of the radiation, healthy tissue at incremental distances from the center of the tumor is also irradiated due to beam divergence. These wounds demonstrate progressive, proliferative endarteritis, an inflammation of arterial linings that disrupts the tissue's blood supply. Such tissue ends up chronically hypoxic, fibrotic, and without an adequate nutrient and oxygen supply. Surgery of previously irradiated tissue has a very high failure rate; for example, women who have received radiation for breast cancer develop late-effect chest wall tissue fibrosis and hypovascularity, making successful reconstruction and healing difficult, if not impossible. === Radiation therapy accidents === There are rigorous procedures in place to minimise the risk of accidental overexposure of radiation therapy to patients.
However, mistakes do occasionally occur; for example, the radiation therapy machine Therac-25 was responsible for at least six accidents between 1985 and 1987, where patients were given up to one hundred times the intended dose; two people were killed directly by the radiation overdoses. From 2005 to 2010, a hospital in Missouri overexposed 76 patients (most with brain cancer) during a five-year period because new radiation equipment had been set up incorrectly. Although medical errors are exceptionally rare, radiation oncologists, medical physicists and other members of the radiation therapy treatment team are working to eliminate them. In 2010, the American Society for Radiation Oncology (ASTRO) launched a safety initiative called Target Safely that, among other things, aimed to record errors nationwide so that doctors can learn from each and every mistake and prevent them from recurring. ASTRO also publishes a list of questions for patients to ask their doctors about radiation safety to ensure every treatment is as safe as possible. == Use in non-cancerous diseases == Radiation therapy is used to treat early stage Dupuytren's disease and Ledderhose disease. When Dupuytren's disease is at the nodules and cords stage, or fingers are at a minimal deformation stage of less than 10 degrees, radiation therapy is used to prevent further progression of the disease. Radiation therapy is also used post-surgery in some cases to prevent the disease continuing to progress. Low doses of radiation are used, typically three gray per day for five days, with a break of three months followed by another phase of three gray per day for five days. == Technique == === Mechanism of action === Radiation therapy works by damaging the DNA of cancer cells and can cause them to undergo mitotic catastrophe. This DNA damage is caused by one of two types of energy, photon or charged particle, through either direct or indirect ionization of the atoms which make up the DNA chain. Indirect ionization happens as a result of the ionization of water, forming free radicals, notably hydroxyl radicals, which then damage the DNA. In photon therapy, most of the radiation effect is through free radicals. Cells have mechanisms for repairing single-strand DNA damage and double-stranded DNA damage. However, double-stranded DNA breaks are much more difficult to repair, and can lead to dramatic chromosomal abnormalities and genetic deletions. Targeting double-stranded breaks increases the probability that cells will undergo cell death. Cancer cells are generally less differentiated and more stem cell-like; they reproduce more than most healthy differentiated cells, and have a diminished ability to repair sub-lethal damage. Single-strand DNA damage is then passed on through cell division; damage to the cancer cells' DNA accumulates, causing them to die or reproduce more slowly. One of the major limitations of photon radiation therapy is that the cells of solid tumors become deficient in oxygen. Solid tumors can outgrow their blood supply, causing a low-oxygen state known as hypoxia. Oxygen is a potent radiosensitizer, increasing the effectiveness of a given dose of radiation by forming DNA-damaging free radicals. Tumor cells in a hypoxic environment may be as much as 2 to 3 times more resistant to radiation damage than those in a normal oxygen environment.
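This resistance factor is commonly summarized by the oxygen enhancement ratio (OER), the ratio of the dose required under hypoxic conditions to the dose required under well-oxygenated conditions to produce the same biological effect: OER = D hypoxic / D aerated {\displaystyle \mathrm {OER} =D_{\text{hypoxic}}/D_{\text{aerated}}} For X-rays the OER is typically around 2.5–3, consistent with the resistance figures quoted above.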
Much research has been devoted to overcoming hypoxia, including the use of high pressure oxygen tanks, hyperthermia therapy (heat therapy which dilates blood vessels to the tumor site), blood substitutes that carry increased oxygen, hypoxic cell radiosensitizer drugs such as misonidazole and metronidazole, and hypoxic cytotoxins (tissue poisons), such as tirapazamine. Newer research approaches are currently being studied, including preclinical and clinical investigations into the use of an oxygen diffusion-enhancing compound such as trans sodium crocetinate as a radiosensitizer. Charged particles such as protons and boron, carbon, and neon ions can cause direct damage to cancer cell DNA through high linear energy transfer (LET), and have an antitumor effect independent of tumor oxygen supply, because these particles act mostly via direct energy transfer, usually causing double-stranded DNA breaks. Due to their relatively large mass, protons and other charged particles have little lateral side scatter in the tissue – the beam does not broaden much, stays focused on the tumor shape, and delivers only a small dose to surrounding tissue. They also more precisely target the tumor using the Bragg peak effect. See proton therapy for a good example of the different effects of intensity-modulated radiation therapy (IMRT) vs. charged particle therapy. This procedure reduces damage to healthy tissue between the charged particle radiation source and the tumor and sets a finite range for tissue damage after the tumor has been reached. In contrast, IMRT's use of uncharged particles causes its energy to damage healthy cells when it exits the body. This exiting damage is not therapeutic, can increase treatment side effects, and increases the probability of secondary cancer induction. This difference is very important in cases where the close proximity of other organs makes any stray ionization very damaging (for example, head and neck cancers). This X-ray exposure is especially harmful for children, due to their growing bodies; depending on a multitude of factors, they are around 10 times more sensitive to developing secondary malignancies after radiotherapy than adults. === Dose === The amount of radiation used in photon radiation therapy is measured in grays (Gy), and varies depending on the type and stage of cancer being treated. For curative cases, the typical dose for a solid epithelial tumor ranges from 60 to 80 Gy, while lymphomas are treated with 20 to 40 Gy. Preventive (adjuvant) doses are typically around 45–60 Gy in 1.8–2 Gy fractions (for breast, head, and neck cancers). Many other factors are considered by radiation oncologists when selecting a dose, including whether the patient is receiving chemotherapy, patient comorbidities, whether radiation therapy is being administered before or after surgery, and the degree of success of surgery. Delivery parameters of a prescribed dose are determined during treatment planning (part of dosimetry). Treatment planning is generally performed on dedicated computers using specialized treatment planning software. Depending on the radiation delivery method, several angles or sources may be used to sum to the total necessary dose. The planner will try to design a plan that delivers a uniform prescription dose to the tumor and minimizes dose to surrounding healthy tissues. In radiation therapy, three-dimensional dose distributions may be evaluated using the dosimetry technique known as gel dosimetry.
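A planned three-dimensional dose distribution is commonly summarized as a dose-volume histogram (DVH). As a rough illustration of the idea, the sketch below computes a cumulative DVH and a D95 coverage metric for a toy dose grid; the grid, the structure mask, and all dose values are invented for illustration rather than taken from any real plan.

```python
import numpy as np

# Toy 3-D dose grid in gray (Gy); in practice this comes from the
# treatment planning system. All values here are invented.
rng = np.random.default_rng(0)
dose = rng.normal(loc=60.0, scale=5.0, size=(40, 40, 40)).clip(min=0.0)

# Boolean mask selecting the voxels of one structure (e.g., the target
# volume); in practice it is derived from the physician's contours.
mask = np.zeros(dose.shape, dtype=bool)
mask[10:30, 10:30, 10:30] = True

def cumulative_dvh(dose, mask, bin_width=0.5):
    """Fraction of the structure's volume receiving at least each dose level."""
    d = dose[mask]
    levels = np.arange(0.0, d.max() + bin_width, bin_width)
    volume_fraction = np.array([(d >= level).mean() for level in levels])
    return levels, volume_fraction

levels, vol = cumulative_dvh(dose, mask)
# D95: the highest dose level still covering at least 95% of the volume,
# a common plan-quality metric for target coverage.
d95 = levels[vol >= 0.95].max()
print(f"D95 = {d95:.1f} Gy")
```

Planners compare such curves for the target, where the curve should stay high out to the prescription dose, and for each organ at risk, where it should fall off quickly.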
==== Fractionation ==== The total dose is fractionated (spread out over time) for several important reasons. Fractionation allows normal cells time to recover, while tumor cells are generally less efficient in repair between fractions. Fractionation also allows tumor cells that were in a relatively radio-resistant phase of the cell cycle during one treatment to cycle into a sensitive phase of the cycle before the next fraction is given. Similarly, tumor cells that were chronically or acutely hypoxic (and therefore more radioresistant) may reoxygenate between fractions, improving the tumor cell kill. Fractionation regimens are individualised between different radiation therapy centers and even between individual doctors. In North America, Australia, and Europe, the typical fractionation schedule for adults is 1.8 to 2 Gy per day, five days a week. In some cancer types, prolonging the fractionation schedule for too long can allow the tumor to begin repopulating, and for these tumor types, including head-and-neck and cervical squamous cell cancers, radiation treatment is preferably completed within a certain amount of time. For children, a typical fraction size may be 1.5 to 1.8 Gy per day, as smaller fraction sizes are associated with reduced incidence and severity of late-onset side effects in normal tissues. In some cases, two fractions per day are used near the end of a course of treatment. This schedule, known as a concomitant boost regimen or hyperfractionation, is used on tumors that regenerate more quickly when they are smaller. In particular, tumors in the head-and-neck demonstrate this behavior. Patients receiving palliative radiation to treat uncomplicated painful bone metastasis should not receive more than a single fraction of radiation. A single treatment gives comparable pain relief and morbidity outcomes to multiple-fraction treatments, and for patients with limited life expectancy, a single treatment is best to improve patient comfort. ==== Schedules for fractionation ==== One fractionation schedule that is increasingly being used and continues to be studied is hypofractionation. This is a radiation treatment in which the total dose of radiation is divided into fewer, larger doses. Typical doses vary significantly by cancer type, from 2.2 Gy/fraction to 20 Gy/fraction, the latter being typical of stereotactic treatments (stereotactic ablative body radiotherapy, or SABR – also known as SBRT, or stereotactic body radiotherapy) for subcranial lesions, or SRS (stereotactic radiosurgery) for intracranial lesions. The rationale of hypofractionation is to reduce the probability of local recurrence by denying clonogenic cells the time they require to reproduce, and also to exploit the radiosensitivity of some tumors. In particular, stereotactic treatments are intended to destroy clonogenic cells by a process of ablation, i.e., the delivery of a dose intended to destroy clonogenic cells directly, rather than to interrupt the process of clonogenic cell division repeatedly (apoptosis), as in routine radiotherapy. ==== Estimation of dose based on target sensitivity ==== Different cancer types have different radiation sensitivity. While predicting the sensitivity based on genomic or proteomic analyses of biopsy samples has proven challenging, the predictions of radiation effect on individual patients from genomic signatures of intrinsic cellular radiosensitivity have been shown to associate with clinical outcome.
An alternative approach to genomics and proteomics was offered by the discovery that radiation protection in microbes is conferred by non-enzymatic complexes of manganese and small organic metabolites. The content and variation of manganese (measurable by electron paramagnetic resonance) were found to be good predictors of radiosensitivity, and this finding extends also to human cells. An association was confirmed between total cellular manganese contents and their variation, and clinically inferred radioresponsiveness in different tumor cells, a finding that may be useful for more precise radiodosages and improved treatment of cancer patients. == Types == Historically, the three main divisions of radiation therapy are: external beam radiation therapy (EBRT or XRT), or teletherapy; brachytherapy, or sealed source radiation therapy; and systemic radioisotope therapy, or unsealed source radiotherapy. The differences relate to the position of the radiation source; external is outside the body, brachytherapy uses sealed radioactive sources placed precisely in the area under treatment, and systemic radioisotopes are given by infusion or oral ingestion. Brachytherapy can use temporary or permanent placement of radioactive sources. The temporary sources are usually placed by a technique called afterloading. In afterloading, a hollow tube or applicator is placed surgically in the organ to be treated, and the sources are loaded into the applicator after the applicator is implanted. This minimizes radiation exposure to health care personnel. Particle therapy is a special case of external beam radiation therapy where the particles are protons or heavier ions. A review of radiation therapy randomised clinical trials from 2018 to 2021 found much practice-changing data and many new concepts emerging from RCTs, identifying techniques that improve the therapeutic ratio, techniques that lead to more tailored treatments, stressing the importance of patient satisfaction, and identifying areas that require further study. === External beam radiation therapy === The following three sections refer to treatment using X-rays. ==== Conventional external beam radiation therapy ==== Historically, conventional external beam radiation therapy (2DXRT) was delivered via two-dimensional beams using kilovoltage therapy X-ray units, medical linear accelerators that generate high-energy X-rays, or machines that were similar to a linear accelerator in appearance but used a sealed radioactive source. 2DXRT mainly consists of single beams of radiation delivered to the patient from several directions: often front or back, and both sides. "Conventional" refers to the way the treatment is planned or simulated on a specially calibrated diagnostic X-ray machine known as a simulator, because it recreates the linear accelerator actions (or sometimes by eye), and to the usually well-established arrangements of the radiation beams to achieve a desired plan. The aim of simulation is to accurately target or localize the volume which is to be treated. This technique is well established and is generally quick and reliable. The concern is that some high-dose treatments may be limited by the radiation toxicity capacity of healthy tissues which lie close to the target tumor volume.
An example of this problem is seen in radiation of the prostate gland, where the sensitivity of the adjacent rectum limited the dose which could be safely prescribed using 2DXRT planning to such an extent that tumor control may not be easily achievable. Prior to the invention of CT, physicians and physicists had limited knowledge about the true radiation dosage delivered to both cancerous and healthy tissue. For this reason, 3-dimensional conformal radiation therapy has become the standard treatment for almost all tumor sites. More recently, other forms of imaging are used, including MRI, PET, SPECT and ultrasound. ==== Stereotactic radiation ==== Stereotactic radiation is a specialized type of external beam radiation therapy. It uses focused radiation beams targeting a well-defined tumor using extremely detailed imaging scans. Radiation oncologists perform stereotactic treatments, often with the help of a neurosurgeon for tumors in the brain or spine. There are two types of stereotactic radiation. Stereotactic radiosurgery (SRS) refers to a single or several stereotactic radiation treatments of the brain or spine. Stereotactic body radiation therapy (SBRT) refers to one or several stereotactic radiation treatments of the body, such as the lungs. Some doctors say an advantage of stereotactic treatments is that they deliver the right amount of radiation to the cancer in a shorter amount of time than traditional treatments, which can often take 6 to 11 weeks. In addition, treatments are given with extreme accuracy, which should limit the effect of the radiation on healthy tissues. One problem with stereotactic treatments is that they are only suitable for certain small tumors. Stereotactic treatments can be confusing because many hospitals call the treatments by the name of the manufacturer rather than calling them SRS or SBRT. Brand names for these treatments include Axesse, Cyberknife, Gamma Knife, Novalis, Primatom, Synergy, X-Knife, TomoTherapy, Trilogy and Truebeam. This list changes as equipment manufacturers continue to develop new, specialized technologies to treat cancers. ==== Virtual simulation and 3-dimensional conformal radiation therapy ==== The planning of radiation therapy treatment has been revolutionized by the ability to delineate tumors and adjacent normal structures in three dimensions using specialized CT and/or MRI scanners and planning software. Virtual simulation, the most basic form of planning, allows more accurate placement of radiation beams than is possible using conventional X-rays, where soft-tissue structures are often difficult to assess and normal tissues difficult to protect. An enhancement of virtual simulation is 3-dimensional conformal radiation therapy (3DCRT), in which the profile of each radiation beam is shaped to fit the profile of the target from a beam's eye view (BEV) using a multileaf collimator (MLC) and a variable number of beams. When the treatment volume conforms to the shape of the tumor, the relative toxicity of radiation to the surrounding normal tissues is reduced, allowing a higher dose of radiation to be delivered to the tumor than conventional techniques would allow. ==== Intensity-modulated radiation therapy (IMRT) ==== Intensity-modulated radiation therapy (IMRT) is an advanced type of high-precision radiation that is the next generation of 3DCRT.
IMRT also improves the ability to conform the treatment volume to concave tumor shapes, for example when the tumor is wrapped around a vulnerable structure such as the spinal cord or a major organ or blood vessel. Computer-controlled X-ray accelerators distribute precise radiation doses to malignant tumors or specific areas within the tumor. The pattern of radiation delivery is determined using highly tailored computing applications to perform optimization and treatment simulation (treatment planning). The radiation dose is made to conform to the 3-D shape of the tumor by controlling, or modulating, the radiation beam's intensity. The radiation dose intensity is elevated near the gross tumor volume while radiation among the neighboring normal tissues is decreased or avoided completely. This results in better tumor targeting, lessened side effects, and improved treatment outcomes compared with even 3DCRT. 3DCRT is still used extensively for many body sites, but the use of IMRT is growing in more complicated body sites such as CNS, head and neck, prostate, breast, and lung. Unfortunately, IMRT is limited by its need for additional time from experienced medical personnel. This is because physicians must manually delineate the tumors one CT image at a time through the entire disease site, which can take much longer than 3DCRT preparation. Then, medical physicists and dosimetrists must be engaged to create a viable treatment plan. Also, the IMRT technology has only been used commercially since the late 1990s even at the most advanced cancer centers, so radiation oncologists who did not learn it as part of their residency programs must find additional sources of education before implementing IMRT. Proof of improved survival benefit from either of these two techniques over conventional radiation therapy (2DXRT) is growing for many tumor sites, but the ability to reduce toxicity is generally accepted. This is particularly the case for head and neck cancers in a series of pivotal trials performed by Professor Christopher Nutting of the Royal Marsden Hospital. Both techniques enable dose escalation, potentially increasing usefulness. There has been some concern, particularly with IMRT, about increased exposure of normal tissue to radiation and the consequent potential for secondary malignancy. Overconfidence in the accuracy of imaging may increase the chance of missing lesions that are invisible on the planning scans (and therefore not included in the treatment plan) or that move between or during a treatment (for example, due to respiration or inadequate patient immobilization). New techniques are being developed to better control this uncertainty – for example, real-time imaging combined with real-time adjustment of the therapeutic beams. This new technology is called image-guided radiation therapy or four-dimensional radiation therapy. Another technique is the real-time tracking and localization of one or more small implantable electronic devices placed inside or close to the tumor. There are various types of medical implantable devices that are used for this purpose. One is a magnetic transponder, which senses the magnetic field generated by several transmitting coils and transmits the measurements back to the positioning system to determine its location. The implantable device can also be a small wireless transmitter sending out an RF signal which is then received by a sensor array and used for localization and real-time tracking of the tumor position.
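The optimization step mentioned above can be pictured with a deliberately tiny model. The sketch below is a minimal illustration of the fluence-optimization idea rather than any clinical algorithm: it assumes a small dose-influence matrix mapping beamlet intensities to voxel doses (all numbers invented) and fits nonnegative beamlet weights by projected gradient descent, so that tumor voxels approach the prescribed dose while healthy voxels are penalized.

```python
import numpy as np

# Toy dose-influence matrix: dose[voxel] = D @ weights. In a real IMRT
# system this matrix is precomputed by the planning software; the numbers
# here are random placeholders.
rng = np.random.default_rng(1)
n_voxels, n_beamlets = 6, 4
D = rng.uniform(0.0, 1.0, size=(n_voxels, n_beamlets))

# First three voxels are tumor (prescribe 2 Gy per fraction); the rest
# are healthy tissue (ideally 0 Gy). Underdosing the tumor is penalized
# more heavily than dosing healthy tissue.
prescription = np.array([2.0, 2.0, 2.0, 0.0, 0.0, 0.0])
importance = np.array([10.0, 10.0, 10.0, 1.0, 1.0, 1.0])

# Step size chosen from the curvature of the quadratic objective so the
# iteration converges.
hessian = D.T @ (importance[:, None] * D)
step = 1.0 / np.linalg.eigvalsh(hessian).max()

weights = np.zeros(n_beamlets)
for _ in range(5000):
    residual = D @ weights - prescription
    gradient = D.T @ (importance * residual)
    # Project onto the feasible set: beamlet intensities cannot be negative.
    weights = np.maximum(weights - step * gradient, 0.0)

print("beamlet weights:", np.round(weights, 3))
print("delivered dose: ", np.round(D @ weights, 2))
```

Clinical systems solve far larger versions of this problem, with dose-volume constraints and deliverability restrictions on the MLC, but the underlying weighted least-squares structure is broadly similar.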
A well-studied issue with IMRT is the "tongue and groove effect", which results in unwanted underdosing due to irradiating through extended tongues and grooves of overlapping MLC (multileaf collimator) leaves. While solutions to this issue have been developed, which either reduce the TG effect to negligible amounts or remove it completely, they depend upon the method of IMRT being used and some of them carry costs of their own. Some texts distinguish "tongue and groove error" from "tongue or groove error", according to whether both sides or one side of the aperture is occluded. ==== Volumetric modulated arc therapy (VMAT) ==== Volumetric modulated arc therapy (VMAT) is a radiation technique introduced in 2007 which can achieve highly conformal dose distributions on target volume coverage and sparing of normal tissues. The distinguishing feature of this technique is that it modifies three parameters during the treatment: VMAT delivers radiation by rotating the gantry (usually 360° rotating fields with one or more arcs), changing the speed and shape of the beam with a multileaf collimator (MLC) (a "sliding window" system of movement), and changing the fluence output rate (dose rate) of the medical linear accelerator. VMAT has an advantage in patient treatment, compared with conventional static field intensity-modulated radiotherapy (IMRT), of reduced radiation delivery times. Comparisons between VMAT and conventional IMRT for their sparing of healthy tissues and organs at risk (OAR) depend upon the cancer type. In the treatment of nasopharyngeal, oropharyngeal and hypopharyngeal carcinomas, VMAT provides equivalent or better protection of the organs at risk. In the treatment of prostate cancer, the OAR protection result is mixed, with some studies favoring VMAT and others favoring IMRT. ==== Temporally feathered radiation therapy (TFRT) ==== Temporally feathered radiation therapy (TFRT) is a radiation technique introduced in 2018 which aims to use the inherent non-linearities in normal tissue repair to allow for sparing of these tissues without affecting the dose delivered to the tumor. The application of this technique, which has yet to be automated, has been described carefully to enhance the ability of departments to perform it, and in 2021 it was reported as feasible in a small clinical trial, though its efficacy has yet to be formally studied. ==== Automated planning ==== Automated treatment planning has become an integrated part of radiotherapy treatment planning. There are in general two approaches to automated planning: 1) knowledge-based planning, where the treatment planning system has a library of high-quality plans, from which it can predict dose-volume histograms for the target and organs at risk; and 2) protocol-based planning, where the treatment planning system tries to mimic an experienced treatment planner and, through an iterative process, evaluates plan quality on the basis of the protocol. ==== Particle therapy ==== In particle therapy (proton therapy being one example), energetic ionizing particles (protons or carbon ions) are directed at the target tumor. The dose increases while the particle penetrates the tissue, up to a maximum (the Bragg peak) that occurs near the end of the particle's range, and it then drops to (almost) zero. The advantage of this energy deposition profile is that less energy is deposited into the healthy tissue surrounding the target tissue.
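The depth of the Bragg peak is set by the beam energy. A rough rule of thumb is the Bragg–Kleeman relation, which the sketch below uses to estimate proton range in water; the constants are approximate textbook values and should be treated as an assumption, not as planning data.

```python
# Approximate Bragg-Kleeman relation for protons in water:
# range ~= alpha * E**p, with E in MeV and range in cm. The constants
# below are approximate values from the radiotherapy physics literature;
# treat them as an assumption, not as clinical planning data.
def proton_range_cm(energy_mev, alpha=0.0022, p=1.77):
    return alpha * energy_mev ** p

for e in (70, 150, 230):
    print(f"{e:3d} MeV -> range of roughly {proton_range_cm(e):.0f} cm in water")
```

The estimates (roughly 4 cm at 70 MeV and over 30 cm at 230 MeV) span the tumor depths encountered clinically, which is why treatment energies of about 70–250 MeV are typical for proton facilities.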
==== Auger therapy ==== Auger therapy (AT) makes use of a very high dose of ionizing radiation in situ that provides molecular modifications at an atomic scale. AT differs from conventional radiation therapy in several aspects: it neither relies upon radioactive nuclei to cause cellular radiation damage at a cellular dimension, nor engages multiple external pencil-beams from different directions that zero in to deliver a dose to the targeted area with reduced dose outside the targeted tissue/organ locations. Instead, the in situ delivery of a very high dose at the molecular level using AT aims for in situ molecular modifications involving molecular breakages and molecular re-arrangements, such as a change of stacking structures as well as cellular metabolic functions related to those molecular structures. ==== Motion compensation ==== In many types of external beam radiotherapy, motion can negatively impact the treatment delivery by moving target tissue out of, or other healthy tissue into, the intended beam path. Some form of patient immobilisation is common, to prevent large movements of the body during treatment; however, this cannot prevent all motion, for example as a result of breathing. Several techniques have been developed to account for motion like this. Deep inspiration breath-hold (DIBH) is commonly used for breast treatments, where it is important to avoid irradiating the heart. In DIBH, the patient holds their breath after breathing in to provide a stable position for the treatment beam to be turned on. This can be done automatically using an external monitoring system such as a spirometer or a camera and markers. The same monitoring techniques, as well as 4DCT imaging, can also be used for respiratory-gated treatment, where the patient breathes freely and the beam is only engaged at certain points in the breathing cycle. Other techniques include using 4DCT imaging to plan treatments with margins that account for motion, and active movement of the treatment couch, or beam, to follow motion. === Contact X-ray brachytherapy === Contact X-ray brachytherapy (also called "CXB", "electronic brachytherapy" or the "Papillon Technique") is a type of radiation therapy using low energy (50 kVp) kilovoltage X-rays applied directly to the tumor to treat rectal cancer. The process involves first an endoscopic examination to identify the tumor in the rectum, then insertion of a treatment applicator through the anus into the rectum, placed against the cancerous tissue. Finally, a treatment tube is inserted into the applicator to deliver high doses of X-rays (30 Gy) directly onto the tumor, given three times at two-week intervals over a four-week period. It is typically used for treating early rectal cancer in patients who may not be candidates for surgery. A 2015 NICE review found the main side effect to be bleeding, which occurred in about 38% of cases, and radiation-induced ulcers, which occurred in 27% of cases. === Brachytherapy (sealed source radiotherapy) === Brachytherapy is delivered by placing radiation source(s) inside or next to the area requiring treatment. Brachytherapy is commonly used as an effective treatment for cervical, prostate, breast, and skin cancer and can also be used to treat tumors in many other body sites. In brachytherapy, radiation sources are precisely placed directly at the site of the cancerous tumor. This means that the irradiation only affects a very localized area – exposure to radiation of healthy tissues further away from the sources is reduced.
These characteristics of brachytherapy provide advantages over external beam radiation therapy – the tumor can be treated with very high doses of localized radiation whilst reducing the probability of unnecessary damage to surrounding healthy tissues. A course of brachytherapy can often be completed in less time than other radiation therapy techniques. This can help reduce the chance of surviving cancer cells dividing and growing in the intervals between each radiation therapy dose. As one example of the localized nature of breast brachytherapy, the SAVI device delivers the radiation dose through multiple catheters, each of which can be individually controlled. This approach decreases the exposure of healthy tissue and resulting side effects, compared both to external beam radiation therapy and older methods of breast brachytherapy. === Radionuclide therapy === Radionuclide therapy (also known as systemic radioisotope therapy, radiopharmaceutical therapy, or molecular radiotherapy) is a form of targeted therapy. Targeting can be due to the chemical properties of the isotope, such as radioiodine, which is absorbed by the thyroid gland a thousandfold better than by other bodily organs. Targeting can also be achieved by attaching the radioisotope to another molecule or antibody to guide it to the target tissue. The radioisotopes are delivered through infusion (into the bloodstream) or ingestion. Examples are the infusion of metaiodobenzylguanidine (MIBG) to treat neuroblastoma, oral iodine-131 to treat thyroid cancer or thyrotoxicosis, and hormone-bound lutetium-177 and yttrium-90 to treat neuroendocrine tumors (peptide receptor radionuclide therapy). Another example is the injection of radioactive yttrium-90 or holmium-166 microspheres into the hepatic artery to radioembolize liver tumors or liver metastases. These microspheres are used for the treatment approach known as selective internal radiation therapy. The microspheres are approximately 30 μm in diameter (about one-third the width of a human hair) and are delivered directly into the artery supplying blood to the tumors. These treatments begin by guiding a catheter up through the femoral artery in the leg, navigating to the desired target site and administering treatment. The blood feeding the tumor will carry the microspheres directly to the tumor, enabling a more selective approach than traditional systemic chemotherapy. There are currently three different kinds of microspheres: SIR-Spheres, TheraSphere and QuiremSpheres. A major use of systemic radioisotope therapy is in the treatment of bone metastasis from cancer. The radioisotopes travel selectively to areas of damaged bone, and spare normal undamaged bone. Isotopes commonly used in the treatment of bone metastasis are radium-223, strontium-89 and samarium (153Sm) lexidronam. In 2002, the United States Food and Drug Administration (FDA) approved ibritumomab tiuxetan (Zevalin), which is an anti-CD20 monoclonal antibody conjugated to yttrium-90. In 2003, the FDA approved the tositumomab/iodine (131I) tositumomab regimen (Bexxar), which is a combination of an iodine-131 labelled and an unlabelled anti-CD20 monoclonal antibody. These medications were the first agents of what is known as radioimmunotherapy, and they were approved for the treatment of refractory non-Hodgkin's lymphoma.
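Dosimetry for systemic radioisotopes has to account for both physical decay and biological clearance of the agent, which are conventionally combined into an effective half-life via 1/T_eff = 1/T_phys + 1/T_bio. The sketch below applies this standard relation; the physical half-life of iodine-131 (about 8.0 days) is a well-known constant, while the biological half-life used is purely an illustrative assumption, since real clearance varies by patient and tissue.

```python
def effective_half_life(t_physical, t_biological):
    """Combine physical decay and biological clearance half-lives (same units):
    1/T_eff = 1/T_phys + 1/T_bio."""
    return 1.0 / (1.0 / t_physical + 1.0 / t_biological)

# Iodine-131 has a physical half-life of about 8.0 days; the biological
# half-life below is an illustrative assumption, not a clinical value.
t_eff = effective_half_life(t_physical=8.0, t_biological=20.0)
print(f"effective half-life ~ {t_eff:.1f} days")
```

The effective half-life is always shorter than either component, which is why even long-lived isotopes can clear relatively quickly when biological excretion is fast.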
=== Intraoperative radiotherapy === Intraoperative radiation therapy (IORT) is the application of therapeutic levels of radiation to a target area, such as a cancer tumor, while the area is exposed during surgery. ==== Rationale ==== The rationale for IORT is to deliver a high dose of radiation precisely to the targeted area with minimal exposure of surrounding tissues, which are displaced or shielded during the IORT. Conventional radiation techniques such as external beam radiotherapy (EBRT) following surgical removal of the tumor have several drawbacks: the tumor bed where the highest dose should be applied is frequently missed due to the complex localization of the wound cavity, even when modern radiotherapy planning is used. Additionally, the usual delay between the surgical removal of the tumor and EBRT may allow a repopulation of the tumor cells. These potentially harmful effects can be avoided by delivering the radiation more precisely to the targeted tissues, leading to immediate sterilization of residual tumor cells. Another aspect is that wound fluid has a stimulating effect on tumor cells. IORT was found to inhibit the stimulating effects of wound fluid. == History == Medicine has used radiation therapy as a treatment for cancer for more than 100 years, with its earliest roots traced to the discovery of X-rays in 1895 by Wilhelm Röntgen. Emil Grubbe of Chicago was possibly the first American physician to use X-rays to treat cancer, beginning in 1896. The field of radiation therapy began to grow in the early 1900s largely due to the groundbreaking work of Nobel Prize–winning scientist Marie Curie (1867–1934), who discovered the radioactive elements polonium and radium in 1898. This began a new era in medical treatment and research. Through the 1920s the hazards of radiation exposure were not understood, and little protection was used. Radium was believed to have wide curative powers, and radiotherapy was applied to many diseases. Prior to World War II, the only practical sources of radiation for radiotherapy were radium, its "emanation" (radon gas), and the X-ray tube. External beam radiotherapy (teletherapy) began at the turn of the century with relatively low voltage (<150 kV) X-ray machines. It was found that while superficial tumors could be treated with low voltage X-rays, more penetrating, higher energy beams, requiring higher voltages, were needed to reach tumors inside the body. Orthovoltage X-rays, which used tube voltages of 200–500 kV, began to be used during the 1920s. Reaching the most deeply buried tumors without exposing intervening skin and tissue to dangerous radiation doses required rays with energies of 1 MV or above, called "megavolt" radiation. Producing megavolt X-rays required voltages on the X-ray tube of 3 to 5 million volts, which required huge, expensive installations. Megavoltage X-ray units were first built in the late 1930s but because of cost were limited to a few institutions. One of the first, installed at St Bartholomew's Hospital, London, in 1937 and used until 1960, used a 30-foot-long X-ray tube and weighed 10 tons. Radium produced megavolt gamma rays, but was extremely rare and expensive due to its low occurrence in ores. In 1937 the entire world supply of radium for radiotherapy was 50 grams, valued at £800,000, or $50 million in 2005 dollars. The invention of the nuclear reactor in the Manhattan Project during World War II made possible the production of artificial radioisotopes for radiotherapy.
Cobalt therapy – teletherapy machines using megavolt gamma rays emitted by cobalt-60, a radioisotope produced by irradiating ordinary cobalt metal in a reactor – revolutionized the field between the 1950s and the early 1980s. Cobalt machines were relatively cheap, robust and simple to use, although due to its 5.27-year half-life the cobalt had to be replaced about every 5 years. Medical linear particle accelerators, developed since the 1940s, began replacing X-ray and cobalt units in the 1980s, and these older therapies are now declining. The first medical linear accelerator was used at the Hammersmith Hospital in London in 1953. Linear accelerators can produce higher energies, have more collimated beams, and, unlike radioisotope therapies, do not produce radioactive waste with its attendant disposal problems. With Godfrey Hounsfield's invention of computed tomography (CT) in 1971, three-dimensional planning became a possibility and created a shift from 2-D to 3-D radiation delivery. CT-based planning allows physicians to more accurately determine the dose distribution using axial tomographic images of the patient's anatomy. The advent of new imaging technologies, including magnetic resonance imaging (MRI) in the 1970s and positron emission tomography (PET) in the 1980s, has moved radiation therapy from 3-D conformal to intensity-modulated radiation therapy (IMRT) and to image-guided radiation therapy and tomotherapy. These advances allowed radiation oncologists to better see and target tumors, which has resulted in better treatment outcomes, more organ preservation and fewer side effects. While access to radiotherapy is improving globally, more than half of patients in low and middle income countries still did not have access to the therapy as of 2017. == See also == == References == == Further reading == == External links ==
Information
Human Health Campus – the official website of the International Atomic Energy Agency dedicated to professionals in radiation medicine; managed by the Division of Human Health, Department of Nuclear Sciences and Applications
RT Answers – ASTRO: patient information site
The Radiation Therapy Oncology Group: an organisation for radiation oncology research
RadiologyInfo – the radiology information resource for patients: Radiation Therapy
Source of cancer stem cells' resistance to radiation explained on YouTube
Biologically equivalent dose calculator
Radiobiology Treatment Gap Compensator Calculator
About the profession
PROS (Paediatric Radiation Oncology Society)
American Society for Radiation Oncology
European Society for Radiotherapy and Oncology
Who does what in Radiation Oncology? – responsibilities of the various personnel within radiation oncology in the United States
Accidents and QA
Verification of dose calculations in radiation therapy
Radiation Safety in External Beam Radiotherapy (IAEA)
Wikipedia/Radiotherapy
Medical Physics is a monthly peer-reviewed scientific journal covering research on medical physics. The first issue was published in January 1974. Medical Physics is an official journal of the American Association of Physicists in Medicine, the Canadian Organization of Medical Physicists, the Canadian College of Physicists in Medicine and the International Organization for Medical Physics. The editor-in-chief is John M. Boone (UC Davis Medical Center). In 2013, the journal announced that it was changing from a traditional to a hybrid open-access format. In 2017, the journal transferred from being published by the American Institute of Physics to Wiley. == Abstracting and indexing == Medical Physics is indexed in:
Chemical Abstracts Service
Index Medicus/MEDLINE/PubMed
Scopus
According to the Journal Citation Reports, the journal has a 2021 impact factor of 4.506, ranking it 44th out of 136 journals in the category "Radiology, Nuclear Medicine & Medical Imaging". == References == == External links == Official website
Wikipedia/Medical_Physics_(journal)
Photodynamic therapy (PDT) is a form of phototherapy involving light and a photosensitizing chemical substance used in conjunction with molecular oxygen to elicit cell death (phototoxicity). PDT is used in treating acne, wet age-related macular degeneration, psoriasis, and herpes. It is also used to treat malignant cancers, including head and neck, lung, bladder and skin cancers. Its advantages include a reduced need for delicate surgery and lengthy recuperation, and minimal formation of scar tissue and disfigurement. A side effect is the associated photosensitisation of skin tissue. == Basics == PDT applications involve three components: a photosensitizer, a light source and tissue oxygen. The wavelength of the light source needs to be appropriate for exciting the photosensitizer to produce radicals and/or reactive oxygen species. These are free radicals (Type I), generated through electron abstraction or transfer from a substrate molecule, and a highly reactive state of oxygen known as singlet oxygen (Type II). PDT is a multi-stage process. First a photosensitiser, ideally with negligible toxicity other than its phototoxicity, is administered in the absence of light, either systemically or topically. When a sufficient amount of photosensitiser appears in diseased tissue, the photosensitiser is activated by exposure to light for a specified period. The light dose supplies sufficient energy to stimulate the photosensitiser, but not enough to damage neighbouring healthy tissue. The reactive oxygen kills the target cells. === Reactive oxygen species === In air and tissue, molecular oxygen (O2) occurs in a triplet state, whereas almost all other molecules are in a singlet state. Reactions between triplet and singlet molecules are forbidden by quantum mechanics, making oxygen relatively non-reactive under physiological conditions. A photosensitizer is a chemical compound that can be promoted to an excited state upon absorption of light and undergo intersystem crossing (ISC); its triplet state can then interact with oxygen to produce singlet oxygen. This species is highly cytotoxic, rapidly attacking any organic compounds it encounters. It is rapidly eliminated from cells, in an average of 3 μs. === Photochemical processes === When a photosensitiser is in its excited state (3Psen*) it can interact with molecular triplet oxygen (3O2) and produce radicals and reactive oxygen species (ROS), crucial to the Type II mechanism. These species include singlet oxygen (1O2), hydroxyl radicals (•OH) and superoxide (O2−) ions. They can interact with cellular components including unsaturated lipids, amino acid residues and nucleic acids. If sufficient oxidative damage ensues, this will result in target-cell death (only within the illuminated area). === Photochemical mechanisms === When a chromophore molecule, such as a cyclic tetrapyrrolic molecule, absorbs a photon, one of its electrons is promoted into a higher-energy orbital, elevating the chromophore from the ground state (S0) into a short-lived, electronically excited state (Sn) composed of vibrational sub-levels (Sn′). The excited chromophore can lose energy by rapidly decaying through these sub-levels via internal conversion (IC) to populate the first excited singlet state (S1), before quickly relaxing back to the ground state. The decay from the excited singlet state (S1) to the ground state (S0) is via fluorescence (S1 → S0). Singlet state lifetimes of excited fluorophores are very short (τfl.
= 10−9–10−6 seconds) since transitions between states of the same spin multiplicity (S → S or T → T) conserve the spin of the electron and, according to the spin selection rules, are therefore considered "allowed" transitions. Alternatively, an excited singlet state electron (S1) can undergo spin inversion and populate the lower-energy first excited triplet state (T1) via intersystem crossing (ISC), a spin-forbidden process, since the spin of the electron is no longer conserved. The excited electron can then undergo a second spin-forbidden inversion and depopulate the excited triplet state (T1) by decaying to the ground state (S0) via phosphorescence (T1 → S0). Owing to the spin-forbidden triplet-to-singlet transition, the lifetime of phosphorescence (τP = 10−3–1 seconds) is considerably longer than that of fluorescence. === Photosensitisers and photochemistry === Tetrapyrrolic photosensitisers in the excited singlet state (1Psen*, S>0) are relatively efficient at intersystem crossing and can consequently have a high triplet-state quantum yield. The longer lifetime of this species is sufficient to allow the excited triplet state photosensitiser to interact with surrounding bio-molecules, including cell membrane constituents. === Photochemical reactions === Excited triplet-state photosensitisers can react via Type-I and Type-II processes. Type-I processes can involve the excited singlet or triplet photosensitiser (1Psen*, S1; 3Psen*, T1); however, due to the short lifetime of the excited singlet state, the photosensitiser can only react in this state if it is intimately associated with a substrate. In both cases the interaction is with readily oxidisable or reducible substrates. Type-II processes involve the direct interaction of the excited triplet photosensitiser (3Psen*, T1) with molecular oxygen (3O2, 3Σg). ==== Type-I processes ==== Type-I processes can be divided into Type I(i) and Type I(ii). Type I(i) involves the transfer of an electron (oxidation) from a substrate molecule to the excited state photosensitiser (Psen*), generating a photosensitiser radical anion (Psen•−) and a substrate radical cation (Subs•+). The majority of the radicals produced from Type-I(i) reactions react instantaneously with molecular oxygen (O2), generating a mixture of oxygen intermediates. For example, the photosensitiser radical anion can react instantaneously with molecular oxygen (3O2) to generate a superoxide radical anion (O2•−), which can go on to produce the highly reactive hydroxyl radical (OH•), initiating a cascade of cytotoxic free radicals; this process is common in the oxidative damage of fatty acids and other lipids. The Type-I(ii) process involves the transfer of a hydrogen atom (reduction) to the excited state photosensitiser (Psen*). This generates free radicals capable of rapidly reacting with molecular oxygen and creating a complex mixture of reactive oxygen intermediates, including reactive peroxides. ==== Type-II processes ==== Type-II processes involve the direct interaction of the excited triplet state photosensitiser (3Psen*) with ground state molecular oxygen (3O2, 3Σg); this is a spin-allowed transition: the excited state photosensitiser and ground state molecular oxygen are of the same spin state (T). When the excited photosensitiser collides with molecular oxygen, a process of triplet-triplet annihilation takes place (3Psen* → 1Psen and 3O2 → 1O2).
This inverts the spin of one of the oxygen molecule's (3O2) outermost antibonding electrons, generating two forms of singlet oxygen (1Δg and 1Σg), while simultaneously depopulating the photosensitiser's excited triplet state (T1 → S0). The higher-energy singlet oxygen state (1Σg, 157 kJ mol−1 above 3Σg) is very short-lived (1Σg ≤ 0.33 milliseconds (methanol), undetectable in H2O/D2O) and rapidly relaxes to the lower-energy excited state (1Δg, 94 kJ mol−1 above 3Σg). It is, therefore, this lower-energy form of singlet oxygen (1Δg) that is implicated in cell injury and cell death. The highly reactive singlet oxygen species (1O2) produced via the Type-II process act near their site of generation, within a radius of approximately 20 nm, with a typical lifetime of approximately 40 nanoseconds in biological systems. It is possible that (over a 6 μs period) singlet oxygen can diffuse up to approximately 300 nm in vivo. Singlet oxygen can theoretically only interact with proximal molecules and structures within this radius. ROS initiate reactions with many biomolecules, including amino acid residues in proteins, such as tryptophan; unsaturated lipids like cholesterol; and nucleic acid bases, particularly guanosine and guanine derivatives, with the latter base more susceptible to ROS. These interactions cause damage and potential destruction to cellular membranes and enzyme deactivation, culminating in cell death. It is probable that in the presence of molecular oxygen and as a direct result of the photoirradiation of the photosensitiser molecule, both Type-I and Type-II pathways play a pivotal role in disrupting cellular mechanisms and cellular structure. Nevertheless, considerable evidence suggests that the Type-II photo-oxygenation process predominates in the induction of cell damage, a consequence of the interaction between the irradiated photosensitiser and molecular oxygen. Cells in vivo may be partially protected against the effects of photodynamic therapy by the presence of singlet oxygen scavengers (such as histidine). Certain skin cells are somewhat resistant to PDT in the absence of molecular oxygen, further supporting the proposal that the Type-II process is at the heart of photoinitiated cell death. The efficiency of Type-II processes is dependent upon the triplet state lifetime (τT) and the triplet quantum yield (ΦT) of the photosensitiser. Both of these parameters have been implicated in phototherapeutic effectiveness, further supporting the distinction between Type-I and Type-II mechanisms. However, the success of a photosensitiser is not exclusively dependent upon a Type-II process. Multiple photosensitisers display excited triplet lifetimes that are too short to permit a Type-II process to occur. For example, the copper-metallated octaethylbenzochlorin photosensitiser has a triplet state lifetime of less than 20 nanoseconds and is still deemed to be an efficient photodynamic agent. == Photosensitizers == Many photosensitizers for PDT exist. They divide into porphyrins, chlorins and dyes. Examples include aminolevulinic acid (ALA), silicon phthalocyanine Pc 4, m-tetrahydroxyphenylchlorin (mTHPC) and mono-L-aspartyl chlorin e6 (NPe6). Photosensitizers commercially available for clinical use include Allumera, Photofrin, Visudyne, Levulan, Foscan, Metvix, Hexvix, Cysview and Laserphyrin, with others in development, e.g. Antrin, Photochlor, Photosens, Photrex, Lumacan, Cevira, Visonac, BF-200 ALA, Amphinex and azadipyrromethenes.
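The roughly 20 nm action radius and roughly 300 nm upper bound quoted above are consistent with a simple three-dimensional random-walk estimate, r ≈ √(6Dτ). A minimal sketch (the diffusion coefficient below is an assumed, order-of-magnitude value for O2 in water, not a figure from the text):

```python
import math

def diffusion_radius_nm(d_m2_s: float, lifetime_s: float) -> float:
    """Root-mean-square 3-D diffusion distance, r = sqrt(6 * D * tau), in nm."""
    return math.sqrt(6.0 * d_m2_s * lifetime_s) * 1e9

D_O2 = 2e-9  # m^2/s, assumed order-of-magnitude diffusivity of O2 in water

print(f"{diffusion_radius_nm(D_O2, 40e-9):.0f} nm")  # ~22 nm for a 40 ns lifetime
print(f"{diffusion_radius_nm(D_O2, 6e-6):.0f} nm")   # ~268 nm over a 6 us window
```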
The major difference between photosensitizers is in the parts of the cell that they target. Unlike in radiation therapy, where damage is done by targeting cell DNA, most photosensitizers target other cell structures. For example, mTHPC localizes in the nuclear envelope. In contrast, ALA localizes in the mitochondria and methylene blue in the lysosomes. === Cyclic tetrapyrrolic chromophores === Cyclic tetrapyrrolic molecules are fluorophores and photosensitisers. Cyclic tetrapyrrolic derivatives have an inherent similarity to the naturally occurring porphyrins present in living matter. ==== Porphyrins ==== Porphyrins are a group of naturally occurring and intensely coloured compounds, whose name is drawn from the Greek word porphura, meaning purple. These molecules perform biologically important roles, including oxygen transport and photosynthesis, and have applications in fields ranging from fluorescent imaging to medicine. Porphyrins are tetrapyrrolic molecules, the heart of whose skeleton is a heterocyclic macrocycle known as porphine. The fundamental porphine frame consists of four pyrrolic sub-units linked on opposing sides (α-positions, numbered 1, 4, 6, 9, 11, 14, 16 and 19) through four methine (CH) bridges (5, 10, 15 and 20), known as the meso-carbon atoms/positions. The resulting conjugated planar macrocycle may be substituted at the meso- and/or β-positions (2, 3, 7, 8, 12, 13, 17 and 18): if the meso- and β-hydrogens are substituted with non-hydrogen atoms or groups, the resulting compounds are known as porphyrins. The inner two protons of a free-base porphyrin can be removed by strong bases such as alkoxides, forming a dianionic molecule; conversely, the inner two pyrrolenine nitrogens can be protonated with acids such as trifluoroacetic acid, affording a dicationic intermediate. The tetradentate anionic species can readily form complexes with most metals. ===== Absorption spectroscopy ===== Porphyrin's highly conjugated skeleton produces a characteristic ultraviolet-visible (UV-VIS) spectrum. The spectrum typically consists of an intense, narrow absorption band (ε > 200000 L⋅mol−1⋅cm−1) at around 400 nm, known as the Soret band or B band, followed by four longer wavelength (450–700 nm), weaker absorptions (ε > 20000 L⋅mol−1⋅cm−1 (free-base porphyrins)) referred to as the Q bands. The Soret band arises from a strong electronic transition from the ground state to the second excited singlet state (S0 → S2), whereas the Q bands result from a weak transition to the first excited singlet state (S0 → S1). The dissipation of energy via internal conversion (IC) is so rapid that fluorescence is only observed from depopulation of the first excited singlet state to the lower-energy ground state (S1 → S0). === Ideal photosensitisers === The key characteristic of a photosensitiser is the ability to preferentially accumulate in diseased tissue and induce a desired biological effect via the generation of cytotoxic species. Specific criteria:
Strong absorption with a high extinction coefficient in the red/near-infrared region of the electromagnetic spectrum (600–850 nm), allowing deeper tissue penetration. (Tissue is much more transparent at longer wavelengths (~700–850 nm). Longer wavelengths allow the light to penetrate deeper and treat larger structures.)
Suitable photophysical characteristics: a high quantum yield of triplet formation (ΦT ≥ 0.5); a high singlet oxygen quantum yield (ΦΔ ≥ 0.5); a relatively long triplet state lifetime (τT, μs range); and a high triplet-state energy (≥ 94 kJ mol−1). Values of ΦT = 0.83 and ΦΔ = 0.65 (haematoporphyrin); ΦT = 0.83 and ΦΔ = 0.72 (etiopurpurin); and ΦT = 0.96 and ΦΔ = 0.82 (tin etiopurpurin) have been achieved.
Low dark toxicity and negligible cytotoxicity in the absence of light. (The photosensitizer should not be harmful to the target tissue until the treatment beam is applied.)
Preferential accumulation in diseased/target tissue over healthy tissue.
Rapid clearance from the body post-procedure.
High chemical stability: single, well-characterised compounds, with a known and constant composition.
A short and high-yielding synthetic route (with easy translation into multi-gram scales/reactions).
A simple and stable formulation.
Solubility in biological media, allowing intravenous administration. Otherwise, a hydrophilic delivery system must enable efficient and effective transportation of the photosensitiser to the target site via the bloodstream.
Low photobleaching, to prevent degradation of the photosensitizer so it can continue producing singlet oxygen.
Natural fluorescence. (Many optical dosimetry techniques, such as fluorescence spectroscopy, depend on fluorescence.)
=== First generation === ==== Porfimer sodium ==== Porfimer sodium is a drug used to treat some types of cancer. When absorbed by cancer cells and exposed to light, porfimer sodium becomes active and kills the cancer cells. It is a type of photodynamic therapy agent, also called Photofrin. PDT was first discovered more than a century ago in Germany, but it was not until Thomas Dougherty's work that PDT became more mainstream. Prior to Dougherty, researchers had explored ways of using light-sensitive compounds to treat disease. Dougherty successfully treated cancer with PDT in preclinical models in 1975. Three years later, he conducted the first controlled clinical study in humans. In 1994, the FDA approved PDT with the photosensitizer porfimer sodium for palliative treatment of advanced esophageal cancer, specifically the palliation of patients with completely or partially obstructing esophageal cancer. Porfimer sodium is also FDA-approved for the treatment of some types of lung cancer, more specifically for the treatment of microinvasive endobronchial non-small-cell lung cancer (NSCLC) in patients for whom surgery and radiotherapy are not indicated, and is also FDA-approved in the US for high-grade dysplasia in Barrett's esophagus. Disadvantages associated with first generation photosensitisers included skin photosensitivity and weak absorption at 630 nm; these properties permitted some therapeutic use but markedly limited application to the wider field of disease. Second generation photosensitisers were key to the development of photodynamic therapy. === Second generation === ==== 5-Aminolaevulinic acid ==== 5-Aminolaevulinic acid (ALA) is a prodrug used to treat and image multiple superficial cancers and tumours. ALA is a key precursor in the biosynthesis of the naturally occurring porphyrin, haem. Haem is synthesised in every energy-producing cell in the body and is a key structural component of haemoglobin, myoglobin and other haemproteins. The immediate precursor to haem is protoporphyrin IX (PPIX), an effective photosensitiser.
Haem itself is not a photosensitiser, due to the coordination of a paramagnetic ion in the centre of the macrocycle, which causes a significant reduction in excited state lifetimes. The haem molecule is synthesised from glycine and succinyl coenzyme A (succinyl CoA). The rate-limiting step in the biosynthesis pathway is controlled by a tight (negative) feedback mechanism in which the concentration of haem regulates the production of ALA. However, this controlled feedback can be by-passed by artificially adding excess exogenous ALA to cells. The cells respond by producing PPIX (the photosensitiser) at a faster rate than the ferrochelatase enzyme can convert it to haem. ALA, marketed as Levulan, has shown promise in photodynamic therapy (tumours) via both intravenous and oral administration, as well as through topical administration in the treatment of malignant and non-malignant dermatological conditions, including psoriasis, Bowen's disease and hirsutism (Phase II/III clinical trials). ALA accumulates more rapidly in comparison to other intravenously administered sensitisers. Typical peak tumour accumulation levels post-administration for PPIX are usually achieved within several hours; other (intravenous) photosensitisers may take up to 96 hours to reach peak levels. ALA is also excreted more rapidly from the body (~24 hours) than other photosensitisers, minimising photosensitivity side effects. Esterified ALA derivatives with improved bioavailability have been examined. A methyl ALA ester (Metvix) is now available for basal cell carcinoma and other skin lesions. Benzyl (Benvix) and hexyl ester (Hexvix) derivatives are used for gastrointestinal cancers and for the diagnosis of bladder cancer. ==== Verteporfin ==== Benzoporphyrin derivative monoacid ring A (BPD-MA), marketed as Visudyne (Verteporfin, for injection), has been approved by health authorities in multiple jurisdictions, including the US FDA, for the treatment of wet AMD beginning in 1999. It has also undergone Phase III clinical trials (USA) for the treatment of cutaneous non-melanoma skin cancer. The chromophore of BPD-MA has a red-shifted and intensified long-wavelength absorption maximum at approximately 690 nm. Tissue penetration by light at this wavelength is 50% greater than that achieved for Photofrin (λmax. = 630 nm). Verteporfin has further advantages over the first generation sensitiser Photofrin. It is rapidly absorbed by the tumour (optimal tumour-normal tissue ratio 30–150 minutes post-intravenous injection) and is rapidly cleared from the body, minimising patient photosensitivity (1–2 days). ==== Purlytin ==== The chlorin photosensitiser tin etiopurpurin is marketed as Purlytin. Purlytin has undergone Phase II clinical trials for cutaneous metastatic breast cancer and Kaposi's sarcoma in patients with AIDS (acquired immunodeficiency syndrome). Purlytin has been used successfully to treat the non-malignant conditions psoriasis and restenosis. Chlorins are distinguished from the parent porphyrins by a reduced exocyclic double bond, decreasing the symmetry of the conjugated macrocycle. This leads to increased absorption in the long-wavelength portion of the visible region of the electromagnetic spectrum (650–680 nm). Purlytin is a purpurin, a degradation product of chlorophyll. Purlytin has a tin atom chelated in its central cavity that causes a red-shift of approximately 20–30 nm (with respect to Photofrin and non-metallated etiopurpurin, λmax.SnEt2 = 650 nm).
Purlytin has been reported to localise in skin and produce a photoreaction 7–14 days post-administration. ==== Foscan ==== Tetra(m-hydroxyphenyl)chlorin (mTHPC) is in clinical trials for head and neck cancers under the trade name Foscan. It has also been investigated in clinical trials for gastric and pancreatic cancers, hyperplasia, field sterilisation after cancer surgery and for the control of antibiotic-resistant bacteria. Foscan has a singlet oxygen quantum yield comparable to other chlorin photosensitisers but requires lower drug and light doses (it is approximately 100 times more photoactive than Photofrin). Foscan can render patients photosensitive for up to 20 days after initial illumination. ==== Lutex ==== Lutetium texaphyrin, marketed under the trade names Lutex and Lutrin, is a large porphyrin-like molecule. Texaphyrins are expanded porphyrins that have a penta-aza core. It offers strong absorption in the 730–770 nm region. Tissue transparency is optimal in this range. As a result, Lutex-based PDT can (potentially) be carried out more effectively at greater depths and on larger tumours. Lutex has entered Phase II clinical trials for evaluation against breast cancer and malignant melanomas. A Lutex derivative, Antrin, has undergone Phase I clinical trials for the prevention of restenosis of vessels after cardiac angioplasty, by photoinactivating foam cells that accumulate within arteriolar plaques. A second Lutex derivative, Optrin, is in Phase I trials for AMD. Texaphyrins also have potential as radiosensitisers (Xcytrin) and chemosensitisers. Xcytrin, a gadolinium texaphyrin (motexafin gadolinium), has been evaluated in Phase III clinical trials against brain metastases and in Phase I clinical trials for primary brain tumours. ==== ATMPn ==== 9-Acetoxy-2,7,12,17-tetrakis-(β-methoxyethyl)-porphycene has been evaluated as an agent for dermatological applications against psoriasis vulgaris and superficial non-melanoma skin cancer. ==== Zinc phthalocyanine ==== A liposomal formulation of zinc phthalocyanine (CGP55847) has undergone clinical trials (Phase I/II, Switzerland) against squamous cell carcinomas of the upper aerodigestive tract. Phthalocyanines (PCs) are related to tetra-aza porphyrins. Instead of four bridging carbon atoms at the meso-positions, as for the porphyrins, PCs have four nitrogen atoms linking the pyrrolic sub-units. PCs also have an extended conjugation pathway: a benzene ring is fused to the β-positions of each of the four pyrrolic sub-units. These rings strengthen the absorption of the chromophore at longer wavelengths (with respect to porphyrins). The absorption band of PCs is almost two orders of magnitude stronger than the highest Q band of haematoporphyrin. These favourable characteristics, along with the ability to selectively functionalise their peripheral structure, make PCs favourable photosensitiser candidates. A sulphonated aluminium PC derivative (Photosense) has entered clinical trials (Russia) against skin, breast and lung malignancies and cancer of the gastrointestinal tract. Sulphonation significantly increases PC solubility in polar solvents, including water, circumventing the need for alternative delivery vehicles. Pc 4 is a silicon complex under investigation for the sterilisation of blood components and for use against human colon, breast and ovarian cancers and against glioma. A shortcoming of many of the metallo-PCs is their tendency to aggregate in aqueous buffer (pH 7.4), resulting in a decrease, or total loss, of their photochemical activity.
This behaviour can be minimised in the presence of detergents. Metallated cationic porphyrazines (PZ), including PdPZ+, CuPZ+, CdPZ+, MgPZ+, AlPZ+ and GaPZ+, have been tested in vitro on V-79 (Chinese hamster lung fibroblast) cells. These photosensitisers display substantial dark toxicity. ==== Naphthalocyanines ==== Naphthalocyanines (NCs) are an extended PC derivative. They have an additional benzene ring attached to each isoindole sub-unit on the periphery of the PC structure. Consequently, NCs absorb strongly at even longer wavelengths (approximately 740–780 nm) than PCs (670–780 nm). This absorption in the near-infrared region makes NCs candidates for highly pigmented tumours, including melanomas, which present significant absorption problems for visible light. However, problems associated with NC photosensitisers include lower stability, as they decompose in the presence of light and oxygen. Metallo-NCs, which lack axial ligands, have a tendency to form H-aggregates in solution. These aggregates are photoinactive, thus compromising the photodynamic efficacy of NCs. Silicon naphthalocyanine attached to the copolymer PEG-PCL (poly(ethylene glycol)-block-poly(ε-caprolactone)) accumulates selectively in cancer cells and reaches a maximum concentration after about one day. The compound provides real-time near-infrared (NIR) fluorescence imaging, with an extinction coefficient of 2.8 × 105 M−1 cm−1, and combinatorial phototherapy with dual photothermal and photodynamic therapeutic mechanisms that may be appropriate for adriamycin-resistant tumors. The particles had a hydrodynamic size of 37.66 ± 0.26 nm (polydispersity index = 0.06) and a surface charge of −2.76 ± 1.83 mV. ==== Functional groups ==== Altering the peripheral functionality of porphyrin-type chromophores can affect photodynamic activity. Diamino platinum porphyrins show high anti-tumour activity, demonstrating the combined effect of the cytotoxicity of the platinum complex and the photodynamic activity of the porphyrin species. Positively charged PC derivatives, including zinc and copper cationic derivatives, have been investigated; cationic species are believed to selectively localise in the mitochondria. The positively charged zinc-complexed PC is less photodynamically active than its neutral counterpart in vitro against V-79 cells. Water-soluble cationic porphyrins bearing nitrophenyl, aminophenyl, hydroxyphenyl and/or pyridiniumyl functional groups exhibit varying cytotoxicity to cancer cells in vitro, depending on the nature of the metal ion (Mn, Fe, Zn, Ni) and on the number and type of functional groups. The manganese pyridiniumyl derivative has shown the highest photodynamic activity, while the nickel analogue is photoinactive. Another metallo-porphyrin complex, the iron chelate, is more photoactive (towards HIV and simian immunodeficiency virus in MT-4 cells) than the manganese complexes; the zinc derivative is photoinactive. Hydrophilic sulphonated porphyrin and PC compounds (AlPorphyrin and AlPC) were tested for photodynamic activity. The disulphonated analogues (with adjacent substituted sulphonated groups) exhibited greater photodynamic activity than their di-(symmetrical), mono-, tri- and tetra-sulphonated counterparts; anti-tumour activity increased with increasing degree of sulphonation. === Third generation === Many photosensitisers are poorly soluble in aqueous media, particularly at physiological pH, limiting their use.
Alternative delivery strategies range from the use of oil-in-water (o/w) emulsions to carrier vehicles such as liposomes and nanoparticles. Although these systems may increase therapeutic effects, the carrier system may inadvertently decrease the "observed" singlet oxygen quantum yield (ΦΔ): the singlet oxygen generated by the photosensitiser must diffuse out of the carrier system, and since singlet oxygen is believed to have a narrow radius of action, it may not reach the target cells. The carrier may also limit light absorption, reducing singlet oxygen yield. Another alternative that does not display these problems is the use of targeting moieties; strategies include directly attaching photosensitisers to biologically active molecules such as antibodies. ==== Metallation ==== Various metals form complexes with photosensitiser macrocycles. Multiple second generation photosensitisers contain a chelated central metal ion. The main candidates are transition metals, although photosensitisers co-ordinated to group 13 (Al, AlPcS4) and group 14 (Si, SiNC and Sn, SnEt2) metals have been synthesised. The metal ion does not confer definite photoactivity on the complex. Copper(II), cobalt(II), iron(II) and zinc(II) complexes of Hp are all photoinactive, in contrast to metal-free porphyrins. However, for texaphyrin and PC photosensitisers only the metallo-complexes have demonstrated efficient photosensitisation. The central metal ion, bound by a number of photosensitisers, strongly influences the photophysical properties of the photosensitiser. Chelation of paramagnetic metals to a PC chromophore appears to shorten triplet lifetimes (down to the nanosecond range), generating variations in the triplet quantum yield and triplet lifetime of the photoexcited triplet state. Certain heavy metals are known to enhance intersystem crossing (ISC). Generally, diamagnetic metals promote ISC and give a long triplet lifetime. In contrast, paramagnetic species deactivate excited states, reducing the excited-state lifetime and preventing photochemical reactions. However, exceptions to this generalisation include copper octaethylbenzochlorin. Many metallated paramagnetic texaphyrin species exhibit triplet-state lifetimes in the nanosecond range. These results are mirrored by metallated PCs. PCs metallated with diamagnetic ions, such as Zn2+, Al3+ and Ga3+, generally yield photosensitisers with desirable quantum yields and lifetimes (ΦT 0.56, 0.50 and 0.34 and τT 187, 126 and 35 μs, respectively). The photosensitiser ZnPcS4 has a singlet oxygen quantum yield of 0.70, nearly twice that of most other mPCs (ΦΔ at least 0.40). ==== Expanded metallo-porphyrins ==== Expanded porphyrins have a larger central binding cavity, increasing the range of potential metals. Diamagnetic metallo-texaphyrins have shown favourable photophysical properties: high triplet quantum yields and efficient generation of singlet oxygen. In particular, the zinc and cadmium derivatives display triplet quantum yields close to unity. In contrast, the paramagnetic metallo-texaphyrins, Mn-Tex, Sm-Tex and Eu-Tex, have undetectable triplet quantum yields. This behaviour parallels that observed for the corresponding metallo-porphyrins. The cadmium-texaphyrin derivative has shown in vitro photodynamic activity against human leukemia cells and Gram-positive (Staphylococcus) and Gram-negative (Escherichia coli) bacteria. However, follow-up studies with this photosensitiser have been limited, due to the toxicity of the complexed cadmium ion.
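The quantum yields and lifetimes quoted throughout this section follow from a competing-rates picture: ΦT is the fraction of excited singlets that cross to the triplet, and ΦΔ is approximately ΦT multiplied by the fraction of triplets quenched by oxygen. A minimal kinetic sketch (every rate constant and concentration below is an invented, order-of-magnitude placeholder, not a measured value for any photosensitiser named here):

```python
def triplet_yield(k_fl: float, k_ic: float, k_isc: float) -> float:
    """Phi_T = k_isc / (k_fl + k_ic + k_isc): ISC competing with fluorescence
    and internal conversion out of the S1 state."""
    return k_isc / (k_fl + k_ic + k_isc)

def singlet_oxygen_yield(phi_t: float, k_q: float, o2: float,
                         k_t0: float, s_delta: float) -> float:
    """Phi_Delta ~= Phi_T * (fraction of triplets quenched by O2) * S_Delta,
    where k_t0 is the triplet decay rate in the absence of oxygen."""
    quenched = k_q * o2 / (k_q * o2 + k_t0)
    return phi_t * quenched * s_delta

# Placeholder values: unimolecular rates in s^-1, k_q in M^-1 s^-1, [O2] in M.
phi_t = triplet_yield(k_fl=5e7, k_ic=2e7, k_isc=3e8)     # -> ~0.81
phi_d = singlet_oxygen_yield(phi_t, k_q=2e9, o2=2.7e-4,  # ~air-saturated water
                             k_t0=1e4, s_delta=0.8)      # -> ~0.64
print(f"Phi_T = {phi_t:.2f}, Phi_Delta = {phi_d:.2f}")
```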
A zinc-metallated seco-porphyrazine has a high singlet oxygen quantum yield (ΦΔ 0.74). This expanded porphyrin-like photosensitiser has shown the best singlet oxygen photosensitising ability of any of the reported seco-porphyrazines. Platinum and palladium derivatives have been synthesised with singlet oxygen quantum yields of 0.59 and 0.54, respectively. ==== Metallochlorins/bacteriochlorins ==== The tin(IV) purpurins are more active against human cancers than analogous zinc(II) purpurins. Sulphonated benzochlorin derivatives demonstrated a reduced phototherapeutic response against murine leukemia L1210 cells in vitro and transplanted urothelial cell carcinoma in rats, whereas the tin(IV) metallated benzochlorins exhibited an increased photodynamic effect in the same tumour model. Copper octaethylbenzochlorin demonstrated greater photoactivity towards leukemia cells in vitro and in a rat bladder tumour model. Its activity may derive from interactions between the cationic iminium group and biomolecules. Such interactions may allow electron-transfer reactions to take place via the short-lived excited singlet state and lead to the formation of radicals and radical ions. The copper-free derivative exhibited a tumour response with short intervals between drug administration and photodynamic activity. Increased in vivo activity was observed with the zinc benzochlorin analogue. ==== Metallo-phthalocyanines ==== PC properties are strongly influenced by the central metal ion. Co-ordination of transition metal ions gives metallo-complexes with short triplet lifetimes (nanosecond range), resulting in different triplet quantum yields and lifetimes (with respect to the non-metallated analogues). Diamagnetic metals such as zinc, aluminium and gallium generate metallo-phthalocyanines (MPC) with high triplet quantum yields (ΦT ≥ 0.4), long lifetimes (ZnPcS4 τT = 490 μs and AlPcS4 τT = 400 μs) and high singlet oxygen quantum yields (ΦΔ ≥ 0.7). As a result, ZnPc and AlPc have been evaluated as second generation photosensitisers active against certain tumours. ==== Metallo-naphthocyaninesulfobenzo-porphyrazines (M-NSBP) ==== Aluminium (Al3+) has been successfully coordinated to M-NSBP. The resulting complex showed photodynamic activity against EMT-6 tumour-bearing Balb/c mice (the disulphonated analogue demonstrated greater photoactivity than the mono-derivative). ==== Metallo-naphthalocyanines ==== Work with zinc NCs bearing various amido substituents revealed that the best phototherapeutic response (against Lewis lung carcinoma in mice) was obtained with a tetrabenzamido analogue. Complexes of silicon(IV) NCs with two axial ligands have been studied, in the anticipation that the ligands would minimise aggregation. Among disubstituted analogues investigated as potential photodynamic agents, a siloxane NC substituted with two methoxyethyleneglycol ligands is an efficient photosensitiser against Lewis lung carcinoma in mice. SiNC[OSi(i-Bu)2-n-C18H37]2 is effective against Balb/c mice MS-2 fibrosarcoma cells. Siloxane NCs may be efficacious photosensitisers against EMT-6 tumours in Balb/c mice. The ability of metallo-NC derivatives (AlNc) to generate singlet oxygen is weaker than that of the analogous (sulphonated) metallo-PCs (AlPC), reportedly 1.6–3 orders of magnitude less. In porphyrin systems, the zinc ion (Zn2+) appears to hinder the photodynamic activity of the compound. By contrast, in the higher/expanded π-systems, zinc-chelated dyes form complexes with good to high photodynamic activity.
An extensive study of metallated texaphyrins, focused on lanthanide(III) and related metal ions (Y, In, Lu, Cd, Nd, Sm, Eu, Gd, Tb, Dy, Ho, Er, Tm and Yb), found that when diamagnetic Lu(III) was complexed to texaphyrin, an effective photosensitiser (Lutex) was generated. However, substituting the paramagnetic Gd(III) ion for the Lu metal gave a complex that exhibited no photodynamic activity. The study found a correlation between the excited-singlet and triplet state lifetimes and the rate of ISC of the diamagnetic texaphyrin complexes (Y(III), In(III) and Lu(III)) and the atomic number of the cation. Paramagnetic metallo-texaphyrins displayed rapid ISC. Triplet lifetimes were strongly affected by the choice of metal ion. The diamagnetic ions (Y, In and Lu) displayed triplet lifetimes of 187, 126 and 35 μs, respectively. Comparable lifetimes for the paramagnetic species (Eu-Tex 6.98, Gd-Tex 1.11, Tb-Tex < 0.2, Dy-Tex 0.44 × 10−3, Ho-Tex 0.85 × 10−3, Er-Tex 0.76 × 10−3, Tm-Tex 0.12 × 10−3 and Yb-Tex 0.46, all in μs) were obtained. The measured paramagnetic complexes had significantly shorter lifetimes than the diamagnetic metallo-texaphyrins. In general, singlet oxygen quantum yields closely followed the triplet quantum yields. The various diamagnetic and paramagnetic texaphyrins investigated display distinct photophysical behaviour with respect to a complex's magnetism. The diamagnetic complexes were characterised by relatively high fluorescence quantum yields, excited-singlet and triplet lifetimes and singlet oxygen quantum yields, in distinct contrast to the paramagnetic species. The +2 charged diamagnetic species appeared to exhibit a direct relationship between their fluorescence quantum yields, excited state lifetimes, rate of ISC and the atomic number of the metal ion. The greatest diamagnetic ISC rate was observed for Lu-Tex, a result ascribed to the heavy atom effect. The heavy atom effect also held for the Y-Tex, In-Tex and Lu-Tex triplet quantum yields and lifetimes. The triplet quantum yields and lifetimes both decreased with increasing atomic number. The singlet oxygen quantum yield correlated with this observation. Photophysical properties displayed by paramagnetic species were more complex. The observed behaviour was not correlated with the number of unpaired electrons located on the metal ion. For example, ISC rates and the fluorescence lifetimes gradually decreased with increasing atomic number, and the Gd-Tex and Tb-Tex chromophores showed (despite more unpaired electrons) slower rates of ISC and longer lifetimes than Ho-Tex or Dy-Tex. To achieve selective target cell destruction while protecting normal tissues, either the photosensitizer can be applied locally to the target area, or targets can be locally illuminated. Skin conditions, including acne, psoriasis and also skin cancers, can be treated topically and locally illuminated. For internal tissues and cancers, intravenously administered photosensitizers can be illuminated using endoscopes and fiber optic catheters. Photosensitizers can target viral and microbial species, including HIV and MRSA. Using PDT, pathogens present in samples of blood and bone marrow can be decontaminated before the samples are used further for transfusions or transplants. PDT can also eradicate a wide variety of pathogens of the skin and of the oral cavities. Given how serious a problem drug-resistant pathogens have become, there is increasing research into PDT as a new antimicrobial therapy.
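Across all of these applications the delivered light dose is commonly specified as a fluence in J/cm2, the product of irradiance and exposure time. A minimal sketch of that arithmetic (the irradiance and fluence values are illustrative choices, not clinical parameters; 630 nm is the Photofrin activation wavelength mentioned elsewhere in this article):

```python
def exposure_time_s(fluence_j_cm2: float, irradiance_mw_cm2: float) -> float:
    """Time to deliver a light dose: fluence (J/cm^2) / irradiance (W/cm^2)."""
    return fluence_j_cm2 / (irradiance_mw_cm2 / 1000.0)

def photon_energy_ev(wavelength_nm: float) -> float:
    """Photon energy E = hc / lambda, via the shortcut E[eV] ~ 1239.8 / lambda[nm]."""
    return 1239.8 / wavelength_nm

# Illustrative: 100 J/cm^2 delivered at 100 mW/cm^2 takes 1000 s (~17 minutes).
print(f"{exposure_time_s(100.0, 100.0):.0f} s")
# Red light at 630 nm carries about 1.97 eV per photon.
print(f"{photon_energy_ev(630.0):.2f} eV")
```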
== Applications == === Acne === PDT is currently in clinical trials as a treatment for severe acne. Initial results have shown it to be effective, but only as a treatment for severe acne. A systematic review conducted in 2016 found that PDT is a "safe and effective method of treatment" for acne. The treatment may cause severe redness and moderate to severe pain and a burning sensation in some people. (See also: Levulan.) In one phase II trial, although PDT produced improvement, it was not superior to blue/violet light alone. === Cancer === The FDA has approved photodynamic therapy to treat actinic keratosis, advanced cutaneous T-cell lymphoma, Barrett esophagus, basal cell skin cancer, esophageal cancer, non-small cell lung cancer, and squamous cell skin cancer (Stage 0). Photodynamic therapy is also used to relieve symptoms of some cancers, including esophageal cancer when it blocks the throat and non-small cell lung cancer when it blocks the airways. When cells that have absorbed photosensitizers are exposed to a specific wavelength of light, the photosensitizer produces a form of oxygen, called an oxygen radical, that kills them. Photodynamic therapy (PDT) may also damage blood vessels in the tumor, which prevents it from receiving the blood it needs to keep growing. PDT may trigger the immune system to attack tumor cells, even in other areas of the body. PDT is a minimally invasive treatment that is used to treat many conditions, including acne, psoriasis, age-related macular degeneration, and several cancers such as skin, lung, brain, mesothelioma, bladder, bile-duct, esophageal, and head and neck cancers. In February 2019, medical scientists announced that iridium attached to albumin, creating a photosensitized molecule, can penetrate cancer cells and, after being irradiated with light, destroy the cancer cells. ==== Photoimmunotherapy ==== Photoimmunotherapy is an oncological treatment for various cancers that combines photodynamic therapy of the tumor with immunotherapy treatment. Combining photodynamic therapy with immunotherapy enhances the immunostimulating response and has synergistic effects for metastatic cancer treatment. ==== Vascular targeting ==== Some photosensitisers naturally accumulate in the endothelial cells of vascular tissue, allowing 'vascular targeted' PDT. Verteporfin was shown to target the neovasculature resulting from macular degeneration in the macula within the first thirty minutes after intravenous administration of the drug. Compared to normal tissues, most types of cancers are especially active in both the uptake and accumulation of photosensitizer agents, which makes cancers especially vulnerable to PDT; photosensitizers can also have a high affinity for vascular endothelial cells. === Ophthalmology === As cited above, verteporfin was widely approved for the treatment of wet age-related macular degeneration beginning in 1999. The drug targets the neovasculature that is caused by the condition. === Antimicrobial effects === Photodynamic skin disinfection is effective at killing topical microbes, including drug-resistant bacteria, viruses, and fungi. Photodynamic disinfection remains effective after repeat treatments, with no evidence of resistance formation. The method can effectively treat polymicrobial antibiotic-resistant Pseudomonas aeruginosa and methicillin-resistant Staphylococcus aureus biofilms in a maxillary sinus cavity model. == History == === Modern era ===
In the late nineteenth century, Niels Finsen successfully demonstrated phototherapy by employing heat-filtered light from a carbon-arc lamp (the "Finsen lamp") in the treatment of a tubercular condition of the skin known as lupus vulgaris, for which he won the 1903 Nobel Prize in Physiology or Medicine. In 1913 the German scientist Friedrich Meyer-Betz described the major stumbling block of photodynamic therapy. After injecting himself with haematoporphyrin (Hp, a photosensitiser), he swiftly experienced a general skin sensitivity upon exposure to sunlight, a recurrent problem with many photosensitisers. The first evidence that agents (photosensitive synthetic dyes), in combination with a light source and oxygen, could have a potential therapeutic effect emerged at the turn of the 20th century in the laboratory of Hermann von Tappeiner in Munich, Germany. Germany was leading the world in industrial dye synthesis at the time. While studying the effects of acridine on paramecia cultures, Oscar Raab, a student of von Tappeiner, observed a toxic effect. Fortuitously, Raab also observed that light was required to kill the paramecia. Subsequent work in von Tappeiner's laboratory showed that oxygen was essential for the 'photodynamic action', a term coined by von Tappeiner. Von Tappeiner and colleagues performed the first PDT trial in patients with skin carcinoma using the photosensitizer eosin. Of six patients with a facial basal cell carcinoma, treated with a 1% eosin solution and long-term exposure either to sunlight or arc-lamp light, four showed total tumour resolution and a relapse-free period of 12 months. In 1924 Policard revealed the diagnostic capabilities of hematoporphyrin fluorescence when he observed that ultraviolet radiation excited red fluorescence in the sarcomas of laboratory rats. Policard hypothesized that the fluorescence was associated with endogenous hematoporphyrin accumulation. In 1948 Figge and co-workers showed in laboratory animals that porphyrins exhibit a preferential affinity for rapidly dividing cells, including malignant, embryonic and regenerative cells. They proposed that porphyrins could be used to treat cancer. The photosensitizer haematoporphyrin derivative (HpD) was first characterised in 1960 by Lipson. Lipson sought a diagnostic agent suitable for tumor detection. HpD allowed Lipson to pioneer the use of endoscopes and HpD fluorescence. HpD is a porphyrin species derived from haematoporphyrin. Porphyrins have long been considered suitable agents for tumour photodiagnosis and tumour PDT because cancerous cells exhibit significantly greater uptake of, and affinity for, porphyrins compared to normal tissues; this had been observed by other researchers prior to Lipson. Thomas Dougherty and co-workers at Roswell Park Comprehensive Cancer Center in Buffalo, New York, clinically tested PDT in 1978. They treated 113 cutaneous or subcutaneous malignant tumors with HpD and observed total or partial resolution of 111 tumors. Dougherty helped expand clinical trials and formed the International Photodynamic Association in 1986. John Toth, product manager for Cooper Medical Devices Corp/Cooper Lasersonics, noticed the "photodynamic chemical effect" of the therapy and wrote the first white paper naming the therapy "Photodynamic Therapy" (PDT), used with early clinical argon dye lasers, circa 1981. The company set up 10 clinical sites in Japan, where the term "radiation" had negative connotations.
HpD, under the brand name Photofrin, was the first PDT agent approved for clinical use, in 1993, to treat a form of bladder cancer in Canada. Over the next decade, both PDT and the use of HpD received international attention and greater clinical acceptance, and led to the first PDT treatments approved by the U.S. Food and Drug Administration, Japan and parts of Europe for use against certain cancers of the oesophagus and non-small cell lung cancer. Photofrin had the disadvantages of prolonged patient photosensitivity and weak long-wavelength absorption (630 nm). This led to the development of second generation photosensitisers, including Verteporfin (a benzoporphyrin derivative, also known as Visudyne) and, more recently, third generation targetable photosensitisers, such as antibody-directed photosensitisers. In the 1980s, David Dolphin, Julia Levy and colleagues developed a novel photosensitizer, verteporfin. Verteporfin, a porphyrin derivative, is activated at 690 nm, a much longer wavelength than Photofrin. It has the property of preferential uptake by neovasculature. It has been widely tested for its use in treating skin cancers and received FDA approval in 2000 for the treatment of wet age-related macular degeneration. As such, it was the first medical treatment ever approved for this condition, which is a major cause of vision loss. In 1990, Mironov and coworkers in Russia pioneered a photosensitizer called Photogem which, like HpD, was derived from haematoporphyrin. Photogem was approved by the Ministry of Health of Russia and tested clinically from February 1992 to 1996. A pronounced therapeutic effect was observed in 91 percent of the 1500 patients: 62 percent had total tumor resolution, and a further 29 percent had >50% tumor shrinkage. Among patients diagnosed early, 92 percent experienced complete resolution. Russian scientists collaborated with NASA scientists, who were looking at the use of LEDs as more suitable light sources, compared to lasers, for PDT applications. Since 1990, the Chinese have been developing clinical expertise with PDT, using domestically produced photosensitizers derived from haematoporphyrin. China is notable for its expertise in resolving difficult-to-reach tumours. == Miscellany == PUVA therapy uses psoralen as the photosensitiser and UVA ultraviolet light as the light source, but this form of therapy is usually classified as a separate form of therapy from photodynamic therapy. To allow treatment of deeper tumours, some researchers are using internal chemiluminescence to activate the photosensitiser. == See also ==
Antimicrobial photodynamic therapy
Blood irradiation therapy
Laser medicine
Light Harvesting Materials
Photoimmunotherapy
Photomedicine
Photopharmacology
Photostatin
Sonodynamic therapy
Photosensitizer
Nanodumbbells, being studied for possible use in photodynamic therapy
Neurotherapy
== References == == External links == International Photodynamic Association Photodynamic Therapy for Cancer from the NCI
Wikipedia/Photodynamic_therapy
Tomotherapy is a type of radiation therapy treatment machine. In tomotherapy a thin radiation beam is modulated as it rotates around the patient, while the patient is moved through the bore of the machine. The name comes from the use of a strip-shaped beam, so that only one "slice" (Greek prefix "tomo-") of the target is exposed at any one time by the radiation. The external appearance of the system, and the movement of the radiation source and patient, can be considered analogous to a CT scanner (computed tomography), which uses lower doses of radiation for imaging. Like a conventional machine used for X-ray external beam radiotherapy (often referred to as a linear accelerator, or linac, after its main component), a tomotherapy machine uses a linac to generate the radiation beam, but the external appearance of the machine, patient positioning, and treatment delivery differ. Conventional linacs do not work on a slice-by-slice basis but typically have a large-area beam, which can also be resized and modulated. == General principles == The treatment field's length (the width of the radiation slice) is adjustable using collimator jaws. In static-jaw delivery, the field length remains constant during a treatment. In dynamic-jaw delivery, the field length changes so that it begins and ends at its minimum setting. Tomotherapy treatment times vary compared to conventional radiation therapy treatment times; they can be as low as 6.5 minutes for a common prostate treatment, excluding extra time for imaging. Modern tomotherapy and conventional linac systems incorporate one or both of megavoltage X-ray or kilovoltage X-ray imaging systems, enabling image-guided radiation therapy (IGRT). In tomotherapy, images are acquired in a very similar manner to a CT scanner, thanks to their closely related design. There are few head-to-head comparisons of tomotherapy and other IMRT techniques; however, there is some evidence that a conventional linac using VMAT can provide faster treatment, whereas tomotherapy is better able to spare surrounding healthy tissue while delivering a uniform dose. === Helical delivery === In helical tomotherapy, the linac rotates on its gantry at a constant speed while the beam is delivered, so that from the patient's perspective, the shape traced out by the linac is helical. While helical tomotherapy can treat very long volumes without a need to abut fields in the longitudinal direction, it does display a distinct artifact due to the "thread effect" when treating non-central tumors. The thread effect can be suppressed during planning through good pitch selection. === Fixed-angle delivery === Fixed-angle tomotherapy uses multiple tomotherapy beams, each delivered from a separate fixed gantry angle, in which only the couch moves during beam delivery. This is branded as TomoDirect, but has also been called topotherapy. The technology enables fixed-beam treatments by moving the patient through the machine bore while maintaining specified beam angles. == Clinical considerations == Lung cancer, head and neck tumors, breast cancer, prostate cancer, stereotactic radiosurgery (SRS) and stereotactic body radiotherapy (SBRT) are some examples of treatments commonly performed using tomotherapy. In general, radiation therapy (or radiotherapy) has developed with a strong reliance on homogeneity of dose throughout the tumor. Tomotherapy embodies the sequential delivery of radiation to different parts of the tumor, which raises two important issues.
The first issue is that this sequential, slice-by-slice approach requires "field matching": adjacent fields must join precisely, which brings with it the possibility of a less-than-perfect match, with a resultant hot and/or cold spot within the tumor. The second issue is that if the patient or tumor moves during this sequential delivery, then again a hot or cold spot will result. The first problem is reduced by use of a helical motion, as in spiral computed tomography. Some research has suggested tomotherapy provides more conformal treatment plans and decreased acute toxicity. Non-helical static-beam techniques such as IMRT and TomoDirect are well suited to whole-breast radiation therapy. These treatment modes avoid the low-dose integral splay (a broad bath of low dose spread across surrounding tissue) and the long treatment times associated with helical approaches by confining dose delivery to tangential angles. The risk posed by this low-dose spread is accentuated in younger patients with early-stage breast cancer, where cure rates are high and life expectancy is substantial. Static beam-angle approaches aim to maximize the therapeutic ratio by ensuring that the tumor control probability (TCP) significantly outweighs the associated normal tissue complication probability (NTCP). == History == The tomotherapy technique was developed in the early 1990s at the University of Wisconsin–Madison by Professor Thomas Rockwell Mackie and Paul Reckwerdt. A small megavoltage X-ray source was mounted in a similar fashion to a CT X-ray source, and the geometry provided the opportunity to obtain CT images of the body in the treatment setup position. Although original plans were to include kilovoltage CT imaging, current models use megavoltage energies. With this combination, the unit was one of the first devices capable of providing modern image-guided radiation therapy (IGRT). The first implementation of tomotherapy was the Corvus system developed by Nomos Corporation, with the first patient treated in April 1994. This was the first commercial system for planning and delivering intensity-modulated radiation therapy (IMRT). The original system, designed solely for use in the brain, incorporated a rigid skull-based fixation system to prevent patient motion between the delivery of each slice of radiation. However, some users eschewed the fixation system and applied the technique to tumors in many different parts of the body. === Mobile tomotherapy === Due to their internal shielding and small footprint, TomoTherapy Hi-Art and TomoTherapy TomoHD treatment machines were the only high-energy radiotherapy treatment machines used in relocatable radiotherapy treatment suites. Two different types of suites were available: TomoMobile, developed by TomoTherapy Inc., which was a moveable truck; and Pioneer, developed by UK-based Oncology Systems Limited. The latter was developed to meet UK and European transport law requirements and was a contained unit placed on a concrete pad, able to begin delivering radiotherapy treatments in less than five weeks. == See also == Radiation therapy Radiosurgery == References == == External links ==
Wikipedia/TomoTherapy
In medicine, proton therapy, or proton radiotherapy, is a type of particle therapy that uses a beam of protons to irradiate diseased tissue, most often to treat cancer. The chief advantage of proton therapy over other types of external beam radiotherapy is that the dose of protons is deposited over a narrow range of depth, resulting in minimal entry, exit, and scattered radiation dose to nearby healthy tissues. When evaluating whether to treat a tumor with photon or proton therapy, physicians may choose proton therapy if it is important to deliver a higher radiation dose to targeted tissues while significantly decreasing radiation to nearby organs at risk. The American Society for Radiation Oncology Model Policy for Proton Beam Therapy says proton therapy is considered reasonable if sparing the surrounding normal tissue "cannot be adequately achieved with photon-based radiotherapy" and can benefit the patient. Like photon radiation therapy, proton therapy is often used in conjunction with surgery and/or chemotherapy to most effectively treat cancer. == Description == Proton therapy is a type of external beam radiotherapy that uses ionizing radiation. In proton therapy, medical personnel use a particle accelerator to target a tumor with a beam of protons. These charged particles damage the DNA of cells, ultimately killing them by stopping their reproduction and thus eliminating the tumor. Cancerous cells are particularly vulnerable to attacks on DNA because of their high rate of division and their limited ability to repair DNA damage. Some cancers with specific defects in DNA repair may be more sensitive to proton radiation. Proton therapy lets physicians deliver a highly conformal beam, i.e. one that delivers radiation conforming to the shape and depth of the tumor while sparing much of the surrounding normal tissue. For example, when comparing proton therapy to the most advanced types of photon therapy—intensity-modulated radiotherapy (IMRT) and volumetric modulated arc therapy (VMAT)—proton therapy can give similar or higher radiation doses to the tumor with a 50–60% lower total body radiation dose. Protons can focus energy delivery to fit the tumor shape, delivering only low-dose radiation to surrounding tissue. As a result, the patient has fewer side effects. All protons of a given energy have a certain penetration range; very few protons penetrate beyond that distance. Also, the dose delivered to tissue is maximized only over the last few millimeters of the particle's range; this maximum is called the Bragg peak, and in treatment the peaks of beams at several energies are combined into a spread-out Bragg peak, often called the SOBP (see visual). To treat tumors at greater depth, one needs a beam with higher energy, typically given in MeV (megaelectronvolts). Accelerators used for proton therapy typically produce protons with energies of 70 to 250 MeV. Adjusting the proton energy during the treatment maximizes the cell damage within the tumor. Tissue closer to the surface of the body than the tumor receives less radiation, and thus less damage. Tissues deeper in the body receive very few protons, so the dose becomes immeasurably small. In most treatments, protons of different energies, with Bragg peaks at different depths, are applied to treat the entire tumor. These Bragg peaks are shown as thin blue lines in the figure in this section. While tissues behind (or deeper than) the tumor receive almost no radiation, the tissues in front of (shallower than) the tumor receive a radiation dosage based on the SOBP. === Equipment === Most installed proton therapy systems use isochronous cyclotrons.
Cyclotrons are considered simple to operate, reliable, and can be made compact, especially with the use of superconducting magnets. Synchrotrons can also be used, with the advantage of more easily producing beams at varying energies. Linear accelerators, as used for photon radiation therapy, are becoming commercially available as limitations of size and cost are resolved. Modern proton systems incorporate high-quality imaging for daily assessment of tumor contours, treatment planning software illustrating 3D dose distributions, and various system configurations, e.g. multiple treatment rooms connected to one accelerator. Partly because of these advances in technology, and partly because of the continually increasing amount of proton clinical data, the number of hospitals offering proton therapy continues to grow. === FLASH therapy === FLASH radiotherapy is a technique under development for photon and proton treatments, using very high dose rates (necessitating large beam currents). If applied clinically, it could shorten treatment time to just one to three 1-second sessions while further reducing side effects. == History == The first suggestion that energetic protons could be an effective treatment was made by Robert R. Wilson in a paper published in 1946, while he was involved in the design of the Harvard Cyclotron Laboratory (HCL). The first treatments were performed with particle accelerators built for physics research, notably at the Berkeley Radiation Laboratory in 1954 and at Uppsala, Sweden, in 1957. In 1961, a collaboration began between HCL and Massachusetts General Hospital (MGH) to pursue proton therapy. Over the next 41 years, this program refined and expanded these techniques while treating 9,116 patients, before the cyclotron was shut down in 2002. In the USSR, a therapeutic proton beam with energies up to 200 MeV was obtained at the synchrocyclotron of JINR in Dubna in 1967. The ITEP center in Moscow, Russia, which began treating patients in 1969, is the oldest proton center still in operation. The Paul Scherrer Institute in Switzerland was the world's first proton center to treat eye tumors, beginning in 1984. In addition, it invented pencil beam scanning in 1996, which became the state-of-the-art form of proton therapy. The world's first hospital-based proton therapy center was a low-energy cyclotron centre for eye tumors at Clatterbridge Centre for Oncology in the UK, opened in 1989, followed in 1990 by the Loma Linda University Medical Center (LLUMC) in Loma Linda, California. Later, the Northeast Proton Therapy Center at Massachusetts General Hospital was brought online, and the HCL treatment program was transferred to it in 2001 and 2002. At the beginning of 2023, there were 41 proton therapy centers in the United States, and a total of 89 worldwide. As of 2020, six manufacturers make proton therapy systems: Hitachi, Ion Beam Applications, Mevion Medical Systems, ProNova Solutions, ProTom International and Varian Medical Systems. == Types == The newest form of proton therapy, pencil beam scanning, delivers therapy by sweeping a proton beam laterally over the target so that it deposits the required dose while closely conforming to the shape of the targeted tumor. Before the use of pencil beam scanning, oncologists used a scattering method to direct a wide beam toward the tumor. === Passive scattering beam delivery === The first commercially available proton delivery systems used a scattering process, or passive scattering, to deliver the therapy.
In scattering proton therapy, the proton beam is spread out by scattering devices, and the beam is then shaped by placing items such as collimators and compensators in the path of the protons. The collimators were custom made for the patient with milling machines. Passive scattering gives a homogeneous dose along the target volume; however, it gives more limited control over dose distributions proximal to the target. Over time, many scattering therapy systems have been upgraded to deliver pencil beam scanning. Because scattering therapy was the first type of proton therapy available, most clinical data available on proton therapy—especially long-term data as of 2020—were acquired via scattering technology. === Pencil beam scanning beam delivery === A newer and more flexible delivery method is pencil beam scanning, using a beam that sweeps laterally over the target so that it delivers the needed dose while closely conforming to the tumor's shape. This conformal delivery is achieved by shaping the dose through magnetic scanning of thin beamlets of protons, without needing apertures and compensators. Multiple beams are delivered from different directions, and magnets in the treatment nozzle steer the proton beam to conform to the target volume as the dose is painted layer by layer. This type of scanning delivery provides greater flexibility and control, letting the proton dose conform more precisely to the shape of the tumor. Delivery of protons via pencil beam scanning, in use since 1996 at the Paul Scherrer Institute, allows for the most precise type of proton delivery: intensity-modulated proton therapy (IMPT). IMPT is to proton therapy what IMRT is to conventional photon therapy—treatment that conforms more closely to the tumor while avoiding surrounding structures. Virtually all new proton systems provide pencil beam scanning exclusively. A study led by Memorial Sloan Kettering Cancer Center suggests that IMPT can improve local control when compared to passive scattering for patients with nasal cavity and paranasal sinus malignancies. == Application == It was estimated that by the end of 2019, a total of ≈200,000 patients had been treated with proton therapy. Physicians use protons to treat conditions in two broad categories: Disease sites that respond well to higher doses of radiation, i.e., dose escalation. Dose escalation has sometimes shown a higher probability of "cure" (i.e. local control) than conventional radiotherapy. These include, among others, uveal melanoma (ocular tumor), skull-base and paraspinal tumors (chondrosarcoma and chordoma), and unresectable sarcoma. In all these cases proton therapy gives a significant improvement in the probability of local control over conventional radiotherapy. For eye tumors, proton therapy also has high rates of preserving the natural eye. Treatment where proton therapy's increased precision reduces unwanted side effects by lessening the dose to normal tissue. In these cases, the tumor dose is the same as in conventional therapy, so there is no expectation of an increased probability of curing the disease. Instead, the emphasis is on reducing the dose to normal tissue, thus reducing unwanted effects. Two prominent examples are pediatric neoplasms (such as medulloblastoma) and prostate cancer.
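The treatment depths achievable at the clinical beam energies quoted under Description (70 to 250 MeV) follow from the proton energy–range relationship in tissue. As a rough, illustrative sketch only, the empirical Bragg–Kleeman rule approximates a proton's range in water as R ≈ αE^p; the constants used below (α ≈ 0.0022 cm, p ≈ 1.77) are commonly quoted fitted values for water, assumed here rather than taken from any clinical planning system.

```python
def proton_range_in_water_cm(energy_mev, alpha=0.0022, p=1.77):
    """Approximate range of a proton in water via the empirical
    Bragg-Kleeman rule R = alpha * E**p (alpha in cm, E in MeV).
    The constants are commonly quoted fitted values, not clinical data."""
    return alpha * energy_mev ** p

# Clinical beam energies (70-250 MeV, as noted above) and their depths:
for e_mev in (70, 150, 250):
    print(f"{e_mev:3d} MeV -> ~{proton_range_in_water_cm(e_mev):4.1f} cm in water")
# ~4 cm at 70 MeV (sufficient for eye treatments) up to ~38 cm at 250 MeV
```

The ≈4 cm range at 70 MeV is consistent with the observation later in the article that ocular treatments require only relatively low-energy beams.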
=== Pediatric === Irreversible long-term side effects of conventional radiation therapy for pediatric cancers are well documented and include growth disorders, neurocognitive toxicity, ototoxicity with subsequent effects on learning and language development, and renal, endocrine and gonadal dysfunctions. Radiation-induced secondary malignancy is another very serious adverse effect that has been reported. As there is minimal exit dose when using proton radiation therapy, the dose to surrounding normal tissues can be significantly limited, reducing acute toxicity and, in turn, the risk of these long-term side effects. Cancers requiring craniospinal irradiation, for example, benefit from the absence of exit dose with proton therapy: dose to the heart, mediastinum, bowel, bladder and other tissues anterior to the vertebrae is eliminated, with a consequent reduction of acute thoracic, gastrointestinal and bladder side effects. === Eye tumor === Proton therapy for eye tumors is a special case, since this treatment requires only relatively low-energy protons (≈70 MeV). Owing to this low energy, some particle therapy centers only treat eye tumors. Proton (or, more generally, hadron) therapy of tissue close to the eye requires sophisticated methods for assessing the alignment of the eye, which can differ significantly from the patient position verification approaches used elsewhere in image-guided particle therapy. Position verification and correction must ensure that the radiation spares sensitive tissue like the optic nerve to preserve the patient's vision. For ocular tumors, selecting the type of radiotherapy depends on the tumor location and extent, the tumor's radioresistance (which determines the dose needed to eliminate it), and the therapy's potential toxic side effects on nearby critical structures. For example, proton therapy is an option for retinoblastoma and intraocular melanoma. The advantage of a proton beam is that it has the potential to treat the tumor effectively while sparing sensitive structures of the eye. Given its effectiveness, proton therapy has been described as the "gold standard" treatment for ocular melanoma. The implementation of the momentum cooling technique in proton therapy for eye treatment can significantly enhance its effectiveness. This technique helps reduce the radiation dose administered to healthy organs while ensuring that the treatment is completed within a few seconds; consequently, patients experience improved comfort during the procedure. === Base of skull cancer === When receiving radiation for skull-base tumors, side effects of the radiation can include pituitary hormone dysfunction and visual field deficits (after radiation for pituitary tumors), as well as cranial neuropathy (nerve damage), radiation-induced osteosarcoma (bone cancer), and osteoradionecrosis, which occurs when radiation causes part of the bone in the jaw or skull base to die. Proton therapy has been very effective for people with base-of-skull tumors. Unlike conventional photon radiation, protons do not penetrate beyond the tumor, so proton therapy lowers the risk of treatment-related side effects caused by irradiation of healthy tissue. Clinical studies have found proton therapy to be effective for skull-base tumors. === Head and neck tumor === Proton particles deposit no exit dose, so proton therapy can spare normal tissues far from the tumor. This is particularly useful for head and neck tumors because of the anatomic constraints found in nearly all cancers in this region.
The dosimetric advantage unique to proton therapy translates into toxicity reduction. For recurrent head and neck cancer requiring reirradiation, proton therapy is able to maximize a focused dose of radiation to the tumor while minimizing the dose to surrounding tissues, resulting in a minimal acute toxicity profile even in patients who have received multiple prior courses of radiotherapy. === Left-side breast cancer === When breast cancer, especially in the left breast, is treated with conventional radiation, the lung and heart, which lie near the left breast, are particularly susceptible to photon radiation damage. Such damage can eventually cause lung problems (e.g. lung cancer) or various heart problems. Depending on the location of the tumor, damage can also occur to the esophagus or to the chest wall (which can potentially lead to leukemia). One recent study showed that proton therapy has low toxicity to nearby healthy tissues and similar rates of disease control compared with conventional radiation. Other researchers found that proton pencil beam scanning techniques can reduce both the mean heart dose and the internal mammary node dose to essentially zero. Small studies have found that, compared to conventional photon radiation, proton therapy delivers a minimal toxic dose to healthy tissues, and specifically a decreased dose to the heart and lung. Large-scale trials are underway to examine other potential benefits of proton therapy for treating breast cancer. === Lymphoma === Though chemotherapy is the main treatment for lymphoma, consolidative radiation is often used in Hodgkin lymphoma and aggressive non-Hodgkin lymphoma, while definitive treatment with radiation alone is used in a small fraction of lymphoma patients. Unfortunately, treatment-related toxicities caused by chemotherapy agents and radiation exposure to healthy tissues are major concerns for lymphoma survivors. Advanced radiation therapy technologies such as proton therapy may offer significant and clinically relevant advantages, such as sparing important organs at risk and decreasing the risk of late normal tissue damage, while still achieving the primary goal of disease control. This is especially important for lymphoma patients who are being treated with curative intent and have a long life expectancy following therapy. === Prostate cancer === In prostate cancer cases, the issue is less clear. Some published studies found a reduction in long-term rectal and genito-urinary damage when treating with protons rather than photons (meaning X-ray or gamma ray therapy). Others showed a small difference, limited to cases where the prostate is particularly close to certain anatomical structures. The relatively small improvement found may be the result of inconsistent patient set-up and internal organ movement during treatment, which offsets most of the advantage of increased precision. One source suggests that dose errors of around 20% can result from motion errors of just 2.5 mm (0.098 in), and another that prostate motion is between 5 and 10 mm (0.20 and 0.39 in). The number of cases of prostate cancer diagnosed each year far exceeds those of the other diseases referred to above, and this has led some, but not all, facilities to devote most of their treatment slots to prostate treatments. For example, two hospital facilities devote ≈65% and 50% of their proton treatment capacity to prostate cancer, while a third devotes only 7.1%.
Worldwide numbers are hard to compile, but one example says that in 2003 ≈26% of proton therapy treatments worldwide were for prostate cancer. === Gastrointestinal malignancy === A growing body of data shows that proton therapy has great potential to increase therapeutic tolerance for patients with GI malignancy. The possibility of decreasing the radiation dose to organs at risk may also help facilitate chemotherapy dose escalation or allow new chemotherapy combinations. Proton therapy may play a decisive role in ongoing intensified combined-modality treatments for GI cancers. Reported benefits include the treatment of hepatocellular carcinoma, pancreatic cancer and esophageal cancer. === Hepatocellular carcinoma === Post-treatment liver decompensation, and subsequent liver failure, are risks of radiotherapy for hepatocellular carcinoma, the most common type of primary liver cancer. Research shows that proton therapy gives favorable results related to local tumor control, progression-free survival, and overall survival. Other studies, which compare proton therapy with conventional photon therapy, show that proton therapy gives improved survival and/or fewer side effects; hence proton therapy could significantly improve clinical outcomes for some patients with liver cancer. === Reirradiation for recurrent cancer === For patients who experience local or regional recurrences after their initial radiation therapy, physicians are limited in their treatment options due to their reluctance to give additional photon radiation therapy to tissues that have already been irradiated. Re-irradiation is a potentially curative treatment option for patients with locally recurrent head and neck cancer. In particular, pencil beam scanning may be ideally suited for reirradiation. Research shows the feasibility of using proton therapy with acceptable side effects, even in patients who have had multiple prior courses of photon radiation. == Comparison with other treatments == A large study on the comparative effectiveness of proton therapy was published by teams of the University of Pennsylvania and Washington University in St. Louis in JAMA Oncology, assessing whether proton therapy in the setting of concurrent chemoradiotherapy is associated with fewer 90-day unplanned hospitalizations, and with overall survival, compared with concurrent photon chemoradiotherapy. The study included 1,483 adult patients with nonmetastatic, locally advanced cancer treated with concurrent chemoradiotherapy with curative intent and concluded that "proton chemoradiotherapy was associated with significantly reduced acute adverse events that caused unplanned hospitalizations, with similar disease-free and overall survival". A significant number of randomized controlled trials are recruiting, but only a limited number had been completed as of August 2020. A phase III randomized controlled trial of proton beam therapy versus radiofrequency ablation (RFA) for recurrent hepatocellular carcinoma, organized by the National Cancer Center in Korea, showed better 2-year local progression-free survival for the proton arm and concluded that proton beam therapy (PBT) is "not inferior to RFA in terms of local progression-free survival and safety, denoting that either RFA or PBT can be applied to recurrent small HCC patients".
A phase IIB randomized controlled trial of proton beam therapy versus IMRT for locally advanced esophageal cancer, organized by the University of Texas MD Anderson Cancer Center, concluded that proton beam therapy reduced the risk and severity of adverse events compared with IMRT while maintaining similar progression-free survival. Another phase II randomized controlled trial, comparing photons with protons for glioblastoma, concluded that patients at risk of severe lymphopenia could benefit from proton therapy. A team from Stanford University assessed the risk of secondary cancer after primary cancer treatment with external beam radiation, using data from the National Cancer Database for 9 tumor types: head and neck, gastrointestinal, gynecologic, lymphoma, lung, prostate, breast, bone/soft tissue, and brain/central nervous system. The study included a total of 450,373 patients and concluded that proton therapy was associated with a lower risk of second cancer. The issue of when, whether, and how best to apply this technology is still under discussion by physicians and researchers. One recently introduced method, 'model-based selection', uses comparative treatment plans for IMRT and IMPT in combination with normal tissue complication probability (NTCP) models to identify patients who may benefit most from proton therapy. Clinical trials are underway to examine the comparative efficacy of proton therapy (vs photon radiation) for the following: Pediatric cancers—by St. Jude Children's Research Hospital, Samsung Medical Center Base of skull cancer—by Heidelberg University Head and neck cancer—by MD Anderson, Memorial Sloan Kettering and other centers Brain and spinal cord cancer—by Massachusetts General Hospital, Uppsala University and other centers, NRG Oncology Hepatocellular carcinoma (liver)—by NRG Oncology, Chang Gung Memorial Hospital, Loma Linda University Lung cancer—by Radiation Therapy Oncology Group (RTOG), Proton Collaborative Group (PCG), Mayo Clinic Esophageal cancer—by NRG Oncology, Abramson Cancer Center, University of Pennsylvania Breast cancer—by University of Pennsylvania, Proton Collaborative Group (PCG) Pancreatic cancer—by University of Maryland, Proton Collaborative Group (PCG) === X-ray radiotherapy === The figure at the right of the page shows how beams of X-rays (IMRT; left frame) and beams of protons (right frame), of different energies, penetrate human tissue. A tumor with a sizable thickness is covered by the spread-out Bragg peak (SOBP), shown as the red-lined distribution in the figure. The SOBP is an overlap of several pristine Bragg peaks (blue lines) at staggered depths. Megavoltage X-ray therapy has greater "skin-sparing potential" than proton therapy: the X-ray dose at the skin, and at very small depths, is lower than for proton therapy. One study estimates that passively scattered proton fields have a slightly higher entrance dose at the skin (≈75%) compared to therapeutic megavoltage (MV) photon beams (≈60%). X-ray radiation dose falls off gradually, needlessly harming tissue deeper in the body and damaging the skin and surface tissue opposite the beam entrance. The differences between the two methods depend on the width of the SOBP, the depth of the tumor, and the number of beams that treat the tumor. The X-ray advantage of less harm to skin at the entrance is partially counteracted by harm to skin at the exit point. Since X-ray treatments are usually done with multiple exposures from opposite sides, each section of skin is exposed to both entering and exiting X-rays.
In proton therapy, skin exposure at the entrance point is higher, but tissues on the opposite side of the body from the tumor receive no radiation. Thus, X-ray therapy causes slightly less damage to the skin and surface tissues, and proton therapy causes less damage to deeper tissues in front of and beyond the target. An important consideration in comparing these treatments is whether the equipment delivers protons via the scattering method (historically, the most common) or a spot scanning method. Spot scanning can adjust the width of the SOBP on a spot-by-spot basis, which reduces the volume of normal (healthy) tissue inside the high-dose region. Also, spot scanning allows for intensity-modulated proton therapy (IMPT), which determines individual spot intensities using an optimization algorithm that lets the user balance the competing goals of irradiating tumors while sparing normal tissue. Spot scanning availability depends on the machine and the institution. Spot scanning is more commonly known as pencil beam scanning and is available on systems from IBA, Hitachi, Mevion (branded HYPERSCAN, which received US FDA approval in 2017) and Varian. === Surgery === Physicians base the decision to use surgery or proton therapy (or any radiation therapy) on the tumor type, stage, and location. Sometimes surgery is superior (as for cutaneous melanoma), sometimes radiation is superior (as for skull-base chondrosarcoma), and sometimes they are comparable (for example, in prostate cancer). Sometimes they are used together (e.g., for rectal cancer or early-stage breast cancer). The benefit of external beam proton radiation lies in its dosimetric difference from external beam X-ray radiation and brachytherapy in cases where radiation therapy is already indicated, rather than in direct competition with surgery. In prostate cancer, the most common indication for proton beam therapy, no clinical study directly comparing proton therapy to surgery, brachytherapy, or other treatments has shown any clinical benefit for proton beam therapy. Indeed, the largest study to date showed that IMRT, compared with proton therapy, was associated with less gastrointestinal morbidity. == Side effects and risks == Proton therapy is a type of external beam radiotherapy, and shares the risks and side effects of other forms of radiation therapy. The dose outside of the treatment region can be significantly less for deep-tissue tumors than in X-ray therapy, because proton therapy takes full advantage of the Bragg peak. Proton therapy has been in use for over 40 years and is a mature technology. As with all medical knowledge, understanding of the interaction of radiation with tumor and normal tissue is still imperfect. == Costs == Historically, proton therapy has been expensive. An analysis published in 2003 found that the cost of proton therapy is ≈2.4 times that of X-ray therapies. Newer and less expensive proton treatment centers, of which dozens have now been built, are driving costs down while offering more accurate three-dimensional targeting. Higher proton dosage delivered over fewer treatment sessions (one-third fewer, or even fewer) is also driving costs down. Thus the cost is expected to fall as better proton technology becomes more widely available. An analysis published in 2005 determined that the cost of proton therapy is not unrealistic and should not be the reason for denying patients access to the technology. In some clinical situations, proton beam therapy is clearly superior to the alternatives.
A study in 2007 expressed concerns about the effectiveness of proton therapy for prostate cancer, but with the advent of new developments in the technology, such as improved scanning techniques and more precise dose delivery ('pencil beam scanning'), this situation may change considerably. Amitabh Chandra, a health economist at Harvard University, said, "Proton-beam therapy is like the Death Star of American medical technology... It's a metaphor for all the problems we have in American medicine." Proton therapy is cost-effective for some types of cancer, but not all. In particular, some other treatments offer better overall value for the treatment of prostate cancer. As of 2018, the cost of a single-room particle therapy system was US$40 million, with multi-room systems costing up to US$200 million. == Treatment centers == As of August 2020, there were over 89 particle therapy facilities worldwide, with at least 41 others under construction, and 34 operational proton therapy centers in the United States. As of the end of 2015, more than 154,203 patients had been treated worldwide. One hindrance to universal use of protons in cancer treatment is the size and cost of the cyclotron or synchrotron equipment necessary. Several industrial teams are working on the development of comparatively small accelerator systems to deliver proton therapy to patients. Among the technologies being investigated are superconducting synchrocyclotrons (also known as FM cyclotrons), ultra-compact synchrotrons, dielectric wall accelerators, and linear particle accelerators. == See also == Particle therapy Charged particle therapy Hadron Microbeam Fast neutron therapy Boron neutron capture therapy Linear energy transfer Electromagnetic radiation and health Dosimetry Ionizing radiation List of oncology-related terms == References == == Further reading == Greco C.; Wolden S. (Apr 2007). "Current status of radiotherapy with proton and light ion beams". Cancer. 109 (7): 1227–1238. doi:10.1002/cncr.22542. PMID 17326046. S2CID 36256866. Koehler, A.M. (1971). "Use of Protons for Radiotherapy". Proceedings of the Symposium on Pion and Proton Radiography, Fermi National Accelerator Laboratory, Batavia, IL. pp. 63–68. Koehler, A. M.; Preston, W. M. (1972). "Protons in Radiation Therapy". Radiology. 104 (1). Radiological Society of North America (RSNA): 191–195. doi:10.1148/104.1.191. ISSN 0033-8419. PMID 4624458. Kjelberg, R.N. (1977). "Bragg Peak Proton Radiosurgery for Arteriovenous Malformation of the Brain". First International Seminar on the Use of Proton Beams in Radiation Therapy, Moscow. Austin-Seymour, Mary; Munzenrider, John; et al. (1990). "Fractionated Proton Radiation Therapy of Cranial and Intracranial Tumors". American Journal of Clinical Oncology. 13 (4). Ovid Technologies (Wolters Kluwer Health): 327–330. doi:10.1097/00000421-199008000-00013. ISSN 0277-3732. PMID 2165739. S2CID 26465153. Hartford, Zietman, et al. (1999). "Proton Radiotherapy". In A. D'Amico, G.E. Hanks (eds.). Radiotherapeutic Management of Carcinoma of the Prostate. London, UK: Arnold Publishers. pp. 61–72.
== External links == The Intrepid Proton-Man Archived 2021-08-05 at the Wayback Machine, educational comic books by Steve Englehart and Michael Jaszewski for pediatric patients 2019 BBC Horizon documentary 2019 Jove video by the University of Maryland School of Medicine explaining the treatment process: Proton Therapy Delivery and Its Clinical Application in Select Solid Tumor Malignancies 2019 The NHS Proton Beam Therapy Programme Proton Therapy Collaborative Group PTCOG Alliance for Proton Therapy Archived 2019-07-15 at the Wayback Machine CARES Cancer Network Archived 2020-10-27 at the Wayback Machine National Association for Proton Therapy American Society for Radiation Oncology Model Policy – Proton Beam Therapy Archived 2022-06-11 at the Wayback Machine Proton therapy – MedlinePlus Medical Encyclopedia Proton Therapy What is Proton Therapy Archived 2020-10-27 at the Wayback Machine
Wikipedia/Proton_therapy
Electrical impedance tomography (EIT) is a noninvasive type of medical imaging in which the electrical conductivity, permittivity, and impedance of a part of the body are inferred from surface electrode measurements and used to form a tomographic image of that part. Electrical conductivity varies considerably among the various types of biological tissue and with the movement of fluids and gases within tissues. The majority of EIT systems apply small alternating currents at a single frequency; however, some EIT systems use multiple frequencies to better differentiate between normal and suspected abnormal tissue within the same organ. Typically, conducting surface electrodes are attached to the skin around the body part being examined. Small alternating currents are applied to some or all of the electrodes, and the resulting electrical potentials are recorded from the other electrodes. This process is then repeated for numerous different electrode configurations, and the measurements are finally turned into a two-dimensional tomogram by the image reconstruction algorithm used. Since free ion content determines tissue and fluid conductivity, muscle and blood conduct the applied currents better than fat, bone or lung tissue. This property can be used to construct images. However, in contrast to the straight-line X-ray paths used in computed tomography, electric currents travel three-dimensionally along all paths simultaneously, weighted by conductivity (thus primarily, but not exclusively, along the paths of highest conductivity). Image construction can be difficult because there is usually more than one solution for a three-dimensional area projected onto a two-dimensional plane. Mathematically, the problem of recovering conductivity from surface measurements of current and potential is a non-linear inverse problem and is severely ill-posed. The mathematical formulation of the problem was posed by Alberto Calderón, and in the mathematical literature of inverse problems it is often referred to as "Calderón's inverse problem" or the "Calderón problem". There is extensive mathematical research on the uniqueness of solutions and numerical algorithms for this problem. Compared to the conductivities of most other soft tissues within the human thorax, lung tissue conductivity is approximately five-fold lower, resulting in high absolute contrast. This characteristic may partially explain the amount of research conducted in EIT lung imaging. Furthermore, lung conductivity fluctuates during the breathing cycle, which accounts for the research community's interest in using EIT as a bedside method to visualize the inhomogeneity of lung ventilation in mechanically ventilated patients. EIT measurements between two or more physiological states, e.g. between inspiration and expiration, are therefore referred to as time difference EIT (td-EIT). td-EIT has one major advantage over absolute EIT (a-EIT): inaccuracies resulting from interindividual anatomy, insufficient skin contact of surface electrodes or impedance transfer can be disregarded, because most artifacts cancel out through the simple image subtraction used in td-EIT. Further proposed EIT applications include detection/location of cancer in skin, breast, or cervix, localization of epileptic foci, imaging of brain activity, and use as a diagnostic tool for impaired gastric emptying.
Attempts to detect or localize tissue pathology within normal tissue usually rely on multifrequency EIT (MF-EIT), also termed electrical impedance spectroscopy (EIS), and are based on differences in conductance patterns at varying frequencies. == History == The invention of EIT as a medical imaging technique is usually attributed to John G. Webster and a publication in 1978, although the first practical realization of a medical EIT system was detailed in the 1984 work of David C. Barber and Brian H. Brown. Together, Brown and Barber published the first electrical impedance tomogram in 1983, visualizing the cross section of a human forearm by absolute EIT. Even though there has been substantial progress since then, most a-EIT applications are still considered experimental, though there are commercial systems available. A technique similar to EIT is used in geophysics and industrial process monitoring: electrical resistivity tomography. In analogy to EIT, surface electrodes are placed on the ground, within boreholes, or within a vessel or pipe in order to locate resistivity anomalies or monitor mixtures of conductive fluids. Setup and reconstruction techniques are comparable to EIT. In geophysics, the idea dates from the 1930s. Electrical resistivity tomography has also been proposed for mapping the electrical properties of substrates and thin films for electronic applications. == Theory == Electrical conductivity and permittivity vary among biological tissue types and depend on their free ion content. Further factors affecting conductivity include temperature and other physiological factors, such as the respiratory cycle: lung tissue becomes more conductive toward expiration, as the content of insulating air within its alveoli falls. After surface electrodes have been positioned around the body part of interest (as adhesive electrodes, an electrode belt or a conductive electrode vest), alternating currents of typically a few milliamperes at a frequency of 10–100 kHz are applied across two or more drive electrodes. The remaining electrodes are used to measure the resulting voltage. The procedure is then repeated for numerous "stimulation patterns", e.g. successive pairs of adjacent electrodes, until an entire circle has been completed; image reconstruction can then be carried out and displayed by a digital workstation that incorporates complex mathematical algorithms and a priori data. The current itself is applied using current sources, either a single current source switched between electrodes using a multiplexer, or a system of voltage-to-current converters, one for each electrode, each controlled by a digital-to-analog converter. The measurements, likewise, may be taken either by a single voltage measurement circuit multiplexed over the electrodes or by a separate circuit for each electrode. Earlier EIT systems still used an analog demodulation circuit to convert the alternating voltage to a direct-current level before running it through an analog-to-digital converter. Newer systems convert the alternating signal directly before performing digital demodulation. Depending on the indication, some EIT systems are capable of working at multiple frequencies and measuring both the magnitude and phase of the voltage. The voltages measured are passed on to a computer to perform image reconstruction and display. The choice of current (or voltage) patterns affects the signal-to-noise ratio significantly; the adjacent-pair pattern is illustrated in the sketch below.
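As a concrete sketch of the adjacent ("Sheffield-style") stimulation pattern described above, the snippet below enumerates drive and measurement pairs for a ring of electrodes, skipping any measurement that involves a current-carrying electrode (a common convention, since voltages there are dominated by contact impedance). The 16-electrode count is simply an assumption for illustration, not a fixed standard.

```python
def adjacent_patterns(n_electrodes=16):
    """Enumerate adjacent-pair drive/measure combinations on an
    electrode ring: for each adjacent drive pair (d, d+1), the voltage
    is read across every adjacent pair not touching a driven electrode."""
    patterns = []
    for d in range(n_electrodes):
        drive = (d, (d + 1) % n_electrodes)
        for m in range(n_electrodes):
            meas = (m, (m + 1) % n_electrodes)
            if set(drive) & set(meas):
                continue  # skip pairs that include a driven electrode
            patterns.append((drive, meas))
    return patterns

print(len(adjacent_patterns(16)))  # 16 drives x 13 reads = 208 raw voltages
```

By reciprocity, only about half of these raw voltages are independent, which is one way of seeing why the number of resolvable conductivity unknowns, and hence the spatial resolution, is limited by the electrode count.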
With devices capable of feeding currents from all electrodes simultaneously (such as ACT3), it is possible to adaptively determine optimal current patterns. If images are to be displayed in real time, a typical approach is the application of some form of regularized inverse of a linearization of the forward problem, or a fast version of a direct reconstruction method such as the D-bar method. Most practical systems used in the medical environment generate a 'difference image': differences in voltage between two time points are left-multiplied by the regularized inverse to calculate an approximate difference image of conductivity and permittivity. Another approach is to construct a finite element model of the body and adjust the conductivities (for example using a variant of the Levenberg–Marquardt method) to fit the measured data. This is more challenging, as it requires an accurate body shape and the exact positions of the electrodes. Much of the fundamental work underpinning electrical impedance imaging was done at Rensselaer Polytechnic Institute starting in the 1980s. See also the work published in 1992 from the Glenfield Hospital Project. Absolute EIT approaches are targeted at digital reconstruction of static images, i.e. two-dimensional representations of the anatomy within the body part of interest. As mentioned above, and unlike the straight-line X-rays of computed tomography, electric currents travel three-dimensionally along the paths of least resistivity, which means that part of the applied current leaves the transverse electrode plane (impedance transfer, e.g. due to blood flow through the transverse plane). This is one of the reasons why image reconstruction in absolute EIT is so complex, since there is usually more than one solution for image reconstruction of a three-dimensional area projected onto a two-dimensional plane. Another difficulty is that, given the number of electrodes and the measurement precision at each electrode, only objects bigger than a given size can be distinguished. This explains the necessity of highly sophisticated mathematical algorithms that address the inverse problem and its ill-posedness. Further difficulties in absolute EIT arise from inter- and intra-individual differences in electrode conductivity, with associated image distortion and artifacts. It is also important to bear in mind that the body part of interest is rarely precisely round and that inter-individual anatomy varies, e.g. in thorax shape, affecting individual electrode spacing. A priori data accounting for age-, height- and gender-typical anatomy can reduce sensitivity to artifacts and image distortion. Improving the signal-to-noise ratio, e.g. by using active surface electrodes, further reduces imaging errors. Some of the latest EIT systems with active electrodes monitor electrode performance through an extra channel and are able to compensate for insufficient skin contact by removing the affected electrodes from the measurements. Another potential solution to the problem of electrode–skin contact is the contactless EIT technique, which uses voltage excitation and capacitive coupling instead of direct contact with the skin. Capacitively coupled electrodes are more comfortable for the patient, but maintaining a constant and equal coupling capacitance for all electrodes is challenging in real measurements. Time difference EIT bypasses most of these issues by recording measurements in the same individual between two or more physiological states associated with linear conductivity changes.
One of the best examples of this approach is lung tissue during breathing: conductivity changes approximately linearly between inspiration and expiration, driven by the varying content of insulating air during each breath cycle. This permits digital subtraction of recorded measurements obtained during the breath cycle and results in functional images of lung ventilation. One major advantage is that relative changes of conductivity remain comparable between measurements even if one of the recording electrodes is less conductive than the others, thereby reducing most artifacts and image distortions. However, incorporating a priori data sets or meshes in difference EIT is still useful in order to project images onto the most likely organ morphology, which depends on weight, height, gender, and other individual factors. The open source project EIDORS provides a suite of programs (written in Matlab / GNU Octave) for data reconstruction and display under the GNU GPL license. Matlab code implementing the direct nonlinear D-bar method for nonlinear EIT reconstruction is also freely available. The Open Innovation EIT Research Initiative is aimed at advancing the development of electrical impedance tomography (EIT) in general and at ultimately accelerating its clinical adoption. A plug-and-play EIT hardware and software package was available through Swisstom until 2018. == Properties == In contrast to most other tomographic imaging techniques, EIT does not apply any kind of ionizing radiation. The currents typically applied in EIT are relatively small and certainly below the threshold at which they would cause significant nerve stimulation. The frequency of the alternating current is sufficiently high not to give rise to electrolytic effects in the body, and the Ohmic power dissipated is sufficiently small and diffused over the body to be easily handled by the body's thermoregulatory system. These properties make EIT suitable for continuous application in humans, e.g. during mechanical ventilation in an intensive care unit (ICU). Because the equipment needed to perform EIT is much smaller and less costly than that of conventional tomography, EIT also qualifies for continuous real-time visualization of lung ventilation right at the bedside. EIT's major disadvantage versus conventional tomography is its lower maximum spatial resolution (approximately 15% of the electrode array diameter in EIT, compared to 1 mm in CT and MRI). However, resolution can be improved by using 32 instead of 16 electrodes. Image quality can be further improved by constructing an EIT system with active surface electrodes, which significantly reduce signal loss, artifacts, and interference associated with cables, as well as cable length and handling. In contrast to spatial resolution, the temporal resolution of EIT (0.1 milliseconds) is much higher than in CT or MRI (0.1 seconds). == Applications == === Lung (a-EIT, td-EIT) === EIT is particularly useful for monitoring lung function, because lung tissue resistivity is five times higher than that of most other soft tissues in the thorax. This results in high absolute contrast of the lungs. In addition, lung resistivity increases and decreases several-fold between inspiration and expiration, which explains why monitoring ventilation is currently the most promising clinical application of EIT, since mechanical ventilation frequently results in ventilator-associated lung injury (VALI). The feasibility of EIT for lung imaging was first demonstrated at Rensselaer Polytechnic Institute in 1990, using the NOSER algorithm; a minimal sketch of the kind of one-step difference reconstruction discussed under Theory follows below.
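The sketch below illustrates the one-step linearized difference reconstruction described under Theory, assuming the Jacobian J of the forward model is already available. In practice J would come from a finite element model (e.g. one built with EIDORS); here a random matrix stands in for it, and the plain Tikhonov prior is only one of several regularizations in actual use.

```python
import numpy as np

def td_eit_one_step(J, v_ref, v_now, lam=0.05):
    """One-step Tikhonov-regularized time-difference reconstruction:
    solves min ||J ds - dv||^2 + lam^2 ||ds||^2 for the pixelwise
    conductivity change ds, where dv = v_now - v_ref is the change in
    boundary voltages between two physiological states."""
    dv = v_now - v_ref
    n_pixels = J.shape[1]
    lhs = J.T @ J + lam**2 * np.eye(n_pixels)
    return np.linalg.solve(lhs, J.T @ dv)

# Toy usage with a random stand-in for a real FEM Jacobian:
rng = np.random.default_rng(0)
J = rng.normal(size=(208, 576))      # 208 measurements, 24x24-pixel image
ds_true = np.zeros(576)
ds_true[200:220] = -1.0              # conductivity drop, e.g. air filling
v_ref = np.zeros(208)                # baseline frame (end of expiration)
v_now = J @ ds_true + 0.01 * rng.normal(size=208)
image = td_eit_one_step(J, v_ref, v_now).reshape(24, 24)
```

Practical systems differ mainly in how J is computed and in the choice of regularization (for instance NOSER-type priors), not in this basic linear algebra.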
Time difference EIT can resolve changes in the distribution of lung volumes between dependent and non-dependent lung regions and assist in adjusting ventilator settings to provide lung-protective ventilation to patients during critical illness or anesthesia. Most EIT studies have focused on monitoring regional lung function using the information determined by time difference EIT (td-EIT). However, absolute EIT (a-EIT) also has the potential to become a clinically useful tool for lung imaging, as this approach would make it possible to distinguish directly between lung conditions that result from regions of lower resistivity (e.g. hemothorax, pleural effusion, atelectasis, lung edema) and those of higher resistivity (e.g. pneumothorax, emphysema). The above image shows an EIT study of a 10-day-old baby breathing normally, with 16 adhesive electrodes applied to the chest. Image reconstruction from absolute impedance measurements requires consideration of the exact dimensions and shape of the body, as well as the precise electrode locations, since simplified assumptions would lead to major reconstruction artifacts. While initial studies assessing aspects of absolute EIT have been published, this area of research has not yet reached the level of maturity that would make it suitable for clinical use. In contrast, time difference EIT determines relative impedance changes that may be caused by either ventilation or changes of end-expiratory lung volume. These relative changes are referenced to a baseline level, typically defined by the intra-thoracic impedance distribution at the end of expiration. Time difference EIT images can be generated continuously and right at the bedside. These attributes make regional lung function monitoring particularly useful whenever there is a need to improve oxygenation or CO2 elimination, and when therapy changes are intended to achieve a more homogeneous gas distribution in mechanically ventilated patients. EIT lung imaging can resolve changes in the regional distribution of lung volumes between, e.g., dependent and non-dependent lung regions as ventilator parameters are changed. Thus, EIT measurements may be used to guide specific ventilator settings to maintain lung-protective ventilation for each patient. Besides the applicability of EIT in the ICU, initial studies with spontaneously breathing patients reveal further promising applications. The high temporal resolution of EIT allows regional assessment of common dynamic parameters used in pulmonary function testing (e.g. forced expiratory volume in 1 second). Additionally, specially developed image fusion methods overlaying functional EIT data with morphological patient data (e.g. CT or MRI images) may be used to gain a comprehensive insight into the pathophysiology of the lungs, which might be useful for patients with obstructive lung diseases (e.g. COPD, CF). After many years of lung EIT research with provisional EIT equipment, or series models manufactured in very small numbers, three commercial systems for lung EIT have entered the medical technology market: Timpel Medical's ENLIGHT 2100, Dräger's PulmoVista® 500 and Sentec's LuMon EIT. These models are currently being installed in intensive care units and are already used as aids in decision-making processes related to the treatment of patients with acute respiratory distress syndrome (ARDS).
The increasing availability of commercial EIT systems in ICUs will show whether the promising body of evidence obtained from animal models also applies to humans (EIT-guided lung recruitment, selection of optimum PEEP levels, pneumothorax detection, prevention of ventilator-associated lung injury (VALI), etc.). This would be highly desirable, given that recent studies suggest that 15% of mechanically ventilated patients in the ICU will develop acute lung injury (ALI), with attendant progressive lung collapse, a condition associated with a reportedly high mortality of 39%. Recently, the first prospective animal trial on EIT-guided mechanical ventilation and outcome demonstrated significant benefits with regard to respiratory mechanics, gas exchange, and histological signs of ventilator-associated lung injury. In addition to visual information (e.g. regional distribution of tidal volume), EIT measurements provide raw data sets that can be used to calculate other helpful information (e.g. changes of intrathoracic gas volume during critical illness); however, such parameters still require careful evaluation and validation. Another interesting aspect of thoracic EIT is its ability to record and filter pulsatile signals of perfusion. Although promising studies have been published on this topic, this technology is still in its early stages. A breakthrough would allow simultaneous visualization of both regional blood flow and regional ventilation, enabling clinicians to locate and react to physiological shunts caused by regional mismatches of lung ventilation and perfusion, which are associated with hypoxemia. === Breast (MF-EIT) === EIT is being investigated in the field of breast imaging as an alternative/complementary technique to mammography and magnetic resonance imaging (MRI) for breast cancer detection. The low specificity of mammography and of MRI results in a relatively high rate of false-positive screenings, with high distress for patients and costs for healthcare structures. The development of alternative imaging techniques for this indication would be desirable because of the shortcomings of the existing methods: the ionizing radiation of mammography, and the risk that gadolinium, the contrast agent used in breast MRI, induces nephrogenic systemic fibrosis (NSF) in patients with decreased renal function. The literature shows that the electrical properties of normal and malignant breast tissue differ, setting the stage for cancer detection through the determination of electrical properties. An early commercial development of non-tomographic electrical impedance imaging was the T-Scan device, which was reported to improve sensitivity and specificity when used as an adjunct to screening mammography. A report to the United States Food and Drug Administration (FDA) describes a study involving 504 subjects in which the sensitivity of mammography was 82%, that of the T-Scan alone 62%, and that of the two combined 88%. The specificity was 39% for mammography, 47% for the T-Scan alone, and 51% for the two combined. Several research groups across the world are actively developing the technique. A frequency sweep appears to be an effective technique for detecting breast cancer using EIT. United States Patent US 8,200,309 B2 combines electrical impedance scanning with magnetic resonance low-frequency current density imaging in a clinically acceptable configuration that does not require the use of gadolinium chelate enhancement in magnetic resonance mammography.
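Spectroscopic (MF-EIT/EIS) approaches such as the frequency sweep mentioned above typically compare measured spectra against a dispersion model; the single-dispersion Cole model is a common choice in bioimpedance work. A minimal sketch follows; all parameter values here are invented purely for illustration and are not measured tissue data.

```python
import numpy as np

def cole_impedance(freq_hz, r0, rinf, tau, alpha):
    """Single-dispersion Cole model, standard in bioimpedance
    spectroscopy: Z(w) = Rinf + (R0 - Rinf) / (1 + (j*w*tau)**alpha)."""
    w = 2 * np.pi * freq_hz
    return rinf + (r0 - rinf) / (1 + (1j * w * tau) ** alpha)

freqs = np.logspace(2, 6, 9)  # sweep from 100 Hz to 1 MHz
# Two illustrative parameter sets standing in for two tissue states:
z_a = cole_impedance(freqs, r0=60.0, rinf=20.0, tau=1e-5, alpha=0.8)
z_b = cole_impedance(freqs, r0=45.0, rinf=22.0, tau=3e-6, alpha=0.7)
for f, za, zb in zip(freqs, np.abs(z_a), np.abs(z_b)):
    print(f"{f:9.0f} Hz   |Z_a| = {za:5.1f}   |Z_b| = {zb:5.1f}")
```

Classifying tissue then reduces to comparing fitted Cole parameters (or simple spectral ratios) between sites, which is the kind of feature a frequency-sweep system can extract.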
=== Cervix (MF-EIT) === In addition to his pioneering role in the development of the first EIT systems in Sheffield, Professor Brian H. Brown is currently active in the research and development of an electrical impedance spectroscope based on MF-EIT. According to a study published by Brown in 2000, MF-EIT is able to predict cervical intraepithelial neoplasia (CIN) grades 2 and 3, as graded by Pap smear, with a sensitivity and specificity of 92% each. Whether cervical MF-EIT will be introduced as an adjunct or an alternative to the Pap smear has yet to be decided. Brown is the academic founder of Zilico Limited, which distributes the spectroscope (ZedScan I). The device received EC certification from its Notified Body in 2013 and is currently being introduced into a number of clinics in the UK and healthcare systems across the globe. === Brain (a-EIT, td-EIT, mf-EIT) === EIT has been suggested as a basis for brain imaging, to enable detection and monitoring of cerebral ischemia, haemorrhage, and other morphological pathologies associated with impedance changes caused by neuronal cell swelling, as in cerebral hypoxemia and hypoglycemia. While EIT's maximum spatial resolution of approximately 15% of the electrode array diameter is significantly lower than that of cerebral CT or MRI (about one millimeter), the temporal resolution of EIT is much higher than in CT or MRI (0.1 milliseconds compared to 0.1 seconds). This also makes EIT interesting for monitoring normal brain function and neuronal activity in intensive care units, or in the preoperative setting for localization of epileptic foci by telemetric recordings. Holder was able to demonstrate in 1992 that changes of intracerebral impedance can be detected noninvasively through the cranium by surface electrode measurements. Animal models of experimental stroke or seizure showed increases of impedance of up to 100% and 10%, respectively. More recent EIT systems offer the option of applying alternating currents from non-adjacent drive electrodes. So far, cerebral EIT has not yet reached the maturity to be adopted in clinical routine, but clinical studies are currently being performed on stroke and epilepsy. In this use, EIT depends upon applying low-frequency currents across the skull, below roughly 100 Hz, since during neuronal rest currents at these frequencies remain in the extracellular space and are therefore unable to enter the intracellular space within neurons. However, when a neuron generates an action potential or is about to be depolarized, the resistance of its membrane, which otherwise prevents this, is reduced roughly eighty-fold. Whenever this happens in larger numbers of neurons, resistivity changes of about 0.06–1.7% result. These changes in resistivity provide a means of detecting coherent neuronal activity across larger numbers of neurons, and thus, in principle, a basis for tomographic imaging of neural brain activity. Unfortunately, while such changes are detectable, "they are just too small to support reliable production of images." The prospects of using this technique for this indication will therefore depend upon improved signal processing or recording. A study reported in June 2011 that Functional Electrical Impedance Tomography by Evoked Response (fEITER) had been used to image changes in brain activity after the injection of an anaesthetic. One of the benefits of the technique is that the equipment required is small enough, and easy enough to transport, for it to be used for monitoring the depth of anaesthesia in operating theatres.
=== Perfusion (td-EIT) === Due to its relatively high conductivity, blood may be used for functional imaging of perfusion in tissues and organs characterized by lower conductivities, e.g. to visualize regional lung perfusion. The rationale behind this approach is that tissue impedance changes pulsatilely with the differences in blood vessel filling between systole and diastole, particularly when saline is injected as a contrast agent. === Sports medicine / home care (a-EIT, td-EIT) === Electrical impedance measurements may also be used to calculate abstract parameters, i.e. nonvisual information. Recent advances in EIT technology, together with the lower number of electrodes required for recording global rather than regional parameters in healthy individuals, make possible the non-invasive determination of parameters such as VO2 or arterial blood pressure in sports medicine or home care. == Commercial systems == === a-EIT and td-EIT === Even though medical EIT systems were not used broadly until recently, several medical equipment manufacturers have been supplying commercial versions of lung imaging systems developed by university research groups. The first such system is produced by Maltron International, who distribute the Sheffield Mark 3.5 system with 16 electrodes. Similar systems are the Goe MF II system developed by the University of Göttingen, Germany, and distributed through CareFusion (16 electrodes), as well as the Enlight 1800 developed at the University of São Paulo School of Medicine and the Polytechnic Institute of the University of São Paulo, Brazil, which is distributed by Timpel SA (Adult Belt Reusable – 32 electrodes; Pediatric Belt Reusable – 24 electrodes; Neonatal Belt Disposable – 16 electrodes). Timpel Medical has since released its second-generation ENLIGHT 2100, the only FDA-cleared electrical impedance tomography device commercially available in the United States. These systems typically comply with medical safety legislation and have been primarily employed by clinical research groups in hospitals, most of them in critical care. The first EIT device for lung function monitoring designed for everyday clinical use in the critical care environment was made available by Dräger Medical in 2011 – the PulmoVista® 500 (16-electrode system). Another commercial EIT system designed for monitoring lung function in the ICU setting is based on 32 active electrodes and was first presented at the 2013 annual ESICM congress – the LuMon EIT. Sentec's LuMon EIT was released to the market at the 2014 International Symposium on Intensive Care and Emergency Medicine (ISICEM). === Timpel Medical === New strategies in artificial ventilation began to be developed through a research project, led by the University of São Paulo pulmonologist Marcelo Amato, MD, PhD, between 2002 and 2008. These new ventilation strategies drove the need for innovation that would allow real-time visualization of ventilation and the individualization of treatment at the bedside. With this objective in mind, Timpel was created in 2004. In the same year, Dr Amato and his team published the article "Imbalances in Regional Lung Ventilation: A Validation Study on Electrical Impedance Tomography" in the American Journal of Respiratory and Critical Care Medicine. Amato's research team published more than 30 articles about EIT from 2004 to 2023, research that has contributed to the many tools available with EIT today.
Because of the broad interest in EIT and the value the technology brings to the bedside, researchers around the world have contributed to the body of evidence, with more than 250 peer-reviewed publications appearing by 2022. Timpel's name is derived from the technology (electrical impedance tomography) written in reverse: El – electrical; Imp – impedance; T – tomography. Timpel's stated aim is to make EIT a valuable adjunctive tool for lung-protective strategies, contributing to the next generation of methods for treating critically ill patients at the bedside. With Timpel's ENLIGHT electrical impedance tomography device, each patient's care can be individualized based on their lung disease; ENLIGHT gives clinicians visibility of the ventilation profile in real time, at the bedside, without the added risk of transportation. === MF-EIT === Multifrequency EIT (MF-EIT) or electrical impedance spectroscopy (EIS) systems are typically designed to detect or locate abnormal tissue, e.g. precancerous lesions or cancer. Impedance Medical Technologies manufacture systems based on designs by the Research Institute of Radioengineering and Electronics of the Russian Academy of Science in Moscow that are aimed especially at breast cancer detection. Texas-based Mirabel Medical Systems, Inc. develops a similar solution for non-invasive detection of breast cancer and offers the T-Scan 2000ED. Zilico Limited distributes an electrical impedance spectroscope named ZedScan I as a medical device intended to aid in locating and diagnosing cervical intraepithelial neoplasia. The device received EC certification in 2013. === V5R === The v5r is a high-performance device, based upon a voltage-voltage measurement technique, designed to improve process control. The high frame rate of the v5r (over 650 frames per second) means that it can be used to monitor rapidly evolving processes or dynamic flow conditions. The data it provides can be used to determine the flow profile of complex multiphase processes, allowing engineers to discriminate between laminar flow, plug flow and other important flow conditions for deeper understanding and improved process control. When used for concentration measurements, the ability to measure full impedance across a wide range of phase ratios means the v5r is able to deliver considerable accuracy across a wider conductivity range compared to other devices. == See also == Electrical capacitance tomography Three-dimensional electrical capacitance tomography Respiratory monitoring EIDORS – a reconstruction toolbox for EIT Industrial Tomography Systems == References ==
Wikipedia/Electrical_impedance_tomography
Positron emission tomography (PET) is a functional imaging technique that uses radioactive substances known as radiotracers to visualize and measure changes in metabolic processes, and in other physiological activities including blood flow, regional chemical composition, and absorption. Different tracers are used for various imaging purposes, depending on the target process within the body: fluorodeoxyglucose ([18F]FDG or FDG) is commonly used to detect cancer; [18F]sodium fluoride (Na18F) is widely used for detecting bone formation; and oxygen-15 (15O) is sometimes used to measure blood flow. PET is a common imaging technique, a medical scintillography technique used in nuclear medicine. A radiopharmaceutical—a radioisotope attached to a drug—is injected into the body as a tracer. When the radiopharmaceutical undergoes beta plus decay, a positron is emitted, and when the positron interacts with an ordinary electron, the two particles annihilate and two gamma rays are emitted in opposite directions. These gamma rays are detected by two gamma cameras to form a three-dimensional image. PET scanners can incorporate a computed tomography scanner (CT) and are known as PET–CT scanners. PET scan images can be reconstructed using a CT scan performed using one scanner during the same session. One of the disadvantages of a PET scanner is its high initial cost and ongoing operating costs. == Uses == PET is both a medical and research tool used in pre-clinical and clinical settings. It is used heavily in the imaging of tumors and the search for metastases within the field of clinical oncology, and for the clinical diagnosis of certain diffuse brain diseases such as those causing various types of dementias. PET is valued as a research tool for studying the normal human brain and heart function, and for supporting drug development. PET is also used in pre-clinical studies using animals. It allows repeated investigations into the same subjects over time, in which subjects can act as their own control, and it substantially reduces the number of animals required for a given study. This approach allows research studies to reduce the sample size needed while increasing the statistical quality of the results. Physiological processes lead to anatomical changes in the body. Since PET is capable of detecting biochemical processes as well as expression of some proteins, PET can provide molecular-level information well before any anatomic changes are visible. PET scanning does this by using radiolabelled molecular probes that have different rates of uptake depending on the type and function of tissue involved. Regional tracer uptake in various anatomic structures can be visualized and relatively quantified in terms of injected positron emitter within a PET scan. PET imaging is best performed using a dedicated PET scanner. It is also possible to acquire PET images using a conventional dual-head gamma camera fitted with a coincidence detector. The quality of gamma-camera PET imaging is lower, and the scans take longer to acquire. However, this method allows a low-cost on-site solution for institutions with low PET scanning demand. An alternative would be to refer these patients to another center or to rely on a visit by a mobile scanner. Alternative methods of medical imaging include single-photon emission computed tomography (SPECT), computed tomography (CT), magnetic resonance imaging (MRI) and functional magnetic resonance imaging (fMRI), and ultrasound.
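The 511 keV energy of each annihilation photon (quoted later in this article) follows directly from the electron rest energy, E = m_e c². A minimal sketch of this check, using standard physical constants:

```python
# Each annihilation photon carries the rest energy of one electron,
# E = m_e * c^2, emitted as two back-to-back 511 keV gamma photons.
M_E = 9.1093837e-31         # electron mass, kg
C = 2.99792458e8            # speed of light, m/s
J_PER_EV = 1.602176634e-19  # joules per electronvolt

energy_kev = M_E * C**2 / J_PER_EV / 1e3
print(f"annihilation photon energy ≈ {energy_kev:.0f} keV")  # ≈ 511 keV
```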
SPECT is an imaging technique similar to PET that uses radioligands to detect molecules in the body. SPECT is less expensive than PET but provides inferior image quality. === Oncology === PET scanning with the radiotracer [18F]fluorodeoxyglucose (FDG) is widely used in clinical oncology. FDG is a glucose analog that is taken up by glucose-using cells and phosphorylated by hexokinase (whose mitochondrial form is significantly elevated in rapidly growing malignant tumors). Metabolic trapping of the radioactive glucose molecule is what makes FDG PET scanning possible. The concentrations of imaged FDG tracer indicate tissue metabolic activity, as they correspond to regional glucose uptake. FDG is used to explore the possibility of cancer spreading to other body sites (cancer metastasis). These FDG PET scans for detecting cancer metastasis are the most common in standard medical care (representing 90% of current scans). The same tracer may also be used for the diagnosis of types of dementia. Less often, other radioactive tracers, usually but not always labelled with fluorine-18 (18F), are used to image the tissue concentration of different kinds of molecules of interest inside the body. A typical dose of FDG used in an oncological scan has an effective radiation dose of 7.6 mSv. Because the hydroxy group that is replaced by fluorine-18 to generate FDG is required for the next step in glucose metabolism in all cells, no further reactions occur in FDG. Furthermore, most tissues (with the notable exception of liver and kidneys) cannot remove the phosphate added by hexokinase. This means that FDG is trapped in any cell that takes it up until it decays, since phosphorylated sugars, due to their ionic charge, cannot exit from the cell. This results in intense radiolabeling of tissues with high glucose uptake, such as the normal brain, liver, kidneys, and most cancers, which have a higher glucose uptake than most normal tissue due to the Warburg effect. As a result, FDG-PET can be used for diagnosis, staging, and monitoring treatment of cancers, particularly in Hodgkin lymphoma, non-Hodgkin lymphoma, and lung cancer. A 2020 review of research on the use of PET for Hodgkin lymphoma found evidence that negative findings in interim PET scans are linked to higher overall survival and progression-free survival; however, the certainty of the available evidence was moderate for survival, and very low for progression-free survival. A few other isotopes and radiotracers are slowly being introduced into oncology for specific purposes. For example, 11C-labelled metomidate (11C-metomidate) has been used to detect tumors of adrenocortical origin. Also, fluorodopa (FDOPA) PET/CT (also called F-18-DOPA PET/CT) has proven to be a more sensitive alternative to the iobenguane (MIBG) scan for finding and localizing pheochromocytoma. === Neuroimaging === ==== Neurology ==== PET imaging with oxygen-15 indirectly measures blood flow to the brain. In this method, an increased radioactivity signal indicates increased blood flow, which is assumed to correlate with increased brain activity. Because of its two-minute half-life, oxygen-15 must be piped directly from a medical cyclotron for such uses, which is difficult. PET imaging with FDG takes advantage of the fact that the brain is normally a rapid user of glucose. Standard FDG PET of the brain measures regional glucose use and can be used in neuropathological diagnosis.
Brain pathologies such as Alzheimer's disease (AD) greatly decrease brain metabolism of both glucose and oxygen in tandem. Therefore, FDG PET of the brain may also be used to successfully differentiate Alzheimer's disease from other dementing processes, and also to make early diagnoses of Alzheimer's disease. The advantage of FDG PET for these uses is its much wider availability. In addition, some other fluorine-18-based radioactive tracers can be used to detect amyloid-beta plaques, a potential biomarker for Alzheimer's, in the brain. These include florbetapir, flutemetamol, Pittsburgh compound B (PiB) and florbetaben. PET imaging with FDG can also be used for localization of a seizure focus, which will appear as hypometabolic during an interictal scan. Several radiotracers (i.e. radioligands) have been developed for PET that are ligands for specific neuroreceptor subtypes such as [11C]raclopride, [18F]fallypride and [18F]desmethoxyfallypride for dopamine D2/D3 receptors; [11C]McN5652 and [11C]DASB for serotonin transporters; [18F]mefway for serotonin 5HT1A receptors; and [18F]nifene for nicotinic acetylcholine receptors or enzyme substrates (e.g. 6-FDOPA for the AADC enzyme). These agents permit the visualization of neuroreceptor pools in the context of a plurality of neuropsychiatric and neurologic illnesses. PET may also be used for the diagnosis of hippocampal sclerosis, which causes epilepsy. FDG, and the less common tracers flumazenil and MPPF, have been explored for this purpose. If the sclerosis is unilateral (right hippocampus or left hippocampus), FDG uptake can be compared with the healthy side. Even if the diagnosis is difficult with MRI, it may be diagnosed with PET. The development of a number of novel probes for non-invasive, in-vivo PET imaging of neuroaggregates in the human brain has brought amyloid imaging close to clinical use. The earliest amyloid imaging probes included [18F]FDDNP, developed at the University of California, Los Angeles, and Pittsburgh compound B (PiB), developed at the University of Pittsburgh. These probes permit the visualization of amyloid plaques in the brains of Alzheimer's patients and could assist clinicians in making a positive clinical diagnosis of AD pre-mortem and aid in the development of novel anti-amyloid therapies. [11C]PMP (N-[11C]methylpiperidin-4-yl propionate) is a novel radiopharmaceutical used in PET imaging to determine the activity of the acetylcholinergic neurotransmitter system by acting as a substrate for acetylcholinesterase. Post-mortem examination of AD patients has shown decreased levels of acetylcholinesterase. [11C]PMP is used to map the acetylcholinesterase activity in the brain, which could allow for premortem diagnoses of AD and help to monitor AD treatments. Avid Radiopharmaceuticals has developed and commercialized a compound called florbetapir that uses the longer-lasting radionuclide fluorine-18 to detect amyloid plaques using PET scans. ==== Neuropsychology or cognitive neuroscience ==== PET is used to examine links between specific psychological processes or disorders and brain activity. ==== Psychiatry and neuropsychopharmacology ==== Numerous compounds that bind selectively to neuroreceptors of interest in biological psychiatry have been radiolabeled with C-11 or F-18.
Radioligands that bind to dopamine receptors (D1, D2, reuptake transporter), serotonin receptors (5HT1A, 5HT2A, reuptake transporter), opioid receptors (mu and kappa), cholinergic receptors (nicotinic and muscarinic) and other sites have been used successfully in studies with human subjects. Studies have been performed examining the state of these receptors in patients compared to healthy controls in schizophrenia, substance abuse, mood disorders and other psychiatric conditions. ==== Stereotactic surgery and radiosurgery ==== PET can also be used in image-guided surgery for the treatment of intracranial tumors, arteriovenous malformations and other surgically treatable conditions. === Cardiology === Cardiology, atherosclerosis and vascular disease study: FDG PET can help in identifying hibernating myocardium. However, the cost-effectiveness of PET for this role versus SPECT is unclear. FDG PET imaging of atherosclerosis to detect patients at risk of stroke is also feasible. Also, it can help test the efficacy of novel anti-atherosclerosis therapies. === Infectious diseases === Imaging infections with molecular imaging technologies can improve diagnosis and treatment follow-up. Clinically, PET has been widely used to image bacterial infections using FDG to identify the infection-associated inflammatory response. Three different PET contrast agents have been developed to image bacterial infections in vivo: [18F]maltose, [18F]maltohexaose, and [18F]2-fluorodeoxysorbitol (FDS). FDS has the added benefit of being able to target only Enterobacteriaceae. === Bio-distribution studies === In pre-clinical trials, a new drug can be radiolabeled and injected into animals. Such scans are referred to as biodistribution studies. The information regarding drug uptake, retention and elimination over time can be obtained quickly and cost-effectively compared to the older technique of killing and dissecting the animals. Commonly, drug occupancy at a purported site of action can be inferred indirectly by competition studies between the unlabeled drug and radiolabeled compounds that bind with specificity to the site. A single radioligand can be used this way to test many potential drug candidates for the same target. A related technique involves scanning with radioligands that compete with an endogenous (naturally occurring) substance at a given receptor to demonstrate that a drug causes the release of the natural substance. === Small animal imaging === A miniature PET scanner has been constructed that is small enough for a fully conscious rat to be scanned. This RatCAP (rat conscious animal PET) allows animals to be scanned without the confounding effects of anesthesia. PET scanners designed specifically for imaging rodents, often referred to as microPET, as well as scanners for small primates, are marketed for academic and pharmaceutical research. The scanners are based on microminiature scintillators and amplified avalanche photodiodes (APDs) through a system that uses single-chip silicon photomultipliers. In 2018, the UC Davis School of Veterinary Medicine became the first veterinary center to employ a small clinical PET scanner for clinical (rather than research) animal diagnosis. Because of cost as well as the marginal utility of detecting cancer metastases in companion animals (the primary use of this modality), veterinary PET scanning is expected to be rarely available in the immediate future. === Musculo-skeletal imaging === PET imaging has been used for imaging muscles and bones.
FDG is the most commonly used tracer for imaging muscles, and [18F]sodium fluoride (NaF-F18) is the most widely used tracer for imaging bones. ==== Muscles ==== PET is a feasible technique for studying skeletal muscles during exercise. Also, PET can provide muscle activation data about deep-lying muscles (such as the vastus intermedius and the gluteus minimus) compared to techniques like electromyography, which can be used only on superficial muscles directly under the skin. However, a disadvantage is that PET provides no timing information about muscle activation, because it has to be measured after the exercise is completed. This is due to the time it takes for FDG to accumulate in the activated muscles. ==== Bones ==== PET imaging with [18F]sodium fluoride has been in use for 60 years for measuring regional bone metabolism and blood flow using static and dynamic scans. Researchers have recently started using [18F]sodium fluoride to study bone metastasis as well. == Safety == PET scanning is non-invasive, but it does involve exposure to ionizing radiation. FDG, which is now the standard radiotracer used for PET neuroimaging and cancer patient management, has an effective radiation dose of 14 mSv. The amount of radiation in FDG is similar to the effective dose of spending one year in the American city of Denver, Colorado (12.4 mSv/year). For comparison, radiation dosage for other medical procedures ranges from 0.02 mSv for a chest X-ray to 6.5–8 mSv for a CT scan of the chest. Average civil aircrews are exposed to 3 mSv/year, and the whole-body occupational dose limit for nuclear energy workers in the US is 50 mSv/year. For scale, see Orders of magnitude (radiation). For PET–CT scanning, the radiation exposure may be substantial—around 23–26 mSv (for a 70 kg person—the dose is likely to be higher for higher body weights). == Operation == === Radionuclides and radiotracers === Radionuclides are incorporated either into compounds normally used by the body such as glucose (or glucose analogues), water, or ammonia, or into molecules that bind to receptors or other sites of drug action. Such labelled compounds are known as radiotracers. PET technology can be used to trace the biologic pathway of any compound in living humans (and many other species as well), provided it can be radiolabeled with a PET isotope. Thus, the specific processes that can be probed with PET are virtually limitless, and radiotracers for new target molecules and processes are continuing to be synthesized; as of this writing there are already dozens in clinical use and hundreds applied in research. As of 2020, by far the most commonly used radiotracer in clinical PET scanning is the carbohydrate derivative FDG. This radiotracer is used in essentially all scans for oncology and most scans in neurology, and thus makes up the large majority (>95%) of radiotracer used in PET and PET–CT scanning. Due to the short half-lives of most positron-emitting radioisotopes, the radiotracers have traditionally been produced using a cyclotron in close proximity to the PET imaging facility. The half-life of fluorine-18 is long enough that radiotracers labeled with fluorine-18 can be manufactured commercially at offsite locations and shipped to imaging centers. Recently, rubidium-82 generators have become commercially available. These contain strontium-82, which decays by electron capture to produce positron-emitting rubidium-82.
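Radiotracer logistics and dosing follow from simple exponential decay, A(t) = A₀ · 2^(−t/T_half). A minimal sketch using half-life values quoted in this article (fluorine-18 ≈ 110 min, oxygen-15 ≈ 2 min, rubidium-82 ≈ 1.27 min); the starting activity is an arbitrary example value:

```python
# Remaining activity after time t: A(t) = A0 * 2 ** (-t / t_half)
def remaining_activity(a0_mbq: float, t_min: float, t_half_min: float) -> float:
    """Exponential radioactive decay of a prepared tracer dose."""
    return a0_mbq * 2 ** (-t_min / t_half_min)

a0 = 400.0  # hypothetical dispensed activity in MBq
for name, t_half in [("F-18", 110.0), ("O-15", 2.0), ("Rb-82", 1.27)]:
    print(name, round(remaining_activity(a0, 60.0, t_half), 3))
# F-18 retains ~69% of its activity after an hour, so it survives shipping;
# O-15 and Rb-82 are essentially gone, so they must be produced on site
# (or, for Rb-82, eluted from a generator at the scanner).
```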
The use of positron-emitting isotopes of metals in PET scans has been reviewed, including elements not listed above, such as lanthanides. === Immuno-PET === The isotope 89Zr has been applied to the tracking and quantification of molecular antibodies with PET cameras (a method called "immuno-PET"). The biological half-life of antibodies is typically on the order of days; see daclizumab and erenumab by way of example. To visualize and quantify the distribution of such antibodies in the body, the PET isotope 89Zr is well suited because its physical half-life matches the typical biological half-life of antibodies (see table above). === Emission === To conduct the scan, a short-lived radioactive tracer isotope is injected into the living subject (usually into blood circulation). Each tracer atom has been chemically incorporated into a biologically active molecule. There is a waiting period while the active molecule becomes concentrated in tissues of interest. Then the subject is placed in the imaging scanner. The molecule most commonly used for this purpose is FDG, a sugar, for which the waiting period is typically an hour. During the scan, a record of tissue concentration is made as the tracer decays. As the radioisotope undergoes positron emission decay (also known as positive beta decay), it emits a positron, an antiparticle of the electron with opposite charge. The emitted positron travels in tissue for a short distance (typically less than 1 mm, but dependent on the isotope), during which time it loses kinetic energy, until it decelerates to a point where it can interact with an electron. The encounter annihilates both electron and positron, producing a pair of annihilation (gamma) photons moving in approximately opposite directions. These are detected when they reach a scintillator in the scanning device, creating a burst of light which is detected by photomultiplier tubes or silicon avalanche photodiodes (Si APD). The technique depends on simultaneous or coincident detection of the pair of photons moving in approximately opposite directions (they would be exactly opposite in their center-of-mass frame, but the scanner has no way to know this, and so has a built-in slight direction-error tolerance). Photons that do not arrive in temporal "pairs" (i.e. within a timing window of a few nanoseconds) are ignored. === Localization of the positron annihilation event === The most significant fraction of electron–positron annihilations results in two 511 keV gamma photons being emitted at almost 180 degrees to each other. Hence, it is possible to localize their source along a straight line of coincidence (also called the line of response, or LOR). In practice, the LOR has a non-zero width, as the emitted photons are not exactly 180 degrees apart. If the resolving time of the detectors is less than 500 picoseconds rather than about 10 nanoseconds, it is possible to localize the event to a segment of a chord, whose length is determined by the detector timing resolution. As the timing resolution improves, the signal-to-noise ratio (SNR) of the image will improve, requiring fewer events to achieve the same image quality. This technology is not yet common, but it is available on some new systems. === Image reconstruction === The raw data collected by a PET scanner are a list of 'coincidence events' representing near-simultaneous detection (typically, within a window of 6 to 12 nanoseconds of each other) of annihilation photons by a pair of detectors.
Each coincidence event represents a line in space connecting the two detectors along which the positron emission occurred (i.e., the line of response (LOR)). Analytical techniques, much like the reconstruction of computed tomography (CT) and single-photon emission computed tomography (SPECT) data, are commonly used, although the data set collected in PET is much poorer than that of CT, so reconstruction techniques are more difficult. Coincidence events can be grouped into projection images, called sinograms. The sinograms are sorted by the angle of each view and tilt (for 3D images). The sinogram images are analogous to the projections captured by CT scanners, and can be reconstructed in a similar way. The statistics of data thereby obtained are much worse than those obtained through transmission tomography. A normal PET data set has millions of counts for the whole acquisition, while a CT scan can reach a few billion counts. This contributes to PET images appearing "noisier" than CT. Two major sources of noise in PET are scatter (a detected pair of photons, at least one of which was deflected from its original path by interaction with matter in the field of view, leading to the pair being assigned to an incorrect LOR) and random events (photons originating from two different annihilation events but incorrectly recorded as a coincidence pair because their arrival at their respective detectors occurred within a coincidence timing window). In practice, considerable pre-processing of the data is required – correction for random coincidences, estimation and subtraction of scattered photons, detector dead-time correction (after the detection of a photon, the detector must "cool down" again) and detector-sensitivity correction (for both inherent detector sensitivity and changes in sensitivity due to angle of incidence). Filtered back projection (FBP) has been frequently used to reconstruct images from the projections. This algorithm has the advantage of being simple while having a low requirement for computing resources. Disadvantages are that shot noise in the raw data is prominent in the reconstructed images, and areas of high tracer uptake tend to form streaks across the image. Also, FBP treats the data deterministically – it does not account for the inherent randomness associated with PET data, thus requiring all the pre-reconstruction corrections described above. Statistical, likelihood-based iterative expectation-maximization algorithms such as the Shepp–Vardi algorithm are now the preferred method of reconstruction. These algorithms compute an estimate of the likely distribution of annihilation events that led to the measured data, based on statistical principles. The advantage is a better noise profile and resistance to the streak artifacts common with FBP, but the disadvantage is greater computer resource requirements. A further advantage of statistical image reconstruction techniques is that the physical effects that would need to be pre-corrected for when using an analytical reconstruction algorithm, such as scattered photons, random coincidences, attenuation and detector dead-time, can be incorporated into the likelihood model being used in the reconstruction, allowing for additional noise reduction.
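To make the expectation-maximization idea concrete, here is a minimal toy sketch of an MLEM-style update in Python/NumPy. The system matrix, dimensions, and activity map are all invented for illustration; real scanners use far larger, physically derived system models with the corrections described above folded in:

```python
import numpy as np

rng = np.random.default_rng(0)
n_lors, n_voxels = 200, 50               # invented toy dimensions

# A[i, j]: probability that a decay in voxel j is detected on LOR i.
A = rng.random((n_lors, n_voxels))
x_true = rng.exponential(1.0, n_voxels)  # hypothetical activity map
y = rng.poisson(A @ x_true)              # measured counts are Poisson

x = np.ones(n_voxels)                    # flat initial estimate
sens = A.sum(axis=0)                     # per-voxel detection sensitivity
for _ in range(100):
    proj = A @ x                                # forward projection
    ratio = y / np.maximum(proj, 1e-12)         # measured vs. expected counts
    x *= (A.T @ ratio) / sens                   # multiplicative EM update
# x now approximates x_true in the maximum-likelihood sense.
```

The multiplicative update keeps the estimate non-negative and converges toward the Poisson maximum-likelihood solution, which is why it handles low-count PET data better than filtered back projection.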
Iterative reconstruction has also been shown to result in improvements in the resolution of the reconstructed images, since more sophisticated models of the scanner physics can be incorporated into the likelihood model than those used by analytical reconstruction methods, allowing for improved quantification of the radioactivity distribution. Research has shown that Bayesian methods that involve a Poisson likelihood function and an appropriate prior probability (e.g., a smoothing prior leading to total variation regularization or a Laplacian distribution leading to ℓ 1 {\displaystyle \ell _{1}} -based regularization in a wavelet or other domain), such as via Ulf Grenander's Sieve estimator or via Bayes penalty methods or via I.J. Good's roughness method, may yield superior performance to expectation-maximization-based methods which involve a Poisson likelihood function but do not involve such a prior. Attenuation correction: Quantitative PET imaging requires attenuation correction. In stand-alone PET systems, attenuation correction is based on a transmission scan using a rotating 68Ge rod source. Transmission scans directly measure attenuation values at 511 keV. Attenuation occurs when photons emitted by the radiotracer inside the body are absorbed by intervening tissue between the detector and the emission of the photon. As different LORs must traverse different thicknesses of tissue, the photons are attenuated differentially. The result is that structures deep in the body are reconstructed as having falsely low tracer uptake. Contemporary scanners can estimate attenuation using integrated x-ray CT equipment, in place of earlier equipment that offered a crude form of CT using a gamma ray (positron-emitting) source and the PET detectors. While attenuation-corrected images are generally more faithful representations, the correction process is itself susceptible to significant artifacts. As a result, both corrected and uncorrected images are always reconstructed and read together. 2D/3D reconstruction: Early PET scanners had only a single ring of detectors, hence the acquisition of data and subsequent reconstruction was restricted to a single transverse plane. More modern scanners now include multiple rings, essentially forming a cylinder of detectors. There are two approaches to reconstructing data from such a scanner: treat each ring as a separate entity, so that only coincidences within a ring are detected, and then reconstruct the image from each ring individually (2D reconstruction); or allow coincidences to be detected between rings as well as within rings, and then reconstruct the entire volume together (3D). 3D techniques have better sensitivity (because more coincidences are detected and used) and hence less noise, but are more sensitive to the effects of scatter and random coincidences, as well as requiring greater computer resources. The advent of sub-nanosecond timing resolution detectors affords better random coincidence rejection, thus favoring 3D image reconstruction. Time-of-flight (TOF) PET: For modern systems with a higher time resolution (roughly 3 nanoseconds), a technique called "time-of-flight" is used to improve the overall performance. Time-of-flight PET makes use of very fast gamma-ray detectors and a data processing system which can more precisely decide the difference in time between the detection of the two photons. It is still impossible to localize the point of origin of the annihilation event exactly (currently only to within about 10 cm), so image reconstruction is still needed; the timing information, however, constrains where along each LOR the event occurred.
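The localization uncertainty along the LOR follows directly from the timing resolution: Δx = c·Δt/2 (the factor of two because a timing difference corresponds to half the difference in the two photons' path lengths). A small worked example with assumed detector timing resolutions:

```python
C_MM_PER_PS = 0.29979  # speed of light in mm per picosecond

def lor_segment_mm(timing_resolution_ps: float) -> float:
    """Positional uncertainty along the LOR: dx = c * dt / 2."""
    return C_MM_PER_PS * timing_resolution_ps / 2

for dt_ps in (500, 300, 100):          # assumed timing resolutions
    print(f"{dt_ps} ps  ->  ~{lor_segment_mm(dt_ps):.0f} mm segment")
# 500 ps -> ~75 mm, consistent with the ~10 cm localization quoted above;
# 100 ps -> ~15 mm, which is why faster detectors sharpen TOF PET.
```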
The TOF technique gives a remarkable improvement in image quality, especially signal-to-noise ratio. === Combination of PET with CT or MRI === PET scans are increasingly read alongside CT or MRI scans, with the combination (co-registration) giving both anatomic and metabolic information (i.e., what the structure is, and what it is doing biochemically). Because PET imaging is most useful in combination with anatomical imaging, such as CT, modern PET scanners are now available with integrated high-end multi-detector-row CT scanners (PET–CT). Because the two scans can be performed in immediate sequence during the same session, with the patient not changing position between the two types of scans, the two sets of images are more precisely registered, so that areas of abnormality on the PET imaging can be more perfectly correlated with anatomy on the CT images. This is very useful in showing detailed views of moving organs or structures with higher anatomical variation, which is more common outside the brain. At the Jülich Institute of Neurosciences and Biophysics, the world's largest PET–MRI device began operation in April 2009: a 9.4-tesla magnetic resonance tomograph (MRT) combined with a PET scanner. Presently, only the head and brain can be imaged at these high magnetic field strengths. For brain imaging, registration of CT, MRI and PET scans may be accomplished without the need for an integrated PET–CT or PET–MRI scanner by using a device known as the N-localizer. === Limitations === The minimization of radiation dose to the subject is an attractive feature of the use of short-lived radionuclides. Besides its established role as a diagnostic technique, PET has an expanding role as a method to assess the response to therapy, in particular, cancer therapy, where the risk to the patient from lack of knowledge about disease progress is much greater than the risk from the test radiation. Since the tracers are radioactive, PET is generally avoided during pregnancy because of the risks that radiation poses to the fetus. Limitations to the widespread use of PET arise from the high costs of cyclotrons needed to produce the short-lived radionuclides for PET scanning and the need for specially adapted on-site chemical synthesis apparatus to produce the radiopharmaceuticals after radioisotope preparation. Organic radiotracer molecules that will contain a positron-emitting radioisotope cannot be synthesized first and then the radioisotope prepared within them, because bombardment with a cyclotron to prepare the radioisotope destroys any organic carrier for it. Instead, the isotope must be prepared first, then the chemistry to prepare any organic radiotracer (such as FDG) accomplished very quickly, in the short time before the isotope decays. Few hospitals and universities are capable of maintaining such systems, and most clinical PET is supported by third-party suppliers of radiotracers that can supply many sites simultaneously. This limitation restricts clinical PET primarily to the use of tracers labelled with fluorine-18, which has a half-life of 110 minutes and can be transported a reasonable distance before use, or to rubidium-82 (used as rubidium-82 chloride) with a half-life of 1.27 minutes, which is created in a portable generator and is used for myocardial perfusion studies. In recent years a few on-site cyclotrons with integrated shielding and "hot labs" (automated chemistry labs that are able to work with radioisotopes) have begun to accompany PET units to remote hospitals.
The use of small on-site cyclotrons is expected to expand in the future as cyclotrons shrink in response to the high cost of isotope transportation to remote PET machines. In recent years the shortage of PET scans has been alleviated in the US, as the rollout of radiopharmacies to supply radioisotopes has grown 30 percent per year. Because the half-life of fluorine-18 is about two hours, the prepared dose of a radiopharmaceutical bearing this radionuclide will undergo multiple half-lives of decay during the working day. This necessitates frequent recalibration of the remaining dose (determination of activity per unit volume) and careful planning with respect to patient scheduling. == History == The concept of emission and transmission tomography was introduced by David E. Kuhl, Luke Chapman and Roy Edwards in the late 1950s. Their work would lead to the design and construction of several tomographic instruments at Washington University School of Medicine and later at the University of Pennsylvania. In the 1960s and 70s, tomographic imaging instruments and techniques were further developed by Michel Ter-Pogossian, Michael E. Phelps, Edward J. Hoffman and others at Washington University School of Medicine. Work by Gordon Brownell, Charles Burnham and their associates at the Massachusetts General Hospital beginning in the 1950s contributed significantly to the development of PET technology and included the first demonstration of annihilation radiation for medical imaging. Their innovations, including the use of light pipes and volumetric analysis, have been important in the deployment of PET imaging. In 1961, James Robertson and his associates at Brookhaven National Laboratory built the first single-plane PET scanner, nicknamed the "head-shrinker". One of the factors most responsible for the acceptance of positron imaging was the development of radiopharmaceuticals. In particular, the development of labeled 2-fluorodeoxy-D-glucose (FDG—first synthesized and described by two Czech scientists from Charles University in Prague in 1968) by the Brookhaven group under the direction of Al Wolf and Joanna Fowler was a major factor in expanding the scope of PET imaging. The compound was first administered to two normal human volunteers by Abass Alavi in August 1976 at the University of Pennsylvania. Brain images obtained with an ordinary (non-PET) nuclear scanner demonstrated the concentration of FDG in that organ. Later, the substance was used in dedicated positron tomographic scanners to yield the modern procedure. The logical extension of positron instrumentation was a design using two two-dimensional arrays. PC-I was the first instrument using this concept; it was designed in 1968, completed in 1969 and reported in 1972. The first applications of PC-I in tomographic mode, as distinguished from the computed tomographic mode, were reported in 1970. It soon became clear to many of those involved in PET development that a circular or cylindrical array of detectors was the logical next step in PET instrumentation. Although many investigators took this approach, James Robertson and Zang-Hee Cho were the first to propose a ring system that has become the prototype of the current shape of PET. The first multislice cylindrical-array PET scanner was completed in 1974 at the Mallinckrodt Institute of Radiology by the group led by Ter-Pogossian. The PET–CT scanner, attributed to David Townsend and Ronald Nutt, was named by Time as the medical invention of the year in 2000.
== Cost == As of August 2008, Cancer Care Ontario reports that the current average incremental cost to perform a PET scan in the province is CA$1,000–1,200 per scan. This includes the cost of the radiopharmaceutical and a stipend for the physician reading the scan. In the United States, a PET scan is estimated to cost US$1,500–5,000. In England, the National Health Service reference cost (2015–2016) for an adult outpatient PET scan is £798. In Australia, as of July 2018, the Medicare Benefits Schedule fee for whole-body FDG PET ranges from A$953 to A$999, depending on the indication for the scan. == Quality control == The overall performance of PET systems can be evaluated by quality control tools such as the Jaszczak phantom. == See also == Diffuse optical imaging – also known as diffuse optical tomography, a medical imaging technique Hot cell – Shielded nuclear radiation containment chamber Molecular imaging – Imaging molecules within living patients Neurotherapy – Medical treatment == References == == External links ==
Wikipedia/Positron_emission_tomography
Angiography or arteriography is a medical imaging technique used to visualize the inside, or lumen, of blood vessels and organs of the body, with particular interest in the arteries, veins, and the heart chambers. Modern angiography is performed by injecting a radio-opaque contrast agent into the blood vessel and imaging using X-ray based techniques such as fluoroscopy. With time-of-flight (TOF) magnetic resonance angiography, it is no longer necessary to use a contrast agent. The word itself comes from the Greek words ἀγγεῖον angeion 'vessel' and γράφειν graphein 'to write, record'. The film or image of the blood vessels is called an angiograph, or more commonly an angiogram. Though the word can describe both an arteriogram and a venogram, in everyday usage the terms angiogram and arteriogram are often used synonymously, whereas the term venogram is used more precisely. The term angiography has been applied to radionuclide angiography and newer vascular imaging techniques such as CO2 angiography, CT angiography and MR angiography. The term isotope angiography has also been used, although this is more correctly referred to as isotope perfusion scanning. == History == The technique was first developed in 1927 by the Portuguese physician and neurologist Egas Moniz at the University of Lisbon to provide contrasted X-ray cerebral angiography in order to diagnose several kinds of nervous diseases, such as tumors, artery disease and arteriovenous malformations. Moniz is recognized as the pioneer in this field. He performed the first cerebral angiogram in Lisbon in 1927, and Reynaldo dos Santos performed the first aortogram in the same city in 1929. In fact, many current angiography techniques were developed by the Portuguese at the University of Lisbon. For example, in 1932, Lopo de Carvalho performed the first pulmonary angiogram via venous puncture of an upper limb. In 1948 the first cavogram was performed by Sousa Pereira. With the introduction of the Seldinger technique in 1953, the procedure became markedly safer, as no sharp introductory devices needed to remain inside the vascular lumen. The radial access technique for angiography can be traced back to 1989, when Lucien Campeau first cannulated the radial artery to perform a coronary angiogram. == Technique == Depending on the type of angiogram, access to the blood vessels is gained most commonly through the femoral artery, to look at the left side of the heart and at the arterial system; or the jugular or femoral vein, to look at the right side of the heart and at the venous system. Using a system of guide wires and catheters, a type of contrast agent (which shows up by absorbing the X-rays) is added to the blood to make it visible on the X-ray images. The X-ray images taken may either be still, displayed on an image intensifier or film, or motion images. For all structures except the heart, the images are usually taken using a technique called digital subtraction angiography (DSA). Images in this case are usually taken at 2–3 frames per second, which allows the interventional radiologist to evaluate the flow of the blood through a vessel or vessels. This technique "subtracts" the bones and other organs so only the vessels filled with contrast agent can be seen. The heart images are taken at 15–30 frames per second, not using a subtraction technique. Because DSA requires the patient to remain motionless, it cannot be used on the heart.
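The subtraction step in DSA is conceptually just a pixel-wise difference between a pre-contrast "mask" frame and each contrast-filled frame. A minimal sketch with synthetic images (real systems subtract logarithmically transformed detector data and must also correct for patient motion):

```python
import numpy as np

rng = np.random.default_rng(1)
anatomy = rng.uniform(0.5, 1.0, (256, 256))   # bones and soft tissue
vessel = np.zeros((256, 256))
vessel[120:136, :] = 0.4                      # synthetic contrast-filled vessel

mask_frame = anatomy                          # frame acquired before contrast
fill_frame = anatomy + vessel                 # frame acquired with contrast

dsa_frame = fill_frame - mask_frame           # static anatomy cancels out,
                                              # leaving only the vessel
```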
Both of these imaging techniques (subtracted and non-subtracted) enable the interventional radiologist or cardiologist to see stenosis (blockages or narrowings) inside the vessel which may be inhibiting the flow of blood and causing pain. After the procedure has been completed, if the femoral technique was used, the site of arterial entry is either manually compressed, stapled shut, or sutured in order to prevent access-site complications. == Uses == === Coronary angiography === One of the most common angiograms performed is to visualize the coronary arteries. A long, thin, flexible tube called a catheter is used to administer the X-ray contrast agent at the desired area to be visualized. The catheter is threaded into an artery in the forearm, and the tip is advanced through the arterial system into the major coronary artery. X-ray images of the transient radiocontrast distribution within the blood flowing inside the coronary arteries allow visualization of the size of the artery openings. The presence or absence of atherosclerosis or atheroma within the walls of the arteries cannot be clearly determined. Coronary angiography can visualize coronary artery stenosis, or narrowing of the blood vessel. The degree of stenosis can be determined by comparing the width of the lumen of narrowed segments of blood vessel with wider segments of adjacent vessel. Coronary angiography is performed under local anaesthesia; the patient is awake during the procedure. An incision is made in the groin, wrist, or arm, and a catheter is inserted into the artery through it. X-ray imaging is used to guide the catheter to the area of blockage. A dye is injected through the catheter to make the sites of blockage visible. When the catheter is in position, a thin wire with a balloon is guided to the site of blockage. The balloon is inflated to widen the artery, allowing the blood to flow freely. Often a stent is used: as the balloon is inflated, the stent expands and holds the artery open. The balloon is then deflated and removed, leaving the stent in place. After the completion of the procedure, the catheter is removed and the puncture site is sealed, for example with an Angio-Seal closure device. The procedure takes around two hours, and the patient can be discharged after an overnight stay in the hospital, depending on their condition. === Cerebral angiography === Cerebral angiography provides images of blood vessels in and around the brain to detect abnormalities, including arteriovenous malformations and aneurysms. One common cerebral angiographic procedure is neuro-vascular digital subtraction angiography. === Pulmonary angiography === Pulmonary angiography is used to visualise the anatomy of the pulmonary vessels. Pulmonary angiography may be used during embolization of pulmonary arteriovenous malformations. In addition, pulmonary angiography may be performed during treatment of pulmonary embolisms. === Peripheral angiography === Angiography is also commonly performed to identify vessel narrowing in patients with leg claudication or cramps, caused by reduced blood flow down the legs and to the feet; in patients with renal artery stenosis (which commonly causes high blood pressure); and in the head, to find and treat vascular causes of stroke. These are all done routinely through the femoral artery, but can also be performed through the brachial or axillary (arm) artery. Any stenoses found may be treated by the use of balloon angioplasty, stenting, or atherectomy. === Visceral angiography === A common indication for angiography is to evaluate and guide treatment for internal (e.g.
gastrointestinal) bleeding. Angiography may also be used during hemorrhoidal artery embolization for treatment of symptomatic hemorrhoids. === Fluorescein angiography === Fluorescein angiography is a medical procedure in which a fluorescent dye is injected into the bloodstream. The dye highlights the blood vessels in the back of the eye so they can be photographed. This test is often used to manage eye disorders. === OCT angiography === Optical coherence tomography (OCT) is a technology that uses near-infrared light to image the eye, in particular to penetrate the retina and view the micro-structure behind the retinal surface. Ocular OCT angiography (OCTA) is a method leveraging OCT technology to assess the vascular health of the retina. === Microangiography === Microangiography is commonly used to visualize tiny blood vessels. === Post mortem CT angiography === Post mortem CT angiography for medicolegal cases is a method initially developed by a virtopsy group. Originating from that project, both watery and oily solutions have been evaluated. While oily solutions require special deposition equipment to collect waste water, watery solutions seem to be regarded as less problematic. Watery solutions were also documented to enhance post mortem CT tissue differentiation, whereas oily solutions were not. Conversely, oily solutions seem to only minimally disturb ensuing toxicological analysis, while watery solutions may significantly impede it, thus requiring blood sample preservation before post mortem CT angiography. == Complications == Angiography is a relatively safe procedure, though it does have some minor and, rarely, major complications. After an angiogram, a sudden shock can cause a little pain at the puncture site, but heart attacks and strokes usually do not occur, as they may in bypass surgery. The risk of complications from angiography can be reduced with a prior CT scan, which provides clinicians with more information about the number and positioning of clots in advance. === Cerebral angiography === Major complications in cerebral angiography, such as in digital subtraction angiography or contrast MRI, are also rare but include stroke; an allergic reaction to the anaesthetic, other medication or the contrast medium; blockage or damage to one of the access veins in the leg; pseudoaneurysm at the puncture site; and thrombosis and embolism formation. Bleeding or bruising at the site where the contrast is injected is a minor complication; delayed bleeding can also occur but is rare. === Additional risks === The contrast medium that is used usually produces a sensation of warmth lasting only a few seconds, but may be felt to a greater degree in the area of injection. If the patient is allergic to the contrast medium, much more serious side effects are possible; however, with new contrast agents the risk of a severe reaction is less than one in 80,000 examinations. Additionally, damage to blood vessels can occur at the site of puncture/injection, and anywhere along the vessel during passage of the catheter. If digital subtraction angiography is used instead, the risks are considerably reduced because the catheter does not need to be passed as far into the blood vessels, lessening the chance of damage or blockage. === Infection === Antibiotic prophylaxis may be given for procedures that are not clean, or for clean procedures that result in the generation of infarcted or necrotic tissue, such as embolisation. Routine diagnostic angiography is often considered a clean procedure.
Prophylaxis is also given to prevent the spread of infection from an infected space into the bloodstream. === Thrombosis === There are six risk factors for thrombosis after arterial puncture: low blood pressure, small arterial diameter, multiple puncture attempts, long duration of cannulation, administration of vasopressor/inotropic agents, and the use of catheters with side holes. == See also == == References == == External links == RadiologyInfo for patients: Angiography procedures Cardiac Catheterization from Angioplasty.Org C-Arms types – several types of C-Arms Coronary CT angiography by Eugene Lin
Wikipedia/Angiography
Electrocardiography is the process of producing an electrocardiogram (ECG or EKG), a recording of the heart's electrical activity through repeated cardiac cycles. It is an electrogram of the heart: a graph of voltage versus time of the heart's electrical activity, obtained using electrodes placed on the skin. These electrodes detect the small electrical changes that are a consequence of cardiac muscle depolarization followed by repolarization during each cardiac cycle (heartbeat). Changes in the normal ECG pattern occur in numerous cardiac abnormalities, including: cardiac rhythm disturbances, such as atrial fibrillation and ventricular tachycardia; inadequate coronary artery blood flow, such as myocardial ischemia and myocardial infarction; and electrolyte disturbances, such as hypokalemia. Traditionally, "ECG" usually means a 12-lead ECG taken while lying down, as discussed below. However, other devices, such as Holter monitors and some models of smartwatch, can also record the electrical activity of the heart, and ECG signals can be recorded in other contexts with other devices. In a conventional 12-lead ECG, ten electrodes are placed on the patient's limbs and on the surface of the chest. The overall magnitude of the heart's electrical potential is then measured from twelve different angles ("leads") and is recorded over a period of time (usually ten seconds). In this way, the overall magnitude and direction of the heart's electrical depolarization is captured at each moment throughout the cardiac cycle. There are three main components to an ECG: the P wave, which represents depolarization of the atria; the QRS complex, which represents depolarization of the ventricles; and the T wave, which represents repolarization of the ventricles. During each heartbeat, a healthy heart has an orderly progression of depolarization that starts with pacemaker cells in the sinoatrial node, spreads throughout the atrium, and passes through the atrioventricular node down into the bundle of His and into the Purkinje fibers, spreading down and to the left throughout the ventricles. This orderly pattern of depolarization gives rise to the characteristic ECG tracing. To the trained clinician, an ECG conveys a large amount of information about the structure of the heart and the function of its electrical conduction system. Among other things, an ECG can be used to measure the rate and rhythm of heartbeats, the size and position of the heart chambers, the presence of any damage to the heart's muscle cells or conduction system, the effects of heart drugs, and the function of implanted pacemakers. == Medical uses == The overall goal of performing an ECG is to obtain information about the electrical functioning of the heart. Medical uses for this information are varied and often need to be combined with knowledge of the structure of the heart and physical examination signs to be interpreted.
Some indications for performing an ECG include the following: Chest pain or suspected myocardial infarction (heart attack), such as ST elevated myocardial infarction (STEMI) or non-ST elevated myocardial infarction (NSTEMI) Symptoms such as shortness of breath, murmurs, fainting, seizures, funny turns, or arrhythmias including new onset palpitations or monitoring of known cardiac arrhythmias Medication monitoring (e.g., drug-induced QT prolongation, digoxin toxicity) and management of overdose (e.g., tricyclic overdose) Electrolyte abnormalities, such as hyperkalemia Perioperative monitoring in which any form of anesthesia is involved (e.g., monitored anesthesia care, general anesthesia), including preoperative assessment and intraoperative and postoperative monitoring Cardiac stress testing Computed tomography angiography (CTA) and magnetic resonance angiography (MRA) of the heart (ECG is used to "gate" the scanning so that the anatomical position of the heart is steady) Clinical cardiac electrophysiology, in which a catheter is inserted through the femoral vein and can have several electrodes along its length to record the direction of electrical activity from within the heart. ECGs can be recorded as short intermittent tracings or as continuous ECG monitoring. Continuous monitoring is used for critically ill patients, patients undergoing general anesthesia, and patients who have an infrequently occurring cardiac arrhythmia that would be unlikely to be seen on a conventional ten-second ECG. Continuous monitoring can be conducted by using Holter monitors, internal and external defibrillators and pacemakers, and/or biotelemetry. === Screening === For adults, evidence does not support the use of ECGs among those without symptoms or at low risk of cardiovascular disease as an effort for prevention. This is because an ECG may falsely indicate the existence of a problem, leading to misdiagnosis, the recommendation of invasive procedures, and overtreatment. However, persons employed in certain critical occupations, such as aircraft pilots, may be required to have an ECG as part of their routine health evaluations. Hypertrophic cardiomyopathy screening may also be considered in adolescents as part of a sports physical, out of concern for sudden cardiac death. == Electrocardiograph machines == Mechanical cardiographs (apex cardiographs), developed in the 19th century, recorded heart movements by transmitting heart or chest wall motions to a spring and air chamber system. A writing lever traced these movements onto a smoked rotating cylinder, producing a cardiogram. Their accuracy was limited, as they captured all body movements, introducing errors. Modern-day electrocardiograms are recorded by machines that consist of a set of electrodes connected to a central unit. In the late 19th century, scientists discovered the heart's electrical activity, leading to the electrocardiograph's development. Willem Einthoven's 1903 string galvanometer enabled precise measurement of these signals, revolutionizing cardiography; he received the 1924 Nobel Prize for this work. Early ECG machines were constructed with analog electronics, where the signal drove a motor to print out the signal onto paper. Today, electrocardiographs use analog-to-digital converters to convert the electrical activity of the heart to a digital signal. Many ECG machines are now portable and commonly include a screen, keyboard, and printer on a small wheeled cart.
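To give a sense of the digitization step, the voltage resolution of an ECG front end follows from the amplifier range and the converter's bit depth. A minimal sketch with assumed, illustrative (not vendor-specific) values:

```python
# Input-referred resolution of a hypothetical ECG acquisition chain.
full_scale_v = 2.4      # assumed ADC input range in volts
bits = 24               # assumed ADC bit depth
gain = 6                # assumed instrumentation-amplifier gain

lsb_v = full_scale_v / 2**bits            # smallest distinguishable step
input_referred_uv = lsb_v / gain * 1e6    # referred to the electrodes
print(f"{input_referred_uv:.3f} µV per step")  # ~0.024 µV for these values
```

Resolution well below a microvolt is needed because, as noted below, the signals measured across the body are only on the order of a millivolt.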
Recent advancements in electrocardiography include the development of even smaller devices for inclusion in fitness trackers and smart watches. These smaller devices often rely on only two electrodes to record a single lead (lead I). Portable twelve-lead devices powered by batteries are also available. Recording an ECG is a safe and painless procedure. The machines are powered by mains power, but they are designed with several safety features, including an earthed (ground) lead. Other features include: Defibrillation protection: any ECG used in healthcare may be attached to a person who requires defibrillation, and the ECG needs to protect itself from this source of energy. Electrostatic discharge is similar to defibrillation discharge and requires voltage protection up to 18,000 volts. Additionally, circuitry called the right leg driver can be used to reduce common-mode interference (typically from the 50 or 60 Hz mains power). ECG voltages measured across the body are very small; this low voltage necessitates a low-noise circuit, instrumentation amplifiers, and electromagnetic shielding. Simultaneous lead recordings: earlier designs recorded each lead sequentially, but current models record multiple leads simultaneously. Most modern ECG machines include automated interpretation algorithms. This analysis calculates features such as the PR interval, QT interval, corrected QT (QTc) interval, PR axis, QRS axis, rhythm and more. The results from these automated algorithms are considered "preliminary" until verified and/or modified by expert interpretation. Despite recent advances, computer misinterpretation remains a significant problem and can result in clinical mismanagement. === Cardiac monitors === Besides the standard electrocardiograph machine, there are other devices that can record ECG signals. Portable devices have existed since the Holter monitor was introduced in 1962. Traditionally, these monitors have used electrodes with patches on the skin to record the ECG, but newer devices can stick to the chest as a single patch without the need for wires, developed by Zio (Zio XT), TZ Medical (Trident), Philips (BioTel) and BardyDx (CAM), among many others. Implantable devices such as the artificial cardiac pacemaker and implantable cardioverter-defibrillator are capable of measuring a "far field" signal between the leads in the heart and the implanted battery/generator that resembles an ECG signal (technically, the signal recorded in the heart is called an electrogram, which is interpreted differently). The development of the Holter monitor led to the creation of the implantable loop recorder, which performs the same function but as an implantable device with batteries that last for years. Additionally, various Arduino kits with ECG sensor modules are available, as are smartwatches capable of recording an ECG signal, such as the 4th-generation Apple Watch (2018), the Samsung Galaxy Watch 4 (2021), and newer devices. == Electrodes and leads == Electrodes are the actual conductive pads attached to the body surface. Any pair of electrodes can measure the electrical potential difference between the two corresponding locations of attachment; such a pair forms a lead. However, "leads" can also be formed between a physical electrode and a virtual electrode, which is derived as the average of several physical electrodes.
All clinical ECGs use Wilson's central terminal (WCT) as the virtual electrode from which the precordial leads are measured; its potential is defined as the average of the potentials of the three limb electrodes. Commonly, 10 electrodes attached to the body are used to form 12 ECG leads, with each lead measuring a specific electrical potential difference. === 12-lead ECG === Leads are broken down into three types: limb; augmented limb; and precordial or chest. The 12-lead ECG has a total of three limb leads and three augmented limb leads arranged like spokes of a wheel in the coronal plane (vertical), and six precordial or chest leads that lie on the perpendicular transverse plane (horizontal). Electrodes should be placed in standard positions, with 'left' or 'right' referring to anatomical directions, i.e., the patient's left or right. Exceptions due to emergency or other issues should be recorded to avoid erroneous analysis. The 12 standard ECG leads and electrodes are listed below. All leads are effectively bipolar, with one positive and one negative electrode; the term "unipolar" is neither accurate nor useful. Two types of electrodes in common use are a flat, paper-thin sticker and a self-adhesive circular pad. The former are typically used for a single ECG recording, while the latter are used for continuous recordings, as they adhere longer. Each electrode consists of an electrically conductive electrolyte gel and a silver/silver chloride conductor. The gel typically contains potassium chloride – sometimes silver chloride as well – to permit conduction from the skin, through the wire, to the electrocardiograph. === Virtual electrode === The virtual electrode is used to obtain useful measurements from the precordial leads, and also allows the creation of the augmented limb leads. The virtual electrode is known as Wilson's central terminal (WCT). For the precordial leads, WCT is formed by averaging the potentials of the three limb electrodes (RA, LA, and LL): {\displaystyle V_{W}={\frac {1}{3}}(RA+LA+LL)} WCT is therefore a virtual electrode that sits slightly posterior to the heart. It is a useful point from which the electrical potential of the precordial leads is measured. WCT was formerly also used as the negative reference for the unipolar limb leads (VR, VL, VF); however, use in this way produced leads with very small amplitudes. Goldberger's modification is now used to produce the augmented limb leads aVR, aVL, and aVF, which have 50% larger amplitudes than the corresponding leads referenced to the standard WCT. In Goldberger's modification, the negative reference for each augmented lead is the average of the two limb electrodes other than that lead's positive electrode: {\displaystyle G_{aVR}={\frac {LA+LL}{2}}} {\displaystyle G_{aVL}={\frac {RA+LL}{2}}} {\displaystyle G_{aVF}={\frac {RA+LA}{2}}} In a 12-lead ECG, all leads except the limb leads are conventionally described as unipolar (aVR, aVL, aVF, V1, V2, V3, V4, V5, and V6). The measurement of a voltage requires two contacts, so, electrically, the "unipolar" leads are measured from a common (negative) reference to the unipolar (positive) electrode. This averaged common reference and the abstract concept of a "unipolar" lead make the subject harder to understand, a difficulty compounded by sloppy usage of "lead" and "electrode". In fact, instead of being a constant reference, V_W has a value that fluctuates throughout the cardiac cycle. It also does not truly represent the center-of-heart potential, due to the body parts the signals travel through.
Because voltage is by definition a bipolar measurement between two points, describing an electrocardiographic lead as "unipolar" makes little sense electrically and should be avoided. The American Heart Association states "All leads are effectively 'bipolar,' and the term 'unipolar' in description of the augmented limb leads and the precordial leads lacks precision." === Limb leads === Leads I, II and III are called the limb leads. The electrodes that form these signals are located on the limbs – one on each arm and one on the left leg. The limb leads form the points of what is known as Einthoven's triangle. Lead I is the voltage between the (positive) left arm (LA) electrode and the right arm (RA) electrode: {\displaystyle I=LA-RA} Lead II is the voltage between the (positive) left leg (LL) electrode and the right arm (RA) electrode: {\displaystyle II=LL-RA} Lead III is the voltage between the (positive) left leg (LL) electrode and the left arm (LA) electrode: {\displaystyle III=LL-LA} === Augmented limb leads === Leads aVR, aVL, and aVF are the augmented limb leads. They are derived from the same three electrodes as leads I, II, and III, but they use Goldberger's central terminal as their negative pole. Goldberger's central terminal is a combination of inputs from two limb electrodes, with a different combination for each augmented lead; it is referred to immediately below as "the negative pole". Lead augmented vector right (aVR) has the positive electrode on the right arm. The negative pole is a combination of the left arm electrode and the left leg electrode: {\displaystyle aVR=RA-{\frac {1}{2}}(LA+LL)={\frac {3}{2}}(RA-V_{W})} Lead augmented vector left (aVL) has the positive electrode on the left arm. The negative pole is a combination of the right arm electrode and the left leg electrode: {\displaystyle aVL=LA-{\frac {1}{2}}(RA+LL)={\frac {3}{2}}(LA-V_{W})} Lead augmented vector foot (aVF) has the positive electrode on the left leg. The negative pole is a combination of the right arm electrode and the left arm electrode: {\displaystyle aVF=LL-{\frac {1}{2}}(RA+LA)={\frac {3}{2}}(LL-V_{W})} Together with leads I, II, and III, the augmented limb leads aVR, aVL, and aVF form the basis of the hexaxial reference system, which is used to calculate the heart's electrical axis in the frontal plane. Older versions of these leads (VR, VL, VF) used Wilson's central terminal as the negative pole, but the resulting amplitudes were too small for the thick lines of old ECG machines. The Goldberger terminals scale up (augment) the Wilson results by 50%, at the cost of sacrificing physical correctness by not having the same negative pole for all three. === Precordial leads === The precordial leads lie in the transverse (horizontal) plane, perpendicular to the other six leads. The six precordial electrodes act as the positive poles for the six corresponding precordial leads (V1, V2, V3, V4, V5, and V6); Wilson's central terminal is used as the negative pole. Recently, unipolar precordial leads have also been used to create bipolar precordial leads that explore the right-to-left axis in the horizontal plane.
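The lead arithmetic above is simple enough to sketch in code. The following Python snippet is a minimal illustration, not a clinical implementation, of how the twelve standard leads are assembled from the limb and precordial electrode potentials; the electrode values in the usage example are hypothetical instantaneous potentials in millivolts.

```python
def twelve_leads(RA, LA, LL, chest):
    """Assemble the 12 standard leads from electrode potentials.

    RA, LA, LL: limb electrode potentials (mV);
    chest: dict of precordial electrode potentials, 'V1'..'V6'.
    """
    wct = (RA + LA + LL) / 3.0                 # Wilson's central terminal

    leads = {
        # Bipolar limb leads (Einthoven)
        'I':   LA - RA,
        'II':  LL - RA,
        'III': LL - LA,
        # Augmented limb leads (Goldberger terminal as negative pole)
        'aVR': RA - (LA + LL) / 2.0,
        'aVL': LA - (RA + LL) / 2.0,
        'aVF': LL - (RA + LA) / 2.0,
    }
    # Precordial leads: each chest electrode measured against the WCT
    leads.update({name: v - wct for name, v in chest.items()})
    return leads

# Hypothetical single-instant electrode potentials (mV)
ecg = twelve_leads(RA=-0.2, LA=0.3, LL=0.8,
                   chest={'V1': 0.1, 'V2': 0.4, 'V3': 0.9,
                          'V4': 1.2, 'V5': 1.0, 'V6': 0.7})
# Consistency relations implied by the definitions:
assert abs(ecg['II'] - (ecg['I'] + ecg['III'])) < 1e-9   # Einthoven's law
assert abs(ecg['aVR'] + ecg['aVL'] + ecg['aVF']) < 1e-9  # augmented leads sum to 0
```

The two assertions encode checks that follow directly from the definitions: Einthoven's law (II = I + III) and the fact that the three augmented limb leads sum to zero.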
=== Specialized leads === Additional electrodes may rarely be placed to generate other leads for specific diagnostic purposes. Right-sided precordial leads may be used to better study pathology of the right ventricle or for dextrocardia (and are denoted with an R, e.g., V5R). Posterior leads (V7 to V9) may be used to demonstrate the presence of a posterior myocardial infarction. The Lewis lead or S5-lead (requiring an electrode at the right sternal border in the second intercostal space) can be used to better detect atrial activity in relation to that of the ventricles. An esophageal lead can be inserted into the part of the esophagus where the distance to the posterior wall of the left atrium is only approximately 5–6 mm (remaining constant in people of different age and weight). An esophageal lead allows more accurate differentiation between certain cardiac arrhythmias, particularly atrial flutter, AV nodal reentrant tachycardia and orthodromic atrioventricular reentrant tachycardia. It can also evaluate the risk in people with Wolff-Parkinson-White syndrome, as well as terminate supraventricular tachycardia caused by re-entry. An intracardiac electrogram (ICEG) is essentially an ECG with some added intracardiac leads (that is, inside the heart). The standard ECG leads (external leads) are I, II, III, aVL, V1, and V6. Two to four intracardiac leads are added via cardiac catheterization. The word "electrogram" (EGM) without further specification usually means an intracardiac electrogram. === Lead locations on an ECG report === A standard 12-lead ECG report (an electrocardiogram) shows a 2.5-second tracing of each of the twelve leads. The tracings are most commonly arranged in a grid of four columns and three rows: the first column contains the limb leads (I, II, and III), the second column the augmented limb leads (aVR, aVL, and aVF), and the last two columns the precordial leads (V1 to V6). Additionally, a rhythm strip may be included as a fourth or fifth row. The timing across the page is continuous, tracing the 12 leads across the same overall time period. In other words, if the output were traced by needles on paper, each row would switch leads as the paper is pulled under the needle. For example, the top row would first trace lead I, then switch to lead aVR, then switch to V1, and then switch to V4, so none of these four lead tracings covers the same time period; they are traced in sequence through time.
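The column timing described above can be made concrete with a small sketch. Assuming the common 4-column layout and a 10-second recording split into four consecutive 2.5-second columns (both assumptions, since layouts vary by machine), the lead being traced in a given row at a given moment is:

```python
# Rows of the printed grid, left to right (the common 4x3 arrangement)
LAYOUT = [
    ['I',   'aVR', 'V1', 'V4'],
    ['II',  'aVL', 'V2', 'V5'],
    ['III', 'aVF', 'V3', 'V6'],
]

def lead_at(row, t_seconds, column_sec=2.5):
    """Which lead a given row is tracing at time t of the 10 s recording."""
    return LAYOUT[row][int(t_seconds // column_sec)]

print(lead_at(0, 6.0))   # 'V1': by 6 s the top row has switched I -> aVR -> V1
```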
=== Contiguity of leads === Each of the 12 ECG leads records the electrical activity of the heart from a different angle, and therefore aligns with a different anatomical area of the heart. Two leads that look at neighboring anatomical areas are said to be contiguous. In addition, any two precordial leads next to one another are considered to be contiguous. For example, though V4 is an anterior lead and V5 is a lateral lead, they are contiguous because they are next to one another. == Electrophysiology == The study of the conduction system of the heart is called cardiac electrophysiology (EP). An EP study is performed via a right-sided cardiac catheterization: a wire with an electrode at its tip is inserted into the right heart chambers from a peripheral vein, and placed in various positions in close proximity to the conduction system so that the electrical activity of that system can be recorded. Standard catheter positions for an EP study include "high right atrium" (hRA) near the sinus node, "His" across the septal wall of the tricuspid valve to measure the bundle of His, "coronary sinus" in the coronary sinus, and "right ventricle" in the apex of the right ventricle. == Interpretation == Interpretation of the ECG is fundamentally about understanding the electrical conduction system of the heart. Normal conduction starts and propagates in a predictable pattern, and deviation from this pattern can be a normal variation or be pathological. An ECG does not equate with the mechanical pumping activity of the heart; for example, pulseless electrical activity produces an ECG rhythm that would be expected to pump blood, yet no pulse is felt (this constitutes a medical emergency and CPR should be performed). Ventricular fibrillation produces an ECG but is too dysfunctional to produce a life-sustaining cardiac output. Certain rhythms are known to have good cardiac output and some are known to have bad cardiac output. Ultimately, an echocardiogram or other anatomical imaging modality is useful in assessing the mechanical function of the heart. Like all medical tests, what constitutes "normal" is based on population studies. The heart rate range of between 60 and 100 beats per minute (bpm) is considered normal, since data show this to be the usual resting heart rate. === Theory === Interpretation of the ECG is ultimately that of pattern recognition. In order to understand the patterns found, it is helpful to understand the theory of what ECGs represent. The theory is rooted in electromagnetics and boils down to the following four points: depolarization of the heart toward the positive electrode produces a positive deflection; depolarization of the heart away from the positive electrode produces a negative deflection; repolarization of the heart toward the positive electrode produces a negative deflection; and repolarization of the heart away from the positive electrode produces a positive deflection. Thus, the overall direction of depolarization and repolarization produces a positive or negative deflection on each lead's trace. For example, depolarizing from right to left would produce a positive deflection in lead I because the two vectors point in the same direction. In contrast, that same depolarization would produce minimal deflection in V1 and V2 because the vectors are perpendicular; this phenomenon is called isoelectric. Normal rhythm produces four entities – a P wave, a QRS complex, a T wave, and a U wave – that each have a fairly distinctive pattern. The P wave represents atrial depolarization. The QRS complex represents ventricular depolarization. The T wave represents ventricular repolarization. The U wave represents papillary muscle repolarization. Changes in the structure of the heart and its surroundings (including blood composition) change the patterns of these four entities. The U wave is not typically seen, and its absence is generally ignored. Atrial repolarization is typically hidden in the much more prominent QRS complex and normally cannot be seen without additional, specialized electrodes. === Background grid === ECGs are normally printed on a grid. The horizontal axis represents time and the vertical axis represents voltage. The standard values on this grid are shown in the adjacent image at 25 mm/s (i.e., 40 ms per mm): a small box is 1 mm × 1 mm and represents 0.04 seconds × 0.1 mV; a large box is 5 mm × 5 mm and represents 0.20 seconds × 0.5 mV. The "large" box is represented by a heavier line weight than the small boxes. The standard printing speed in the United States is 25 mm per second (5 large boxes per second), but in other countries it can be 50 mm per second. Faster speeds such as 100 and 200 mm per second are used during electrophysiology studies.
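The grid arithmetic lends itself to a short sketch. The following Python snippet is a minimal illustration at the standard 25 mm/s, 10 mm/mV calibration: it converts box counts to time and voltage and derives a heart rate from an R-R interval. The Bazett formula at the end is one common QTc correction; automated interpreters may use others, so treat that choice as an assumption.

```python
SMALL_BOX_SEC = 0.04    # one 1 mm box at 25 mm/s
SMALL_BOX_MV  = 0.1     # one 1 mm box at 10 mm/mV

def interval_seconds(small_boxes):
    return small_boxes * SMALL_BOX_SEC

def amplitude_mv(small_boxes):
    return small_boxes * SMALL_BOX_MV

def heart_rate_bpm(rr_small_boxes):
    # 1500 small boxes pass per minute at 25 mm/s (the "1500 rule");
    # equivalently 300 divided by the number of large boxes.
    return 1500.0 / rr_small_boxes

def qtc_bazett(qt_sec, rr_sec):
    # Bazett's correction: QTc = QT / sqrt(RR)
    return qt_sec / (rr_sec ** 0.5)

rr_boxes = 20                          # hypothetical R-R of 20 small boxes
rr = interval_seconds(rr_boxes)        # 0.8 s
print(heart_rate_bpm(rr_boxes))        # 75.0 bpm
print(round(qtc_bazett(0.40, rr), 3))  # 0.447 s for a measured QT of 0.40 s
```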
Not all aspects of an ECG rely on precise recordings or on a known scaling of amplitude or time. For example, determining whether the tracing is a sinus rhythm requires only feature recognition and matching, not measurement of amplitudes or times (i.e., the scale of the grid is irrelevant). By contrast, checking the voltage criteria for left ventricular hypertrophy requires knowing the grid scale. === Rate and rhythm === In a normal heart, the heart rate is the rate at which the sinoatrial node depolarizes, since it is the source of depolarization of the heart. Heart rate, like other vital signs such as blood pressure and respiratory rate, changes with age. In adults, a normal heart rate is between 60 and 100 bpm (normocardic), whereas it is higher in children. A heart rate below normal is called "bradycardia" (<60 in adults) and a heart rate above normal is called "tachycardia" (>100 in adults). A complication arises when the atria and ventricles are not in synchrony and the "heart rate" must be specified as atrial or ventricular (e.g., the ventricular rate in ventricular fibrillation is 300–600 bpm, whereas the atrial rate can be normal [60–100] or faster [100–150]). In normal resting hearts, the physiologic rhythm of the heart is normal sinus rhythm (NSR). Normal sinus rhythm produces the prototypical pattern of P wave, QRS complex, and T wave. Generally, deviation from normal sinus rhythm is considered a cardiac arrhythmia. Thus, the first question in interpreting an ECG is whether or not there is a sinus rhythm. A criterion for sinus rhythm is that P waves and QRS complexes appear 1-to-1, implying that the P wave causes the QRS complex. Once sinus rhythm is established, or not, the second question is the rate. For a sinus rhythm, this is either the rate of P waves or QRS complexes, since they are 1-to-1. If the rate is too fast, then it is sinus tachycardia; if it is too slow, then it is sinus bradycardia. If it is not a sinus rhythm, then determining the rhythm is necessary before proceeding with further interpretation. Some arrhythmias have characteristic findings: absent P waves with "irregularly irregular" QRS complexes are the hallmark of atrial fibrillation; a "saw tooth" pattern with QRS complexes is the hallmark of atrial flutter; a sine wave pattern is the hallmark of ventricular flutter; and absent P waves with wide QRS complexes and a fast heart rate indicate ventricular tachycardia. Determination of rate and rhythm is necessary in order to make sense of further interpretation. === Axis === The heart has several axes, but the most common by far is the axis of the QRS complex (references to "the axis" imply the QRS axis). Each axis can be computationally determined to yield a number representing degrees of deviation from zero, or it can be categorized into a few types. The QRS axis is the general direction of the ventricular depolarization wavefront (or mean electrical vector) in the frontal plane. It is often sufficient to classify the axis as one of three types: normal, left deviated, or right deviated. Population data show that a normal QRS axis is from −30° to 105°, with 0° being along lead I, positive angles pointing inferior, and negative angles pointing superior (best understood graphically as the hexaxial reference system). Beyond +105° is right axis deviation and beyond −30° is left axis deviation (the third quadrant, −90° to −180°, is very rare and represents an indeterminate axis).
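Because leads I and aVF are perpendicular in the frontal plane, a rough QRS axis can be estimated from their net QRS amplitudes. The sketch below is a simplified illustration of that idea, using the classification boundaries quoted above; the net amplitudes are hypothetical inputs, and real interpreters use more leads and more careful measurement.

```python
import math

def qrs_axis_degrees(net_I, net_aVF):
    """Estimate the frontal-plane QRS axis.

    net_I, net_aVF: net QRS amplitudes (mV) in leads I (0 degrees)
    and aVF (+90 degrees, positive pointing inferior).
    """
    return math.degrees(math.atan2(net_aVF, net_I))

def classify_axis(deg):
    if -30 <= deg <= 105:
        return "normal"
    if deg > 105:
        return "right axis deviation"
    if -90 <= deg < -30:
        return "left axis deviation"
    return "indeterminate (northwest) axis"

axis = qrs_axis_degrees(net_I=0.7, net_aVF=0.9)
print(round(axis), classify_axis(axis))   # 52 normal
```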
A shortcut for determining whether the QRS axis is normal is checking whether the QRS complex is mostly positive in lead I and lead II (or in lead I and aVF, if +90° is taken as the upper limit of normal). The normal QRS axis is generally down and to the left, following the anatomical orientation of the heart within the chest. An abnormal axis suggests a change in the physical shape and orientation of the heart, or a defect in its conduction system that causes the ventricles to depolarize in an abnormal way. The upper limit of a normal axis can be +90° or +105°, depending on the source. === Amplitudes and intervals === All of the waves on an ECG tracing and the intervals between them have a predictable time duration, a range of acceptable amplitudes (voltages), and a typical morphology. Any deviation from the normal tracing is potentially pathological and therefore of clinical significance. For ease of measuring the amplitudes and intervals, an ECG is printed on graph paper at a standard scale: each 1 mm (one small box on standard 25 mm/s ECG paper) represents 40 milliseconds of time on the x-axis and 0.1 millivolts on the y-axis. === Time-frequency analysis in ECG signal processing === In ECG signal processing, time-frequency analysis (TFA) is an important technique used to reveal how the frequency characteristics of ECG signals change over time, especially in non-stationary signals such as arrhythmias or transient cardiac events. A typical time-frequency analysis proceeds in the following steps. Step 1: Preprocessing. Signal denoising: use wavelet denoising, band-pass filtering (0.5–50 Hz), or principal component analysis (PCA) to remove electromyographic (EMG) noise. Signal segmentation: segment the signal based on heartbeat cycles (e.g., R-wave detection). Step 2: Select an appropriate TFA method, such as the short-time Fourier transform (STFT), the wavelet transform (WT), or the Hilbert–Huang transform (HHT), based on the application requirements. Step 3: Compute the time-frequency spectrum: calculate the time-frequency distribution using the selected method to generate a time-frequency representation. Step 4: Feature extraction: extract power features from specific frequency bands, such as the low-frequency (LF: 0.04–0.15 Hz) and high-frequency (HF: 0.15–0.4 Hz) components. Step 5: Pattern recognition or diagnosis: apply machine learning or deep learning models to detect or classify cardiac events based on the time-frequency features. Application scenarios include heart rate variability (HRV) analysis, where time-frequency analysis helps to separate sympathetic and parasympathetic nervous system activity; atrial fibrillation detection, by analyzing the time-frequency characteristics of atrial activity; and ventricular fibrillation analysis, by detecting time-frequency changes in high-frequency abnormal components.
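A minimal sketch of steps 1–3 above, assuming SciPy is available and using a synthetic signal in place of real ECG data (R-wave segmentation and the later feature-extraction and classification steps are omitted):

```python
import numpy as np
from scipy.signal import butter, filtfilt, stft

fs = 250                                   # assumed sampling rate (Hz)
t = np.arange(0, 10, 1 / fs)
sig = np.sin(2 * np.pi * 1.2 * t) + 0.3 * np.random.randn(t.size)

# Step 1: denoise with a 0.5-50 Hz band-pass filter (zero-phase)
b, a = butter(4, [0.5 / (fs / 2), 50 / (fs / 2)], btype='band')
filtered = filtfilt(b, a, sig)

# Steps 2-3: choose the STFT and compute the time-frequency spectrum
freqs, times, Z = stft(filtered, fs=fs, nperseg=fs)   # 1 s analysis windows
power = np.abs(Z) ** 2        # time-frequency power, input to step 4
print(power.shape)            # (frequency bins, time frames)
```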
=== Limb leads and electrical conduction through the heart === The animation shown to the right illustrates how the path of electrical conduction gives rise to the ECG waves in the limb leads. Recall that a positive current (as created by depolarization of cardiac cells) traveling towards the positive electrode and away from the negative electrode creates a positive deflection on the ECG. Likewise, a positive current traveling away from the positive electrode and towards the negative electrode creates a negative deflection on the ECG. The red arrow represents the overall direction of travel of the depolarization; its magnitude is proportional to the amount of tissue being depolarized at that instant. The red arrow is simultaneously shown on the axis of each of the 3 limb leads. Both the direction and the magnitude of the red arrow's projection onto the axis of each limb lead are shown with blue arrows, and the direction and magnitude of the blue arrows are what theoretically determine the deflections on the ECG. For example, as a blue arrow on the axis for lead I moves from the negative electrode, to the right, towards the positive electrode, the ECG line rises, creating an upward wave. As the blue arrow on the axis for lead I moves to the left, a downward wave is created. The greater the magnitude of the blue arrow, the greater the deflection on the ECG for that particular limb lead. Frames 1–3 depict the depolarization being generated in and spreading through the sinoatrial node. The SA node is too small for its depolarization to be detected on most ECGs. Frames 4–10 depict the depolarization traveling through the atria, towards the atrioventricular node. During frame 7, the depolarization is traveling through the largest amount of tissue in the atria, which creates the highest point in the P wave. Frames 11–12 depict the depolarization traveling through the AV node. Like the SA node, the AV node is too small for the depolarization of its tissue to be detected on most ECGs; this creates the flat PR segment. Frame 13 depicts an interesting phenomenon in an over-simplified fashion: it shows the depolarization as it starts to travel down the interventricular septum, through the bundle of His and the bundle branches. After the bundle of His, the conduction system splits into the left bundle branch and the right bundle branch. Both branches conduct action potentials at about 1 m/s. However, the action potential starts traveling down the left bundle branch about 5 milliseconds before it starts traveling down the right bundle branch, as depicted by frame 13. This causes the depolarization of the interventricular septum tissue to spread from left to right, as depicted by the red arrow in frame 14. In some cases, this gives rise to a negative deflection after the PR interval, creating a Q wave such as the one seen in lead I in the animation to the right. Depending on the mean electrical axis of the heart, this phenomenon can result in a Q wave in lead II as well. Following depolarization of the interventricular septum, the depolarization travels towards the apex of the heart. This is depicted by frames 15–17 and results in a positive deflection on all three limb leads, which creates the R wave. Frames 18–21 then depict the depolarization as it travels throughout both ventricles from the apex of the heart, following the action potential in the Purkinje fibers. This phenomenon creates a negative deflection in all three limb leads, forming the S wave on the ECG. Repolarization of the atria occurs at the same time as the generation of the QRS complex, but it is not detected by the ECG since the tissue mass of the ventricles is so much larger than that of the atria. Ventricular contraction occurs between ventricular depolarization and repolarization. During this time, there is no movement of charge, so no deflection is created on the ECG. This results in the flat ST segment after the S wave. Frames 24–28 in the animation depict repolarization of the ventricles. The epicardium is the first layer of the ventricles to repolarize, followed by the myocardium; the endocardium is the last layer to repolarize.
The plateau phase of depolarization has been shown to last longer in endocardial cells than in epicardial cells. This causes repolarization to start from the apex of the heart and move upwards. Since repolarization is the spread of negative current as membrane potentials decrease back down to the resting membrane potential, the red arrow in the animation points in the direction opposite to the repolarization. This therefore creates a positive deflection in the ECG: the T wave. === Ischemia and infarction === Ischemia or non-ST elevation myocardial infarctions (non-STEMIs) may manifest as ST depression or inversion of T waves. It may also affect the high-frequency band of the QRS. ST elevation myocardial infarctions (STEMIs) have different characteristic ECG findings based on the amount of time elapsed since the MI first occurred. The earliest sign is hyperacute T waves: peaked T waves due to local hyperkalemia in ischemic myocardium. This then progresses over a period of minutes to elevation of the ST segment by at least 1 mm. Over a period of hours, a pathologic Q wave may appear and the T wave will invert. Over a period of days the ST elevation will resolve. Pathologic Q waves generally remain permanently. The coronary artery that has been occluded can be identified in a STEMI based on the location of the ST elevation. The left anterior descending (LAD) artery supplies the anterior wall of the heart, and therefore causes ST elevations in the anterior leads (V1 and V2). The left circumflex artery (LCx) supplies the lateral aspect of the heart, and therefore causes ST elevations in the lateral leads (I, aVL and V6). The right coronary artery (RCA) usually supplies the inferior aspect of the heart, and therefore causes ST elevations in the inferior leads (II, III and aVF). === Artifacts === An ECG tracing is affected by patient motion. Some rhythmic motions (such as shivering or tremors) can create the illusion of cardiac arrhythmia. Artifacts are distorted signals caused by secondary internal or external sources, such as muscle movement or interference from an electrical device. Distortion poses significant challenges to healthcare providers, who employ various techniques and strategies to safely recognize these false signals. Accurately separating ECG artifact from the true ECG signal can have a significant impact on patient outcomes and legal liabilities. Improper lead placement (for example, reversing two of the limb leads) has been estimated to occur in 0.4% to 4% of all ECG recordings, and has resulted in improper diagnosis and treatment, including unnecessary use of thrombolytic therapy. === A method for interpretation === Whitbread, a consultant nurse and paramedic, suggests ten rules of the normal ECG, deviation from which is likely to indicate pathology. These have been added to, creating 15 rules for 12-lead (and 15- or 18-lead) interpretation. Rule 1: All waves in aVR are negative. Rule 2: The ST segment (J point) starts on the isoelectric line (except in V1 and V2, where it may be elevated by no more than 1 mm). Rule 3: The PR interval should be 0.12–0.2 seconds long. Rule 4: The QRS complex should not exceed 0.11–0.12 seconds. Rule 5: The QRS and T waves tend to have the same general direction in the limb leads. Rule 6: The R wave in the precordial (chest) leads grows from V1 to at least V4, where it may or may not decline again. Rule 7: The QRS is mainly upright in I and II. Rule 8: The P wave is upright in I, II, and V2 to V6.
Rule 9: There is no Q wave, or only a small q (<0.04 seconds in width), in I, II, and V2 to V6. Rule 10: The T wave is upright in I, II, and V2 to V6; the end of the T wave should not drop below the isoelectric baseline. Rule 11: Does the deepest S wave in V1 plus the tallest R wave in V5 or V6 exceed 35 mm? Rule 12: Is there an epsilon wave? Rule 13: Is there a J wave? Rule 14: Is there a delta wave? Rule 15: Are there any patterns representing an occlusive myocardial infarction (OMI)? == Diagnosis == Numerous diagnoses and findings can be made based upon electrocardiography, and many are discussed above. Overall, the diagnoses are made based on the patterns. For example, an "irregularly irregular" QRS complex without P waves is the hallmark of atrial fibrillation; however, other findings can be present as well, such as a bundle branch block that alters the shape of the QRS complexes. ECGs can be interpreted in isolation but should be applied – like all diagnostic tests – in the context of the patient. For example, an observation of peaked T waves is not sufficient to diagnose hyperkalemia; such a diagnosis should be verified by measuring the blood potassium level. Conversely, a discovery of hyperkalemia should be followed by an ECG for manifestations such as peaked T waves, widened QRS complexes, and loss of P waves. The following is an organized list of possible ECG-based diagnoses. Rhythm disturbances or arrhythmias: atrial fibrillation and atrial flutter without rapid ventricular response; premature atrial contractions (PACs) and premature ventricular contractions (PVCs); sinus arrhythmia; sinus bradycardia and sinus tachycardia; sinus pause and sinoatrial arrest; sinus node dysfunction and bradycardia-tachycardia syndrome; supraventricular tachycardia; atrial fibrillation with rapid ventricular response; atrial flutter with rapid ventricular response; AV nodal reentrant tachycardia; atrioventricular reentrant tachycardia; junctional ectopic tachycardia; atrial tachycardia; ectopic atrial tachycardia (unicentric); multifocal atrial tachycardia; paroxysmal atrial tachycardia; sinoatrial nodal reentrant tachycardia; wide complex tachycardia; ventricular flutter; ventricular fibrillation; ventricular tachycardia (monomorphic ventricular tachycardia); torsades de pointes (polymorphic ventricular tachycardia); pre-excitation syndromes: Lown–Ganong–Levine syndrome and Wolff–Parkinson–White syndrome; J wave (Osborn wave). Heart block and conduction problems: sinoatrial block (first, second, and third degree); AV node: first-degree AV block, second-degree AV block (Mobitz I [Wenckebach] and Mobitz II), and third-degree (complete) AV block; right bundle: incomplete right bundle branch block (IRBBB) and complete right bundle branch block (RBBB); left bundle: incomplete left bundle branch block (ILBBB), complete left bundle branch block (LBBB), left anterior fascicular block (LAFB), and left posterior fascicular block (LPFB); bifascicular block (LAFB plus LPFB); trifascicular block (LAFB plus LPFB plus RBBB); QT syndromes: Brugada syndrome, short QT syndrome, and long QT syndromes (genetic and drug-induced); right and left atrial abnormality. Electrolyte disturbances and intoxication: digitalis intoxication; calcium: hypocalcemia and hypercalcemia; potassium: hypokalemia and hyperkalemia; serotonin toxicity. Ischemia and infarction: Wellens' syndrome (LAD occlusion); de Winter T waves (LAD occlusion); ST elevation and ST depression; high-frequency QRS changes; myocardial infarction (heart attack): non-Q wave myocardial infarction, NSTEMI, and STEMI; Sgarbossa's criteria for
ischemia with LBBB. Structural: acute pericarditis; right and left ventricular hypertrophy; right ventricular strain or S1Q3T3 (can be seen in pulmonary embolism). Other phenomena: cardiac aberrancy; Ashman phenomenon; concealed conduction; electrical alternans. == History == In 1872, Alexander Muirhead is reported to have attached wires to the wrist of a patient with fever to obtain an electrical record of their heartbeat. In 1882, John Burdon-Sanderson, working with frogs, was the first to appreciate that the interval between variations in potential was not electrically quiescent, and coined the term "isoelectric interval" for this period. In 1887, Augustus Waller invented an ECG machine consisting of a Lippmann capillary electrometer fixed to a projector. The trace from the heartbeat was projected onto a photographic plate that was itself fixed to a toy train. This allowed a heartbeat to be recorded in real time. In 1895, Willem Einthoven assigned the letters P, Q, R, S, and T to the deflections in the theoretical waveform he created using equations which corrected the actual waveform obtained by the capillary electrometer, to compensate for the imprecision of that instrument. Using letters different from A, B, C, and D (the letters used for the capillary electrometer's waveform) facilitated comparison when the uncorrected and corrected lines were drawn on the same graph. Einthoven probably chose the initial letter P to follow the example set by Descartes in geometry. When a more precise waveform was obtained using the string galvanometer, which matched the corrected capillary electrometer waveform, he continued to use the letters P, Q, R, S, and T, and these letters are still in use today. Einthoven also described the electrocardiographic features of a number of cardiovascular disorders. In 1897, the string galvanometer was invented by the French engineer Clément Ader. In 1901, Einthoven, working in Leiden, the Netherlands, used the string galvanometer to build the first practical ECG. This device was much more sensitive than the capillary electrometer Waller used. In 1924, Einthoven was awarded the Nobel Prize in Medicine for his pioneering work in developing the ECG. By 1927, General Electric had developed a portable apparatus that could produce electrocardiograms without the use of the string galvanometer. This device instead combined amplifier tubes similar to those used in a radio with an internal lamp and a moving mirror that directed the tracing of the electric pulses onto film. In 1937, Taro Takemi invented a new portable electrocardiograph machine. In 1942, Emanuel Goldberger increased the voltage of Wilson's unipolar leads by 50% and created the augmented limb leads aVR, aVL and aVF. When added to Einthoven's three limb leads and the six chest leads, these make up the 12-lead electrocardiogram that is used today. In the late 1940s, Rune Elmqvist invented an inkjet printer – thin jets of ink deflected by electrical potentials from the heart, with good frequency response and direct recording of the ECG on paper. The device, called the Mingograf, was sold by Siemens Elema until the 1990s. === Etymology === The word is derived from the Greek electro, meaning related to electrical activity; kardia, meaning heart; and graph, meaning "to write".
== See also == Signal-averaged electrocardiogram Electrical conduction system of the heart Electroencephalography Electrogastrogram Electropalatography Electroretinography Emergency medicine Forward problem of electrocardiology Heart rate Heart rate monitor Wireless ambulatory ECG == Notes == == References == == External links == The whole ECG course on 1 A4 paper from ECGpedia, a wiki encyclopedia for a course on interpretation of ECG Wave Maven – a large database of practice ECG questions provided by Beth Israel Deaconess Medical Center PhysioBank – a free scientific database with physiologic signals (here, ECG) EKG Academy – free EKG lectures, drills and quizzes ECG Learning Center created by Eccles Health Sciences Library at University of Utah
Wikipedia/Electrocardiography
Functional imaging (or physiological imaging) is a medical imaging technique for detecting or measuring changes in metabolism, blood flow, regional chemical composition, and absorption. As opposed to structural imaging, functional imaging centers on revealing physiological activity within a certain tissue or organ by employing medical imaging modalities that very often use tracers or probes to reflect their spatial distribution within the body. These tracers are often analogous to chemical compounds found in the body, such as glucose; to achieve this, radioactive isotopes are used, because they have chemical and biological characteristics similar to those of the compounds they trace. By appropriate proportionality, nuclear medicine physicians can determine the real intensity of certain substances within the body in order to evaluate the risk of developing certain diseases. == Modalities == Positron emission tomography (PET): fludeoxyglucose for glucose metabolism, O-15 as a flow tracer Single-photon emission computed tomography (SPECT) Computed tomography (CT) perfusion imaging Functional magnetic resonance imaging (fMRI): BOLD, diffusion MRI, perfusion (blood flow), arterial spin labeling MRI, blood volume, hyperpolarized carbon-13 MRI Functional photoacoustic microscopy (fPAM) Magnetic particle imaging (MPI) Optical imaging: near-infrared spectroscopy (NIRS) == See also == Biomedical engineering Medical imaging PET-CT Radiology Functional neuroimaging == References == == External links == Scholarpedia Functional imaging
Wikipedia/Functional_imaging
Computer-aided detection (CADe), also called computer-aided diagnosis (CADx), refers to systems that assist doctors in the interpretation of medical images. Imaging techniques in X-ray, MRI, endoscopy, and ultrasound diagnostics yield a great deal of information that the radiologist or other medical professional has to analyze and evaluate comprehensively in a short time. CAD systems process digital images or videos for typical appearances and highlight conspicuous sections, such as possible diseases, in order to offer input to support a decision taken by the professional. CAD also has potential future applications in digital pathology with the advent of whole-slide imaging and machine learning algorithms. So far its application has been limited to quantifying immunostaining, but it is also being investigated for the standard H&E stain. CAD is an interdisciplinary technology combining elements of artificial intelligence and computer vision with radiological and pathology image processing. A typical application is the detection of a tumor. For instance, some hospitals use CAD to support preventive medical check-ups in mammography (diagnosis of breast cancer), the detection of polyps in colonoscopy, and the detection of lung cancer. Computer-aided detection (CADe) systems are usually confined to marking conspicuous structures and sections. Computer-aided diagnosis (CADx) systems go further and evaluate the conspicuous structures. For example, in mammography CAD highlights microcalcification clusters and hyperdense structures in the soft tissue, allowing the radiologist to draw conclusions about the condition of the pathology. Another application is CADq, which quantifies, e.g., the size of a tumor or the tumor's behavior in contrast medium uptake. Computer-aided simple triage (CAST) is another type of CAD, which performs a fully automatic initial interpretation and triage of studies into some meaningful categories (e.g., negative and positive). CAST is particularly applicable in emergency diagnostic imaging, where a prompt diagnosis of a critical, life-threatening condition is required. Although CAD has been used in clinical environments for over 40 years, it usually does not substitute for the doctor or other professional, but rather plays a supporting role. The professional (generally a radiologist) is generally responsible for the final interpretation of a medical image. However, the goal of some CAD systems is to detect the earliest signs of abnormality in patients that human professionals cannot, as in diabetic retinopathy, architectural distortion in mammograms, ground-glass nodules in thoracic CT, and non-polypoid ("flat") lesions in CT colonography. == History == In the late 1950s, with the dawn of modern computers, researchers in various fields started exploring the possibility of building computer-aided medical diagnostic (CAD) systems. These first CAD systems used flow charts, statistical pattern matching, probability theory, or knowledge bases to drive their decision-making process. In the early 1970s, some of the very early CAD systems in medicine, often referred to as "expert systems" in medicine, were developed and used mainly for educational purposes. Examples include the MYCIN expert system, the Internist-I expert system, and CADUCEUS. In these early developments, researchers aimed to build entirely automated CAD/expert systems, with expectations of computers' capabilities that were unrealistically optimistic.
However, after the breakthrough paper "Reducibility among Combinatorial Problems" by Richard M. Karp, it became clear that there were limitations, but also potential opportunities, in developing algorithms to solve groups of important computational problems. As a result of the new understanding of the various algorithmic limitations that Karp uncovered in the early 1970s, researchers started realizing the serious limitations of CAD and expert systems in medicine. The recognition of these limitations led investigators to develop new kinds of CAD systems using more advanced approaches. Thus, by the late 1980s and early 1990s, the focus shifted to the use of data mining approaches for the purpose of building more advanced and flexible CAD systems. In 1998, the first commercial CAD system for mammography, the ImageChecker system, was approved by the US Food and Drug Administration (FDA). In the following years, several commercial CAD systems for analyzing mammography, breast MRI, and medical imaging of the lung, colon, and heart also received FDA approval. Currently, CAD systems are used as a diagnostic aid to support physicians in medical decision-making. == Methodology == CAD is fundamentally based on highly complex pattern recognition. X-ray or other types of images are scanned for suspicious structures. Normally a few thousand images are required to optimize the algorithm. Digital image data are copied to a CAD server in DICOM format and are prepared and analyzed in several steps. 1. Preprocessing, for: reduction of artifacts (errors in images); image noise reduction; leveling (harmonization) of image quality, e.g., increasing contrast to correct for different exposure parameters; and filtering. 2. Segmentation, for: differentiation of the different structures in the image (e.g., heart, lung, ribcage, blood vessels, possible round lesions); matching with an anatomic databank; and sampling of gray values in the volume of interest. 3. Structure/ROI (region of interest) analysis: every detected region is analyzed individually for special characteristics: compactness; form, size, and location; relation to nearby structures/ROIs; average gray-level value within the ROI; and proportion of gray levels relative to the border of the structure inside the ROI. 4. Evaluation/classification: after the structure is analyzed, every ROI is evaluated individually (scored) for the probability of a true positive. Examples of classification algorithms include: the nearest-neighbor rule (e.g., k-nearest neighbors), minimum distance classifier, cascade classifier, naive Bayes classifier, artificial neural network, radial basis function network (RBF), support vector machine (SVM), and principal component analysis (PCA). If the detected structures have reached a certain threshold level, they are highlighted in the image for the radiologist. Depending on the CAD system, these markings can be saved permanently or temporarily. The latter's advantage is that only the markings approved by the radiologist are saved; false hits should not be saved, because they would make an examination at a later date more difficult. == Relation to provider metrics == === Sensitivity and specificity === CAD systems seek to highlight suspicious structures. Today's CAD systems cannot detect 100% of pathological changes. The hit rate (sensitivity) can be up to 90%, depending on the system and application. A correct hit is termed a true positive (TP), while the incorrect marking of healthy sections constitutes a false positive (FP).
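These per-finding counts translate directly into the metrics discussed in this section. The snippet below is a minimal sketch with hypothetical counts, not output from any particular CAD product:

```python
def sensitivity(tp, fn):
    # Hit rate: lesions found / all true lesions
    return tp / (tp + fn)

def specificity(tn, fp):
    # Healthy sections correctly left unmarked / all healthy sections
    return tn / (tn + fp)

def fp_per_exam(total_fp, n_exams):
    # The per-examination false-positive rate quoted for CAD systems
    return total_fp / n_exams

print(sensitivity(tp=90, fn=10))               # 0.9, i.e. "up to 90%"
print(fp_per_exam(total_fp=200, n_exams=100))  # 2.0 FPs per examination
```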
The fewer FPs indicated, the higher the specificity. A low specificity reduces the acceptance of the CAD system, because the user has to identify all of these wrong hits. The FP rate in lung overview examinations (CAD chest) could be reduced to 2 per examination. In other segments (e.g., CT lung examinations) the FP rate could be 25 or more. In CAST systems, the FP rate must be extremely low (less than 1 per examination) to allow meaningful study triage. === Absolute detection rate === The absolute detection rate of a radiologist is an alternative metric to sensitivity and specificity. Overall, results of clinical trials about sensitivity, specificity, and the absolute detection rate can vary markedly. Each study result depends on its basic conditions and has to be evaluated on those terms. The following factors have a strong influence: retrospective or prospective design; quality of the images used; condition of the x-ray examination; the radiologist's experience and education; the type of lesion; and the size of the considered lesion. == Challenges == Despite the many developments that CAD has achieved since the dawn of computers, there are still certain challenges that CAD systems face today. Some challenges are related to various algorithmic limitations in the procedures of a CAD system, including input data collection, preprocessing, processing, and system assessment. Algorithms are generally designed to select a single likely diagnosis, thus providing suboptimal results for patients with multiple, concurrent disorders. Today, input data for CAD mostly come from electronic health records (EHR). Effective design, implementation, and analysis of EHR data is a major necessity for any CAD system. Due to the massive availability of data and the need to analyze such data, big data is also one of the biggest challenges that CAD systems face today. The increasingly vast amount of patient data is a serious problem: patient data are often complex and can be semi-structured or unstructured, and highly developed approaches are required to store, retrieve, and analyze them in reasonable time. During the preprocessing stage, input data must be normalized; this normalization includes noise reduction and filtering (a small sketch follows below). Processing may contain a few sub-steps depending on the application. The three basic sub-steps in medical imaging are segmentation, feature extraction/selection, and classification. These sub-steps require advanced techniques to analyze input data in less computational time. Although much effort has been devoted to creating innovative techniques for these procedures of CAD systems, no single best algorithm has emerged for any individual step; ongoing research into innovative algorithms for all aspects of CAD systems is therefore essential. There is also a lack of standardized assessment measures for CAD systems. This makes it difficult to obtain approval for commercial use from governing bodies such as the FDA. Moreover, while many positive developments of CAD systems have been demonstrated, studies validating their algorithms for clinical practice are often lacking. Other challenges relate to the difficulty healthcare providers face in adopting new CAD systems in clinical practice. Some negative studies may discourage the use of CAD. In addition, the lack of training of health professionals in the use of CAD sometimes leads to incorrect interpretation of system outcomes.
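As a concrete illustration of the normalization and filtering mentioned in the preprocessing discussion above, here is a minimal sketch (assuming NumPy and SciPy are available; the filter size and z-score scheme are illustrative choices, not a prescribed CAD pipeline):

```python
import numpy as np
from scipy.ndimage import median_filter

def preprocess(image: np.ndarray) -> np.ndarray:
    """Denoise with a median filter, then z-score normalize intensities."""
    denoised = median_filter(image, size=3)    # simple noise reduction
    mean, std = denoised.mean(), denoised.std()
    return (denoised - mean) / (std + 1e-8)    # zero mean, unit variance

# Hypothetical noisy image
noisy = np.random.default_rng(0).normal(100.0, 20.0, size=(64, 64))
print(round(float(preprocess(noisy).std()), 2))   # ~1.0
```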
== Applications == CAD is used in the diagnosis of breast cancer, lung cancer, colon cancer, prostate cancer, bone metastases, coronary artery disease, congenital heart defects, pathological brain detection, fracture detection, Alzheimer's disease, and diabetic retinopathy. === Breast cancer === CAD is used in screening mammography (X-ray examination of the female breast). Screening mammography is used for the early detection of breast cancer. CAD systems are often utilized to help classify a tumor as malignant (cancerous) or benign (non-cancerous). CAD is especially established in the US and the Netherlands and is used in addition to human evaluation, usually by a radiologist. The first CAD system for mammography was developed in a research project at the University of Chicago. Today it is commercially offered by iCAD and Hologic. However, while achieving high sensitivities, CAD systems tend to have very low specificity, and the benefits of using CAD remain uncertain. A 2008 systematic review on computer-aided detection in screening mammography concluded that CAD does not have a significant effect on cancer detection rate, but does undesirably increase the recall rate (i.e., the rate of false positives); however, it noted considerable heterogeneity in the impact on recall rate across studies. Recent advances in machine learning, deep learning, and artificial intelligence technology have enabled the development of CAD systems that are clinically proven to assist radiologists in addressing the challenges of reading mammographic images by improving cancer detection rates and reducing false positives and unnecessary patient recalls, while significantly decreasing reading times. Procedures to evaluate mammography based on magnetic resonance imaging (MRI) exist too. === Lung cancer (bronchial carcinoma) === In the diagnosis of lung cancer, computed tomography with special three-dimensional CAD systems is established and considered an appropriate second opinion. For this, a volumetric dataset with up to 3,000 single images is prepared and analyzed. Round lesions (lung cancer, metastases, and benign changes) from 1 mm are detectable. Today all well-known vendors of medical systems offer corresponding solutions. Early detection of lung cancer is valuable; however, the incidental detection of early-stage (stage 1) lung cancer in the X-ray image is difficult, as round lesions of 5–10 mm are easily overlooked. The routine application of CAD chest systems may help to detect small changes without initial suspicion. A number of researchers have developed CAD systems for the detection of lung nodules (round lesions less than 30 mm) in chest radiography and CT, and CAD systems for the diagnosis (e.g., distinction between malignant and benign) of lung nodules in CT. Virtual dual-energy imaging has improved the performance of CAD systems in chest radiography. === Colon cancer === CAD is available for the detection of colorectal polyps in the colon in CT colonography. Polyps are small growths that arise from the inner lining of the colon. CAD detects the polyps by identifying their characteristic "bump-like" shape. To avoid excessive false positives, CAD ignores the normal colon wall, including the haustral folds. === Cardiovascular disease === State-of-the-art methods in cardiovascular computing, cardiovascular informatics, and mathematical and computational modeling can provide valuable tools in clinical decision-making.
CAD systems that take novel image-analysis-based markers as input can help vascular physicians decide with higher confidence on the most suitable treatment for cardiovascular disease patients. Reliable early detection and risk stratification of carotid atherosclerosis is of utmost importance for predicting strokes in asymptomatic patients. To this end, various noninvasive and low-cost markers have been proposed, using ultrasound-image-based features. These combine echogenicity, texture, and motion characteristics to assist clinical decision-making towards improved prediction, assessment, and management of cardiovascular risk. CAD is also available for the automatic detection of significant (causing more than 50% stenosis) coronary artery disease in coronary CT angiography (CCTA) studies. === Congenital heart defect === Early detection of pathology can be the difference between life and death. CADe can be done by auscultation with a digital stethoscope and specialized software, also known as computer-aided auscultation. Murmurs (irregular heart sounds caused by blood flowing through a defective heart) can be detected with high sensitivity and specificity. Computer-aided auscultation is sensitive to external noise and bodily sounds and requires an almost silent environment to function accurately. === Pathological brain detection (PBD) === Chaplot et al. were the first to use discrete wavelet transform (DWT) coefficients to detect pathological brains. Maitra and Chatterjee employed the Slantlet transform, an improved version of the DWT; the feature vector of each image is created by considering the magnitudes of Slantlet transform outputs corresponding to six spatial positions chosen according to a specific logic. In 2010, Wang and Wu presented a feed-forward neural network (FNN)-based method to classify a given MR brain image as normal or abnormal, with the parameters of the FNN optimized via adaptive chaotic particle swarm optimization (ACPSO). Results over 160 images showed that the classification accuracy was 98.75%. In 2011, Wu and Wang proposed using the DWT for feature extraction, PCA for feature reduction, and an FNN with a scaled chaotic artificial bee colony (SCABC) as the classifier. In 2013, Saritha et al. were the first to apply wavelet entropy (WE) to detect pathological brains; Saritha also suggested the use of spider-web plots, although Zhang et al. later showed that removing the spider-web plots did not influence performance. A genetic pattern search method was applied to identify abnormal brains from normal controls; its classification accuracy was reported as 95.188%. Das et al. proposed using the Ripplet transform. Zhang et al. proposed using particle swarm optimization (PSO). Kalbkhani et al. suggested using a GARCH model. In 2014, El-Dahshan et al. suggested the use of a pulse-coupled neural network. In 2015, Zhou et al. suggested applying a naive Bayes classifier to detect pathological brains. === Alzheimer's disease === CAD can be used to identify subjects with Alzheimer's disease (AD) and mild cognitive impairment from normal elderly controls. In 2014, Padma et al. used combined wavelet statistical texture features to segment and classify AD benign and malignant tumor slices. Zhang et al. found that a kernel support vector machine decision tree had 80% classification accuracy, with an average computation time of 0.022 s per image classification. In 2019, Signaevsky et al.
=== Alzheimer's disease === CAD systems can be used to identify subjects with Alzheimer's disease (AD) and mild cognitive impairment from normal elderly controls. In 2014, Padma et al. used combined wavelet statistical texture features to segment and classify AD benign and malignant tumor slices. Zhang et al. found that a kernel support vector machine decision tree had 80% classification accuracy, with an average computation time of 0.022 s for each image classification. In 2019, Signaevsky et al. first reported a trained Fully Convolutional Network (FCN) for detection and quantification of neurofibrillary tangles (NFT) in Alzheimer's disease and an array of other tauopathies. The trained FCN achieved high precision and recall in naive digital whole slide image (WSI) semantic segmentation, correctly identifying NFT objects using a SegNet model trained for 200 epochs. The FCN reached near-practical efficiency with an average processing time of 45 min per WSI per graphics processing unit (GPU), enabling reliable and reproducible large-scale detection of NFTs. The measured performance on test data of eight naive WSI across various tauopathies resulted in recall, precision, and F1 scores of 0.92, 0.72, and 0.81, respectively. Eigenbrain is a novel brain feature that can help to detect AD, based on principal component analysis (PCA) or independent component analysis decomposition. Polynomial kernel SVM has been shown to achieve good accuracy, performing better than linear SVM and RBF-kernel SVM. Other approaches with decent results involve the use of texture analysis, morphological features, or high-order statistical features. === Nuclear medicine === CADx is available for nuclear medicine images. Commercial CADx systems for the diagnosis of bone metastases in whole-body bone scans and coronary artery disease in myocardial perfusion images exist. With a high sensitivity and an acceptable false-lesion detection rate, computer-aided automatic lesion detection systems have been demonstrated to be useful and will probably, in the future, be able to help nuclear medicine physicians identify possible bone lesions. === Diabetic retinopathy === Diabetic retinopathy is a disease of the retina that is diagnosed predominantly by fundoscopic images. Diabetic patients in industrialised countries generally undergo regular screening for the condition. Imaging is used to recognize early signs of abnormal retinal blood vessels. Manual analysis of these images can be time-consuming and unreliable. CAD has been employed to enhance the accuracy, sensitivity, and specificity of automated detection methods. The use of some CAD systems to replace human graders can be safe and cost effective. Image pre-processing and feature extraction and classification are the two main stages of these CAD algorithms. ==== Pre-processing methods ==== Image normalization minimizes the variation across the entire image. Intensity variations in areas between the periphery and the central macular region of the eye have been reported to cause inaccuracy in vessel segmentation. According to the 2014 review, this technique was the most frequently used and appeared in 11 out of 40 recently (since 2011) published primary research articles. Histogram equalization is useful in enhancing contrast within an image. This technique is used to increase local contrast. At the end of the processing, areas that were dark in the input image are brightened, greatly enhancing the contrast among the features present in the area. On the other hand, brighter areas in the input image remain bright or are reduced in brightness to equalize with the other areas in the image. Besides vessel segmentation, other features related to diabetic retinopathy can be further separated by using this pre-processing technique. Microaneurysms and hemorrhages are red lesions, whereas exudates are yellow spots. Increasing contrast between these two groups allows better visualization of lesions on images. According to the 2014 review, this technique appeared in 10 out of the 14 recently (since 2011) published primary research articles.
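As a concrete illustration of the equalization step just described, the following minimal sketch applies global and local (adaptive) histogram equalization to a fundus image with scikit-image; the filename and clip limit are illustrative assumptions, not values taken from the reviewed studies.

```python
# A minimal sketch of the contrast-enhancement step described above, using
# scikit-image. equalize_hist performs global histogram equalization, while
# equalize_adapthist (CLAHE) increases local contrast.
from skimage import io, exposure

fundus = io.imread("fundus.png", as_gray=True)   # hypothetical input image
globally_equalized = exposure.equalize_hist(fundus)
locally_equalized = exposure.equalize_adapthist(fundus, clip_limit=0.02)
```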
Green channel filtering is another technique that is useful in differentiating lesions from vessels. This method is important because it provides maximal contrast for diabetic retinopathy-related lesions. Microaneurysms and hemorrhages are red lesions that appear dark after application of green channel filtering. In contrast, exudates, which appear yellow in the normal image, are transformed into bright white spots after green filtering. This technique is the most frequently used according to the 2014 review, appearing in 27 out of 40 articles published in the past three years. In addition, green channel filtering can be used to detect the center of the optic disc in conjunction with a double-windowing system. Non-uniform illumination correction is a technique that adjusts for non-uniform illumination in the fundoscopic image. Non-uniform illumination can be a potential source of error in automated detection of diabetic retinopathy because it changes the statistical characteristics of the image. These changes can affect later processing, such as feature extraction, and are not observable by humans. Correction of non-uniform illumination (f′) can be achieved by modifying the pixel intensity using the known original pixel intensity (f) and the average intensities of local (λ) and desired pixels (μ) (see formula below). The Walter-Klein transformation is then applied to achieve uniform illumination. This technique is the least used pre-processing method in the review from 2014. f ′ = f + μ − λ {\displaystyle f'=f+\mu -\lambda } Morphological operations are the second least used pre-processing method in the 2014 review. The main objective of this method is to provide contrast enhancement, especially of darker regions compared to the background. ==== Feature extractions and classifications ==== After pre-processing of the funduscopic image, the image will be further analyzed using different computational methods. However, the current literature agrees that some methods are used more often than others during vessel segmentation analyses. These methods are the SVM, multi-scale, vessel-tracking, region-growing, and model-based approaches. The support vector machine is by far the most frequently used classifier in vessel segmentation, used in up to 90% of cases. SVM is a supervised learning model that belongs to the broader category of pattern recognition techniques. The algorithm works by creating the largest possible gap between distinct samples in the data, which minimizes the potential error in classification. In order to successfully segregate blood vessel information from the rest of the eye image, the SVM algorithm creates support vectors that separate the blood vessel pixels from the rest of the image in a supervised setting. Detecting blood vessels in new images can then be done in a similar manner using the support vectors. Combination with other pre-processing techniques, such as green channel filtering, greatly improves the accuracy of detection of blood vessel abnormalities. Some beneficial properties of SVM include flexibility (it is highly flexible in terms of function) and simplicity (it remains simple even with large datasets, since only the support vectors are needed to create the separation between classes).
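A minimal sketch of the SVM-based vessel-pixel classification described above follows. Each pixel is represented by a small hand-picked feature vector, and a manually labeled vessel mask is assumed to be available for training; the feature choice, neighborhood size, and subsampling are illustrative, not taken from any particular study.

```python
# A minimal sketch of SVM-based vessel-pixel classification. Each pixel gets
# two features: its green-channel intensity and its deviation from the local
# background mean. A labeled training image with a manual vessel mask is assumed.
import numpy as np
from scipy.ndimage import uniform_filter
from sklearn.svm import SVC

def pixel_features(rgb_image):
    green = rgb_image[..., 1].astype(float)
    local_mean = uniform_filter(green, size=15)
    # Feature 1: green intensity; feature 2: deviation from local background.
    return np.stack([green.ravel(), (green - local_mean).ravel()], axis=1)

def train_vessel_svm(train_image, vessel_mask):
    X = pixel_features(train_image)
    y = vessel_mask.ravel().astype(int)   # 1 = vessel pixel, 0 = background
    clf = SVC(kernel="rbf")
    # Subsample pixels: training an SVM on every pixel is usually impractical.
    idx = np.random.default_rng(0).choice(len(y), size=5000, replace=False)
    clf.fit(X[idx], y[idx])
    return clf

def segment(clf, image):
    labels = clf.predict(pixel_features(image))
    return labels.reshape(image.shape[:2])   # binary vessel map
```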
The multi-scale approach is a multiple-resolution approach to vessel segmentation. At low resolution, large-diameter vessels can first be extracted; by increasing the resolution, smaller branches of the large vessels can be easily recognized. Therefore, one advantage of using this technique is its increased analytical speed. Additionally, this approach can be used with 3D images, where a surface representation normal to the curvature of the vessels allows the detection of abnormalities on the vessel surface. Vessel tracking is the ability of the algorithm to detect the "centerline" of vessels. These centerlines are the maximal peaks of vessel curvature. Centers of vessels can be found using directional information provided by a Gaussian filter. Similar approaches that utilize the concept of a centerline are the skeleton-based and differential-geometry-based methods. The region growing approach is a method of detecting neighboring pixels with similarities; it requires a seed point to start (a worked sketch follows this subsection). Two elements are needed for this technique to work: similarity and spatial proximity. A neighboring pixel to the seed pixel with similar intensity is likely to be the same type and will be added to the growing region. One disadvantage of this technique is that it requires manual selection of the seed point, which introduces bias and inconsistency into the algorithm. This technique is also used in optic disc identification. Model-based approaches employ representations to extract vessels from images. Three broad categories of model-based approaches are known: deformable, parametric, and template matching. Deformable methods use objects that will be deformed to fit the contours of the objects in the image. Parametric methods use geometric parameters such as tubular, cylinder, or ellipsoid representations of blood vessels. A classical snake contour in combination with blood vessel topological information can also be used as a model-based approach. Lastly, template matching is the usage of a template, fitted by a stochastic deformation process using a Hidden Markov Model.
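The following is the worked sketch of region growing referred to above: starting from a manually chosen seed pixel, 4-connected neighbors are added while their intensity stays close to the running mean of the region. The tolerance and connectivity are illustrative choices.

```python
# A minimal sketch of the region-growing approach described above.
import numpy as np
from collections import deque

def region_grow(image, seed, tolerance=10.0):
    h, w = image.shape
    region = np.zeros((h, w), dtype=bool)
    queue = deque([seed])
    region[seed] = True
    total, count = float(image[seed]), 1
    while queue:
        y, x = queue.popleft()
        for ny, nx in ((y-1, x), (y+1, x), (y, x-1), (y, x+1)):
            if 0 <= ny < h and 0 <= nx < w and not region[ny, nx]:
                # Similarity criterion: close to the running mean of the region.
                if abs(float(image[ny, nx]) - total / count) <= tolerance:
                    region[ny, nx] = True
                    total += float(image[ny, nx])
                    count += 1
                    queue.append((ny, nx))
    return region
```

The manual choice of `seed` is exactly the disadvantage noted above: two different seed points inside the same structure can produce slightly different regions.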
== Effects on employment == Automation of medical diagnosis labor (for example, quantifying red blood cells) has some historical precedent. The deep learning revolution of the 2010s has already produced AI systems that are more accurate in many areas of visual diagnosis than radiologists and dermatologists, and this gap is expected to grow. Some experts, including many doctors, are dismissive of the effects that AI will have on medical specialties. In contrast, many economists and artificial intelligence experts believe that fields such as radiology will be massively disrupted, with unemployment or downward pressure on the wages of radiologists; hospitals will need fewer radiologists overall, and many of the radiologists who still exist will require substantial retraining. Geoffrey Hinton, the "Godfather of deep learning", argues that in light of the likely advances expected in the next five or ten years, hospitals should immediately stop training radiologists, as their time-consuming and expensive training in visual diagnosis will soon be mostly obsolete, leading to a glut of traditional radiologists. An op-ed in JAMA argues that pathologists and radiologists should merge into a single "information specialist" role, and states that "To avoid being replaced by computers, radiologists must allow themselves to be displaced by computers." Information specialists would be trained in "Bayesian logic, statistics, data science", and some genomics and biometrics; manual visual pattern recognition would be greatly de-emphasized compared with current onerous radiology training. == See also == Computerized Systems Used In Clinical Trials Diagnostic robot == Footnotes == == References == == External links == Digital Retinal Images for Vessel Extraction (DRIVE) STructured Analysis of the REtina (STARE) High-Resolution Fundus (HRF) Image Database
Wikipedia/Computer-aided_diagnosis
Mammography (also called mastography; DICOM modality: MG) is the process of using low-energy X-rays (usually around 30 kVp) to examine the human breast for diagnosis and screening. The goal of mammography is the early detection of breast cancer, typically through detection of characteristic masses, microcalcifications, asymmetries, and distortions. As with all X-rays, mammograms use doses of ionizing radiation to create images. These images are then analyzed for abnormal findings. It is usual to employ lower-energy X-rays than those used for radiography of bones, typically Mo (K-shell X-ray energies of 17.5 and 19.6 keV) and Rh (20.2 and 22.7 keV). Mammography may be 2D or 3D (tomosynthesis), depending on the available equipment or the purpose of the examination. Ultrasound, ductography, positron emission mammography (PEM), and magnetic resonance imaging (MRI) are adjuncts to mammography. Ultrasound is typically used for further evaluation of masses found on mammography or palpable masses that may or may not be seen on mammograms. Ductograms are still used in some institutions for evaluation of bloody nipple discharge when the mammogram is non-diagnostic. MRI can be useful for the screening of high-risk patients, for further evaluation of questionable findings or symptoms, as well as for pre-surgical evaluation of patients with known breast cancer, in order to detect additional lesions that might change the surgical approach (for example, from breast-conserving lumpectomy to mastectomy). In 2023, the U.S. Preventive Services Task Force issued a draft recommendation statement that all women should receive a screening mammography every two years from age 40 to 74. The American College of Radiology, Society of Breast Imaging, and American Cancer Society recommend yearly screening mammography starting at age 40. The Canadian Task Force on Preventive Health Care (2012) and the European Cancer Observatory (2011) recommend mammography every 2 to 3 years between ages 50 and 69. These task force reports point out that in addition to unnecessary surgery and anxiety, the risks of more frequent mammograms include a small but significant increase in breast cancer induced by radiation. Additionally, mammograms should not be performed with increased frequency in patients undergoing breast surgery, including breast enlargement, mastopexy, and breast reduction. == Types == === Digital === Digital mammography is a specialized form of mammography that uses digital receptors and computers instead of X-ray film to help examine breast tissue for breast cancer. The electrical signals can be read on computer screens, permitting more manipulation of images to allow radiologists to view the results more clearly. The standard digital mammography is "full field" (FFDM), in which the entire breast is imaged in a single view. Digital mammography can also include the use of "spot views", in which a paddle is used to further compress areas of concern. Digital mammography is also utilized in stereotactic biopsy. Breast biopsy may also be performed using a different modality, such as ultrasound or magnetic resonance imaging (MRI). While radiologists had hoped for more marked improvement, the effectiveness of digital mammography was found comparable to traditional X-ray methods in 2004, though there may be reduced radiation with the technique and it may lead to fewer retests. Specifically, it performs no better than film for post-menopausal women, who represent more than three-quarters of women with breast cancer.
The U.S. Preventive Services Task Force concluded that there was insufficient evidence to recommend for or against digital mammography over basic film mammography for breast cancer screening. Digital mammography is a NASA spin-off, utilizing technology developed for the Hubble Space Telescope. As of 2022, over 99% of certified mammography screening centers in the United States use digital mammography. Globally, systems by Fujifilm Corporation are the most widely used. In the United States, GE's digital imaging units typically cost US$300,000 to $500,000, far more than film-based imaging systems. Costs may decline as GE begins to compete with the less expensive Fuji systems. === 3D mammography === Three-dimensional mammography, also known as digital breast tomosynthesis (DBT), tomosynthesis, and 3D breast imaging, is a mammogram technology that creates a 3D view of the breast using X-rays from different angles. Supplementing standard 2D mammography with DBT has been shown to improve cancer detection. Cost effectiveness was unclear as of 2016. Another concern is that it more than doubles the radiation exposure. === Contrast-enhanced mammography === Contrast-enhanced mammography is an advanced imaging technique that employs iodinated contrast agents to visualize breast neovascularization, functioning similarly to magnetic resonance imaging. Tumor-associated angiogenesis often results in leaky blood vessels, allowing contrast material to accumulate within the tumor tissue and produce an iodine-enhanced image. This enhances the visibility of malignancies that might otherwise be obscured by dense breast tissue. Contrast-enhanced mammography is also referred to as contrast-enhanced spectral mammography, contrast-enhanced digital mammography, or contrast-enhanced dual-energy mammography. A large randomized controlled trial published in The Lancet in 2025 found that contrast-enhanced mammography detects significantly more invasive breast cancers in women with dense breast tissue than standard mammography or ultrasound. Conducted across 10 U.K. screening sites with over 9,000 participants, the study reported that contrast-enhanced mammography identified 15.7 invasive cancers per 1,000 exams, compared to 4.2 for ultrasound and 15 for MRI, with no statistically significant difference between contrast-enhanced mammography and MRI. CEM was also found to be more cost-effective and accessible than MRI. Advocates suggest contrast-enhanced mammography could improve early detection and outcomes for women with dense breasts, but acknowledge risks of overdiagnosis. === Photon counting === Photon-counting mammography was introduced commercially in 2003 and was shown to reduce the X-ray dose to the patient by approximately 40% compared to conventional methods while maintaining image quality at an equal or higher level. The technology was subsequently developed to enable spectral imaging with the possibility to further improve image quality, to distinguish between different tissue types, and to measure breast density. === Galactography === A galactography (or breast ductography) is a now infrequently used type of mammography used to visualize the milk ducts. Prior to the mammography itself, a radiopaque substance is injected into the duct system. This test is indicated when nipple discharge exists. == Medical uses == Mammography can detect cancer early, when it is most treatable and can be treated less invasively (thereby helping to preserve quality of life).
According to National Cancer Institute data, since mammography screening became widespread in the mid-1980s, the U.S. breast cancer death rate, unchanged for the previous 50 years, has dropped well over 30 percent. In European countries like Denmark and Sweden, where mammography screening programs are more organized, the breast cancer death rate has been cut almost in half over the last 20 years. Mammography screening cuts the risk of dying from breast cancer nearly in half. A recent study published in Cancer showed that more than 70 percent of the women who died from breast cancer in their 40s at major Harvard teaching hospitals were among the 20 percent of women who were not being screened. Some scientific studies have shown that the most lives are saved by screening beginning at age 40. A recent study in the British Medical Journal shows that early detection of breast cancer – as with mammography – significantly improves breast cancer survival. The benefits of mammography screening in decreasing breast cancer mortality found in randomized trials are not found in observational studies performed long after implementation of breast cancer screening programs (for instance, Bleyer et al.). == When to start screening == In 2014, the Surveillance, Epidemiology, and End Results Program of the National Institutes of Health reported the occurrence rates of breast cancer per 1,000 women in different age groups. In the 40–44 age group, the incidence was 1.5, and in the 45–49 age group, the incidence was 2.3. In the older age groups, the incidence was 2.7 in the 50–54 age group and 3.2 in the 55–59 age group. While screening between ages 40 and 50 is somewhat controversial, the preponderance of the evidence indicates that there is a benefit in terms of early detection. Currently, the American Cancer Society, the American Congress of Obstetricians and Gynecologists (ACOG), the American College of Radiology, and the Society of Breast Imaging encourage annual mammograms beginning at age 40. The National Cancer Institute encourages mammograms every one to two years for women ages 40 to 49. In 2023, the United States Preventive Services Task Force (USPSTF) revised its recommendation, advising that women and transgender men undergo biennial mammograms starting at age 40, rather than at the previously suggested age of 50. This adjustment was prompted by the increasing incidence of breast cancer in the 40 to 49 age group over the past decade. In contrast, the American College of Physicians, a large internal medicine group, has recently encouraged individualized screening plans as opposed to wholesale biennial screening of women aged 40 to 49. The American Cancer Society recommendation for women at average risk of breast cancer is a yearly mammogram from age 45 to 54, with an optional yearly mammogram from age 40 to 44. == Screening for high-risk population == Women who are at high risk for early-onset breast cancer have separate recommendations for screening. These include those who: Have a known BRCA1 or BRCA2 gene mutation. Have a 1st-degree relative (parent, brother, sister, or child), 2nd-degree relative (aunts, uncles, nieces, or grandparents), or 3rd-degree relative with a known BRCA1 or BRCA2 gene mutation.
Have a lifetime risk of breast cancer greater than 20% according to risk assessment tools. Have a history of radiation therapy to the chest between 10 and 30 years of age. Have, or have a 1st-degree relative with, a genetic syndrome such as Li-Fraumeni syndrome, Cowden syndrome, or Bannayan-Riley-Ruvalcaba syndrome. The American College of Radiology recommends that these individuals get annual mammography starting at the age of 30. Those with a history of chest radiation therapy before age 30 should start annual screening at age 25 or 8 years after their latest therapy, whichever is later. The American Cancer Society also recommends that women at high risk get a mammogram and breast MRI every year beginning at age 30, or at an age recommended by their healthcare provider. The National Comprehensive Cancer Network advocates screening for women who possess a BRCA1 or BRCA2 mutation or have a first-degree relative with such a mutation, even in the absence of the patient being tested for BRCA1/2 mutations. For women at high risk, the network recommends undergoing an annual mammogram and breast MRI between the ages of 25 and 40, considering the specific gene mutation type or the youngest age of breast cancer occurrence in the family. Additionally, the network suggests that high-risk women undergo clinical breast exams every 6 to 12 months starting at age 25. These individuals should also engage in discussions with healthcare providers to assess the advantages and disadvantages of 3D mammography and acquire knowledge on detecting changes in their breasts. == Adverse effects == === Radiation === The radiation exposure associated with mammography is a potential risk of screening, which appears to be greater in younger women. Women who have received 0.25–20 gray (Gy) of radiation have a more elevated risk of developing breast cancer. A study of radiation risk from mammography concluded that for women 40 years of age and older, the risk of radiation-induced breast cancer was minuscule, particularly compared with the potential benefit of mammographic screening, with a benefit-to-risk ratio of 48.5 lives saved for each life lost due to radiation exposure. This also correlates with a 24% decrease in breast cancer mortality rates. === Pain === The mammography procedure can be painful. Reported pain rates range from 6–76%, with 23–95% of women experiencing pain or discomfort. Experiencing pain is a significant predictor of women not re-attending screening. There are few proven interventions to reduce pain in mammography, but evidence suggests that giving women information about the mammography procedure before it takes place may reduce the pain and discomfort experienced. Furthermore, research has found that standardised compression levels can help to reduce patients' pain while still allowing optimal diagnostic images to be produced. == Procedure == During the procedure, the breast is compressed using a dedicated mammography unit. Parallel-plate compression evens out the thickness of breast tissue to increase image quality by reducing the thickness of tissue that X-rays must penetrate, decreasing the amount of scattered radiation (scatter degrades image quality), reducing the required radiation dose, and holding the breast still (preventing motion blur). In screening mammography, both head-to-foot (craniocaudal, CC) view and angled side-view (mediolateral oblique, MLO) images of the breast are taken.
Diagnostic mammography may include these and other views, including geometrically magnified and spot-compressed views of the particular area of concern. Deodorant, talcum powder or lotion may show up on the X-ray as calcium spots, so women are discouraged from applying them on the day of their exam. There are two types of mammogram studies: screening mammograms and diagnostic mammograms. Screening mammograms, consisting of four standard X-ray images, are performed yearly on patients who present with no symptoms. Diagnostic mammograms are reserved for patients with breast symptoms (such as palpable lumps, breast pain, skin changes, nipple changes, or nipple discharge), as follow-up for probably benign findings (coded BI-RADS 3), or for further evaluation of abnormal findings seen on their screening mammograms. Diagnostic mammograms may also be performed on patients with personal or family histories of breast cancer. Patients with breast implants and other stable benign surgical histories generally do not require diagnostic mammograms. Until some years ago, mammography was typically performed with screen-film cassettes. Today, mammography is undergoing a transition to digital detectors, known as digital mammography or Full Field Digital Mammography (FFDM). The first FFDM system was approved by the FDA in the U.S. in 2000. This progress is occurring some years later than in general radiology, due to several factors: the higher spatial resolution demands of mammography; the significantly increased expense of the equipment; and concern by the FDA that digital mammography equipment demonstrate that it is at least as good as screen-film mammography at detecting breast cancers without increasing dose or the number of women recalled for further evaluation. As of March 1, 2010, 62% of facilities in the United States and its territories had at least one FFDM unit. (The FDA includes computed radiography units in this figure.) Tomosynthesis, otherwise known as 3D mammography, was first introduced in clinical trials in 2008 and has been Medicare-approved in the United States since 2015. As of 2023, 3D mammography has become widely available in the US and has been shown to have improved sensitivity and specificity over 2D mammography. Mammograms are looked at by either one (single reading) or two (double reading) trained professionals: these film readers are generally radiologists, but may also be radiographers, radiotherapists, or breast clinicians (non-radiologist physicians specializing in breast disease). Double reading significantly improves the sensitivity and specificity of the procedure, and is standard practice in the United Kingdom, but not in the United States, as it is not reimbursed by Medicare or private health insurance. This is despite multiple trials showing increased accuracy of detection and improved patient outcomes for both morbidity and mortality when double reading is employed. Clinical decision support systems may be used with digital mammography (or digitized images from analogue mammography), but studies suggest these approaches do not significantly improve performance or provide only a small improvement. === Interpretation of results === ==== BI-RADS ==== Stratification for breast cancer risk on a mammogram is based on a reporting system known as the Breast Imaging-Reporting and Data System (BI-RADS), developed by the American College of Radiology in 1993. It has five general categories of findings: mass, asymmetry, architectural distortion, calcifications, and associated features.
The use of language with BI-RADS is extremely precise, with a limited set of permissible adjectives for lesion margins, shape and internal density, each of which carries a different prognostic significance. Margins of a lesion, for example, can only be described as circumscribed, obscured, microlobulated, indistinct or spiculated. Similarly, shape can only be round, oval or irregular. Each of these agreed-upon adjectives is referred to as a "descriptor" in the BI-RADS lexicon, with specific positive and negative predictive values for breast cancer attached to each word. Additionally, each BI-RADS category corresponds with a probability of cancer. This fastidious attention to semantics in BI-RADS allows for standardization of cancer detection across different treatment centers and imaging modalities. After describing the findings, the radiologist provides a final assessment ranging from 0 to 6: BI-RADS 0 indicates an incomplete assessment which needs additional imaging. BI-RADS 1 & 2 indicate a negative and a benign screening mammogram, respectively. BI-RADS 3 indicates probably benign. BI-RADS 4 indicates suspicious for malignancy. BI-RADS 5 indicates highly suggestive of malignancy. BI-RADS 6 is for biopsy-proven breast cancer. BI-RADS 3, 4 and 5 assessments on screening mammograms require further investigation with a second "diagnostic" study. The latter is a more detailed mammogram that allows dedicated attention to the abnormal finding with additional maneuvers such as magnification, rolling of breast tissue or exaggerated positioning. There may also be imaging with ultrasound at this time, which carries its own parallel BI-RADS lexicon. Suspicious lesions are then biopsied with local anesthesia or proceed straight to surgery, depending on their staging. Biopsy can be done with the help of X-rays or ultrasound, depending on which imaging modality shows the lesion best. In the UK, mammograms are scored on a scale from 1–5 (1 = normal, 2 = benign, 3 = indeterminate, 4 = suspicious of malignancy, 5 = malignant). Evidence suggests that accounting for genetic risk factors improves breast cancer risk prediction.
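Because BI-RADS fixes both the vocabulary and the management implications of each final assessment, the category-to-action mapping can be written down as a simple lookup, as in the sketch below; the plain-language action strings are summaries for illustration only, not clinical guidance.

```python
# A minimal sketch mapping BI-RADS final assessments to their meanings and
# typical next steps, as described above. Action strings are illustrative
# plain-language summaries, not clinical guidance.
BI_RADS = {
    0: ("Incomplete", "Additional imaging needed"),
    1: ("Negative", "Routine screening"),
    2: ("Benign", "Routine screening"),
    3: ("Probably benign", "Short-interval follow-up"),
    4: ("Suspicious for malignancy", "Further diagnostic work-up / biopsy"),
    5: ("Highly suggestive of malignancy", "Further diagnostic work-up / biopsy"),
    6: ("Known biopsy-proven malignancy", "Treatment planning"),
}

def describe(category: int) -> str:
    meaning, action = BI_RADS[category]
    return f"BI-RADS {category}: {meaning} -> {action}"

print(describe(4))
```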
== History == As a medical procedure that induces ionizing radiation, the origin of mammography can be traced to the discovery of X-rays by Wilhelm Röntgen in 1895. In 1913, German surgeon Albert Salomon performed a mammography study on 3,000 mastectomies, comparing X-rays of the breasts to the actual removed tissue, observing specifically microcalcifications. By doing so, he was able to establish the difference, as seen on an X-ray image, between cancerous and non-cancerous tumors in the breast. Salomon's mammographs provided substantial information about the spread of tumors and their borders. In 1930, American physician and radiologist Stafford L. Warren published "A Roentgenologic Study of the Breast", a study in which he produced stereoscopic X-ray images to track changes in breast tissue as a result of pregnancy and mastitis. In 119 women who subsequently underwent surgery, he correctly found breast cancer in 54 out of 58 cases. As early as 1937, Jacob Gershon-Cohen developed a form of mammography for the diagnosis of breast cancer at earlier stages to improve survival rates. In 1949, Raul Leborgne sparked renewed enthusiasm for mammography by emphasizing the importance of technical proficiency in patient positioning and the adoption of specific radiological parameters. He played a pioneering role in elevating imaging quality while placing particular emphasis on distinguishing between benign and malignant calcifications. In the early 1950s, the Uruguayan radiologist developed the breast compression technique to produce better quality images, and described the differences between benign and malignant microcalcifications. In 1956, Gershon-Cohen conducted clinical trials of his screening technique on over 1,000 asymptomatic women at the Albert Einstein Medical Center, and the same year, Robert Egan at the University of Texas M.D. Anderson Cancer Center combined a technique of low kVp with high mA and single-emulsion films developed by Kodak to devise a method of screening mammography. He published these results in 1959 in a paper, subsequently popularized in a 1964 book called Mammography. The "Egan technique", as it became known, enabled physicians to detect calcification in breast tissue; of the 245 breast cancers that were confirmed by biopsy among 1,000 patients, Egan and his colleagues at M.D. Anderson were able to identify 238 cases by using his method, 19 of which were in patients whose physical examinations had revealed no breast pathology. Use of mammography as a screening technique spread clinically after a 1966 study, led by Philip Strax, demonstrating the impact of mammograms on mortality and treatment. This study, based in New York, was the first large-scale randomized controlled trial of mammography screening. In 1985, László Tabár and colleagues documented findings from mammographic screening involving 134,867 women aged 40 to 79. Using a single mediolateral oblique image, they reported a 31% reduction in mortality. Dr. Tabár has since written many publications promoting mammography in the areas of epidemiology, screening, early diagnosis, and clinical-radiological-pathological correlation. == Arguments against screening mammography == The use of mammography as a screening tool for the detection of early breast cancer in otherwise healthy women without symptoms is seen by some as controversial. Keen and Keen indicated that repeated mammography starting at age fifty saves about 1.8 lives over 15 years for every 1,000 women screened. This result has to be weighed against the adverse effects of errors in diagnosis, over-treatment, and radiation exposure. The Cochrane analysis of screening indicates that it is "not clear whether screening does more good than harm". According to their analysis, 1 in 2,000 women will have her life prolonged by 10 years of screening, while 10 healthy women will undergo unnecessary breast cancer treatment. Additionally, 200 women will experience significant psychological stress due to false positive results. The Cochrane Collaboration (2013) concluded after ten years that trials with adequate randomization did not find an effect of mammography screening on total cancer mortality, including breast cancer. The authors of this Cochrane review write: "If we assume that screening reduces breast cancer mortality by 15% and that overdiagnosis and over-treatment is at 30%, it means that for every 2,000 women invited for screening throughout 10 years, one will avoid dying of breast cancer and 10 healthy women, who would not have been diagnosed if there had not been screening, will be treated unnecessarily. Furthermore, more than 200 women will experience important psychological distress including anxiety and uncertainty for years because of false positive findings."
The authors conclude that the time has come to re-assess whether universal mammography screening should be recommended for any age group. They state that universal screening may not be reasonable. The Nordic Cochrane Collection updated its research in 2012 and stated that advances in diagnosis and treatment make mammography screening less effective today, rendering it "no longer effective". They conclude that "it therefore no longer seems reasonable to attend" for breast cancer screening at any age, and warn of misleading information on the internet. Newman posits that screening mammography does not reduce death overall, but causes significant harm by inflicting cancer scares and unnecessary surgical interventions. The Nordic Cochrane Collection notes that advances in diagnosis and treatment of breast cancer may make breast cancer screening no longer effective in decreasing death from breast cancer, and therefore no longer recommends routine screening for healthy women, as the risks might outweigh the benefits. Of every 1,000 U.S. women who are screened, about 7% will be called back for a diagnostic session (although some studies estimate the number to be closer to 10% to 15%). About 10% of those who are called back will be referred for a biopsy; of the called-back group, about 3.5% will have cancer and 6.5% will not. Of the 3.5% who have cancer, about 2 women will have an early stage cancer that will be cured after treatment.
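Read as fractions of the screened and called-back groups, the figures above translate into approximate counts per 1,000 women, as the short calculation below shows; this is one consistent reading of the reported percentages, not an additional data source.

```python
# A worked example converting the screening percentages above into counts,
# taking the recall percentage relative to the 1,000 women screened and the
# biopsy and cancer percentages relative to the 70 women called back.
screened = 1000
called_back = round(screened * 0.07)    # ~7% recalled -> 70 women
biopsied = round(called_back * 0.10)    # ~10% of recalls -> 7 biopsies
with_cancer = called_back * 0.035       # 3.5% of recalls -> ~2.45 cancers
print(called_back, biopsied, round(with_cancer, 1))
# -> 70 recalled, 7 biopsied, ~2.5 cancers found, of which about 2 early-stage
```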
Mammography may also produce false negatives. Estimates of the number of cancers missed by mammography are usually around 20%. Reasons for not seeing the cancer include observer error, but more frequently it is because the cancer is hidden by other dense tissue in the breast, and even after retrospective review of the mammogram the cancer cannot be seen. Furthermore, one form of breast cancer, lobular cancer, has a growth pattern that produces shadows on the mammogram that are indistinguishable from normal breast tissue. === Mortality === The Cochrane Collaboration states that the best quality evidence does not demonstrate a reduction in overall mortality, or in mortality from all types of cancer, from screening mammography. The Canadian Task Force found that for women ages 50 to 69, screening 720 women once every 2 to 3 years for 11 years would prevent one death from breast cancer. For women ages 40 to 49, 2,100 women would need to be screened at the same frequency and period to prevent a single death from breast cancer. Women whose breast cancer was detected by screening mammography before the appearance of a lump or other symptoms commonly assume that the mammogram "saved their lives". In practice, the vast majority of these women received no practical benefit from the mammogram. There are four categories of cancers found by mammography: Cancers that are so easily treated that a later detection would have produced the same rate of cure (women would have lived even without mammography). Cancers so aggressive that even early detection is too late to benefit the patient (women who die despite detection by mammography). Cancers that would have receded on their own or are so slow-growing that the woman would die of other causes before the cancer produced symptoms (mammography results in over-diagnosis and over-treatment of this class). A small number of breast cancers that are detected by screening mammography and whose treatment outcome improves as a result of earlier detection. Only 3% to 13% of breast cancers detected by screening mammography fall into this last category. Clinical trial data suggest that 1 woman per 1,000 healthy women screened over 10 years falls into this category. Screening mammography produces no benefit to any of the remaining 87% to 97% of women. The probability of a woman falling into any of the above four categories varies with age. A 2016 review for the United States Preventive Services Task Force found that mammography was associated with an 8–33% decrease in breast cancer mortality in different age groups, but that this decrease was not statistically significant in the age groups 39–49 and 70–74. The same review found that mammography significantly decreased the risk of advanced cancer among women aged 50 and older by 38%, but that among those aged 39 to 49 the risk reduction was a non-significant 2%. The USPSTF based its review on data from randomized controlled trials (RCTs) studying breast cancer in women between the ages of 40 and 49. === False positives === The goal of any screening procedure is to examine a large population of patients and find the small number most likely to have a serious condition. These patients are then referred for further, usually more invasive, testing. Thus a screening exam is not intended to be definitive; rather, it is intended to have sufficient sensitivity to detect a useful proportion of cancers. The cost of higher sensitivity is a larger number of results that would be regarded as suspicious in patients without disease. This is true of mammography. The patients without disease who are called back for further testing from a screening session (about 7%) are sometimes referred to as "false positives". There is a trade-off between the number of patients with disease found and the much larger number of patients without disease that must be re-screened. Research shows that false-positive mammograms may affect women's well-being and behavior. Some women who receive false-positive results may be more likely to return for routine screening or perform breast self-examinations more frequently. However, some women who receive false-positive results become anxious, worried, and distressed about the possibility of having breast cancer, feelings that can last for many years. False positives also mean greater expense, both for the individual and for the screening program. Since follow-up screening is typically much more expensive than initial screening, more false positives (that must receive follow-up) means that fewer women may be screened for a given amount of money. Thus, as sensitivity increases, a screening program will cost more or be confined to screening a smaller number of women. === Overdiagnosis === The central harm of mammographic breast cancer screening is overdiagnosis: the detection of abnormalities that meet the pathologic definition of cancer but will never progress to cause symptoms or death. Dr. H. Gilbert Welch, a researcher at Dartmouth College, states that "screen-detected breast and prostate cancer survivors are more likely to have been over-diagnosed than actually helped by the test." Estimates of overdiagnosis associated with mammography have ranged from 1% to 54%. In 2009, Peter C. Gøtzsche and Karsten Juhl Jørgensen reviewed the literature and found that 1 in 3 cases of breast cancer detected in a population offered mammographic screening is over-diagnosed.
In contrast, a 2012 panel convened by the national cancer director for England and Cancer Research UK concluded that 1 in 5 cases of breast cancer diagnosed among women who have undergone breast cancer screening is over-diagnosed. This means an over-diagnosis rate of 129 women per 10,000 invited to screening. A recent systematic review of 30 studies found that the rate of overdiagnosis from screening mammography for breast cancer among women aged 40 years and older was 12.6%. === False negatives === Mammograms also have a rate of missed tumors, or "false negatives". Accurate data regarding the number of false negatives are very difficult to obtain because mastectomies cannot be performed on every woman who has had a mammogram to determine the false negative rate. Estimates of the false negative rate depend on close follow-up of a large number of patients for many years. This is difficult in practice because many women do not return for regular mammography, making it impossible to know if they ever developed a cancer. In his book The Politics of Cancer, Dr. Samuel S. Epstein claims that in women ages 40 to 49, one in four cancers is missed at each mammography. Researchers have found that breast tissue is denser among younger women, making it difficult to detect tumors. For this reason, false negatives are twice as likely to occur in pre-menopausal mammograms. This is why the screening program in the UK does not start calling women for screening mammograms until age 50. The importance of these missed cancers is not clear, particularly if the woman is getting yearly mammograms. Research on a closely related situation has shown that small cancers that are not acted upon immediately, but are observed over periods of several years, will have good outcomes. A group of 3,184 women had mammograms that were formally classified as "probably benign". This classification is for patients who are not clearly normal but have some area of minor concern. This results not in the patient being biopsied, but rather in having early follow-up mammography every six months for three years to determine whether there has been any change in status. Of these 3,184 women, 17 (0.5%) did have cancers. Most importantly, when the diagnosis was finally made, they were all still stage 0 or 1, the earliest stages. Five years after treatment, none of these 17 women had evidence of recurrence. Thus, small early cancers, even though not acted on immediately, were still reliably curable. === Cost-effectiveness === Breast cancer imposes a significant economic strain on communities, with the expense of treating stage three and stage four disease in the United States in 2017 amounting to approximately $127,000. While early diagnosis and screening methods are important in reducing death rates, the cost-benefit of breast cancer screening using mammography has been unclear. A recent systematic review of three studies conducted in Spain, Denmark, and the United States from 2000–2019 found that digital mammography is not cost-beneficial for the healthcare system when compared to other screening methods. Therefore, increasing its frequency may impose higher costs on the healthcare system. While there may be a lack of evidence, it is suggested that digital mammography be performed every two years for women over 50. === Arguments against the USPSTF recommendations === As the USPSTF recommendations are so influential, changing the starting age for mammography screening from 50 to 40 years has significant implications for public health.
The major concerns regarding this update are whether breast cancer mortality has truly been increasing and whether there is new evidence that the benefits of mammography are increasing. According to the National Vital Statistics System, mortality from breast cancer steadily decreased in the United States from 2018 to 2021. There have also been no new randomized trials of screening mammography for women in their 40s since the previous USPSTF recommendation was made. In addition, the 8 most recent randomized trials for this age group revealed no significant effect. Instead, the USPSTF used statistical models to estimate what would happen if the starting age were lowered, assuming that screening mammography reduces breast cancer mortality by 25%. This found that screening women from 40–74 years of age, instead of 50–74, would result in 1–2 fewer breast cancer deaths per 1,000 women screened over a lifetime. Approximately 75 percent of women diagnosed with breast cancer have no family history of breast cancer or other factors that put them at high risk for developing the disease (so screening only high-risk women misses the majority of cancers). An analysis by Hendrick and Helvie, published in the American Journal of Roentgenology, showed that if USPSTF breast cancer screening guidelines were followed, approximately 6,500 additional women each year in the U.S. would die from breast cancer. The largest (Hellquist et al.) and longest running (Tabár et al.) breast cancer screening studies in history re-confirmed that regular mammography screening cut breast cancer deaths by roughly a third in all women ages 40 and over (including women ages 40–49). This renders the USPSTF calculations off by half: the task force used a 15% mortality reduction to calculate how many women needed to be invited to screening to save a life. With the re-confirmed figure of 29% (or higher), the number to be screened using the USPSTF formula is half of their estimate and well within what they considered acceptable by their formula. == Society and culture == === Attendance === Many factors affect how many people attend breast cancer screenings. For example, people from minority ethnic communities are less likely to attend cancer screening. In the UK, women of South Asian heritage are the least likely to attend breast cancer screening. Research is still needed to identify the specific barriers for the different South Asian communities. For example, a study showed that British-Pakistani women faced cultural and language barriers and were not aware that breast screening takes place in a female-only environment. People with mental illnesses are also less likely to attend cancer screening appointments. In Northern Ireland, women with mental health problems were shown to be less likely to attend screening for breast cancer than women without. The lower attendance numbers remained the same even when marital status and social deprivation were taken into account. == Regulation == Mammography facilities in the United States and its territories (including military bases) are subject to the Mammography Quality Standards Act (MQSA). The act requires annual inspections and accreditation every three years through an FDA-approved body. Facilities found deficient during the inspection or accreditation process can be barred from performing mammograms until corrective action has been verified or, in extreme cases, can be required to notify past patients that their exams were sub-standard and should not be relied upon.
At this time, the MQSA applies only to traditional mammography and not to related scans, such as breast ultrasound, stereotactic breast biopsy, or breast MRI. As of September 10, 2024, the MQSA requires that all patients be notified of their breast density ("dense" or "not dense") in their mammogram reports. == Research == === Artificial intelligence (AI) algorithms === Recently, artificial intelligence (AI) programs have been developed to utilize features from screening mammography images to predict breast cancer risk. A systematic review of 16 retrospective study designs comparing median maximum AUC found that artificial intelligence had comparable or better accuracy in predicting breast cancer (AUC = 0.72) than clinical risk factors alone (AUC = 0.61), suggesting that a transition from clinical risk-factor-based to AI image-based risk models may lead to more accurate and personalized risk-based screening approaches. Another study of 32 published papers involving 23,804 mammograms and various machine learning methods (CNN, ANN, and SVM) found promising results in the ability to assist clinicians in large-scale population-based breast cancer screening programs. == Alternative examination methods == For patients who do not want to undergo mammography, MRI and breast computed tomography (breast CT) offer a painless alternative. Whether a given method is suitable depends on the clinical picture and is decided by the physician. == See also == Computed tomography laser mammography Molecular breast imaging Xeromammography == References == == Further reading == == External links == Mammographic Image Analysis Homepage Screening Mammograms: Questions and Answers, from the National Cancer Institute American Cancer Society: Mammograms and Other Breast Imaging Procedures U.S. Preventive Task Force recommendations on screening mammography
Wikipedia/Mammography
A hospital information system (HIS) is an element of health informatics that focuses mainly on the administrational needs of hospitals. In many implementations, an HIS is a comprehensive, integrated information system designed to manage all the aspects of a hospital's operation, such as medical, administrative, financial, and legal issues and the corresponding processing of services. A hospital information system is also known as hospital management software or a hospital management system (HMS). More generally, an HIS is a form of medical information system (MIS). Hospital information systems provide a common source of information about a patient's health history and help coordinate scheduling for doctors. The system has to keep data in a secure place and control who can access the data in certain circumstances. These systems enhance the ability of health care professionals to coordinate care by providing a patient's health information and visit history at the place and time that it is needed. A patient's laboratory test information can also include visual results such as X-rays, which may be accessible to professionals. An HIS provides internal and external communication among health care providers. Portable devices such as smartphones and tablet computers may be used at the bedside. Hospital information systems are often composed of one or several software components with specialty-specific extensions, as well as of a large variety of sub-systems in medical specialties from a multi-vendor market. Specialized implementations include, for example, the laboratory information system (LIS), policy and procedure management system, radiology information system (RIS), and picture archiving and communication system (PACS). Potential benefits of hospital information systems include: efficient and accurate administration of finance, patient diet, engineering, and distribution of medical aid, helping to give a broad picture of hospital growth; improved monitoring of drug usage and studies of effectiveness, leading to a reduction in adverse drug interactions while promoting more appropriate pharmaceutical utilization; and enhanced information integrity, reduced transcription errors, and reduced duplication of information entries. == Artificial Intelligence in Hospital Information Systems == Recent developments in Artificial Intelligence (AI) have led to the integration of intelligent technologies within modern Hospital Information Systems (HIS). AI-powered hospital management software leverages machine learning, natural language processing, predictive analytics, and computer vision to improve efficiency, clinical outcomes, and decision-making in healthcare environments. ===== Applications of AI in HIS ===== AI technologies are being used across several hospital operations, including: Clinical Decision Support: AI-driven modules assist healthcare professionals in diagnosis and treatment planning by analyzing patient data against large datasets of medical knowledge and clinical guidelines. Predictive Analytics: Machine learning algorithms are used to predict patient outcomes, readmission risks, and disease progression, allowing for proactive care interventions (a sketch follows this list). Medical Imaging: AI enhances diagnostic accuracy in radiology and pathology by identifying patterns in medical images, reducing human error and speeding up analysis. Administrative Automation: Natural language processing (NLP) tools automate documentation, billing, coding, and scheduling, reducing clerical workload and operational costs. Personalized Medicine: AI analyzes genetic data and patient histories to tailor treatment plans to individual patient profiles.
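As a concrete illustration of the predictive-analytics item above, the sketch below fits a logistic-regression readmission-risk model on a handful of routinely captured fields; the feature set and toy data are hypothetical placeholders rather than fields of any specific HIS product.

```python
# A minimal sketch of a predictive-analytics module of the kind described
# above: estimating 30-day readmission risk from a few routinely captured
# HIS fields. Feature names and data are hypothetical placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Columns: age, length_of_stay_days, num_prior_admissions, num_medications
X = np.array([[72, 5, 3, 9], [34, 1, 0, 2], [65, 8, 2, 12], [50, 2, 1, 4]])
y = np.array([1, 0, 1, 0])   # 1 = readmitted within 30 days (toy labels)

model = make_pipeline(StandardScaler(), LogisticRegression())
model.fit(X, y)
new_patient = np.array([[68, 6, 2, 10]])
print("Readmission risk:", model.predict_proba(new_patient)[0, 1])
```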
===== Benefits of AI-Integrated HIS ===== The integration of AI into hospital management software offers several advantages: Improved Clinical Accuracy: AI can analyze complex datasets rapidly and identify patterns not easily visible to human clinicians, potentially improving diagnostic accuracy and reducing misdiagnosis. Operational Efficiency: By automating routine administrative tasks, AI frees up staff time, enhances workflow management, and optimizes resource allocation. Enhanced Patient Experience: AI-powered virtual assistants and chatbots facilitate communication, appointment scheduling, and patient engagement, improving overall satisfaction. Cost Reduction: Predictive analytics help reduce unnecessary tests and readmissions, leading to significant cost savings for healthcare providers. Real-Time Monitoring: AI tools enable continuous monitoring of patient vitals through wearable devices, triggering alerts in critical situations and reducing the need for constant human supervision. ===== Challenges and Ethical Considerations ===== Despite its potential, the use of AI in hospital management raises concerns related to: Data Privacy and Security: Ensuring compliance with healthcare data protection regulations (e.g., HIPAA, GDPR) is crucial when using AI systems. Bias in Algorithms: AI models trained on biased datasets can produce inequitable outcomes, particularly in underserved populations. Clinical Validation: Many AI tools still require rigorous clinical validation before they can be reliably used in patient care settings. ==== Adoption Trends ==== Hospitals worldwide, particularly in technologically advanced regions, are increasingly investing in AI-integrated systems. Large-scale deployments have been observed in countries such as the United States, Germany, India, and Singapore, often in partnership with health tech companies and research institutions. == See also == == References ==
Wikipedia/Hospital_information_systems
Dual-energy X-ray absorptiometry (DXA, or DEXA) is a means of measuring bone mineral density (BMD) with spectral imaging. Two X-ray beams, with different energy levels, are aimed at the patient's bones. When soft tissue absorption is subtracted, the bone mineral density (BMD) can be determined from the absorption of each beam by bone. Dual-energy X-ray absorptiometry is the most widely used and most thoroughly studied bone density measurement technology. The DXA scan is typically used to diagnose and follow osteoporosis, as contrasted to the nuclear bone scan, which is sensitive to certain metabolic diseases of bones in which bones are trying to heal from infections, fractures, or tumors. It is also sometimes used to assess body composition. == Physics == Soft tissue and bone have different attenuation coefficients to X-rays. A single X-ray beam passing through the body is attenuated by both soft tissue and bone, and it is not possible to determine from a single beam how much attenuation is attributable to the bone. However, attenuation coefficients vary with the energy of the X-rays, and, crucially, the ratio of the attenuation coefficients also varies. DXA uses two energies of X-ray. The difference in total absorption between the two can be used, by suitable weighting, to subtract out the absorption by soft tissue, leaving just the absorption by bone, which is related to bone density. One type of DXA scanner uses a cerium filter with a tube voltage of 80 kV, resulting in effective photon energies of about 40 and 70 keV. Another type of DXA scanner uses a samarium filter with a tube voltage of 100 kV, which produces effective energies of 47 and 80 keV. Also, the tube voltage can be continuously switched between a low (for example 70 kV) and high (for example 140 kV) value in synchronism with the frequency of the electrical mains, resulting in effective energies alternating between 45 and 100 keV. The combination of dual X-ray absorptiometry and laser uses the laser to measure the thickness of the region scanned, allowing the varying proportions of lean soft tissue and adipose tissue within the soft tissue to be controlled for and improving the accuracy. == Bone density measurement == === Indications === The U.S. Preventive Services Task Force recommends that women over the age of 65 should get a DXA scan. The age when men should be tested is uncertain, but some sources recommend age 70. At-risk women should consider getting a scan when their risk is equal to that of a normal 65-year-old woman. A person's risk can be measured with the University of Sheffield's FRAX calculator, which includes many clinical risk factors, including prior fragility fracture, use of glucocorticoids, heavy smoking, excess alcohol intake, rheumatoid arthritis, history of parental hip fracture, chronic renal and liver disease, chronic respiratory disease, long-term use of phenobarbital or phenytoin, celiac disease, inflammatory bowel disease, and other risks. === Scoring === The World Health Organization has defined diagnostic categories based on bone density in white women. Bone densities are often given to patients as a T score or a Z score. A T score tells the patient what their bone mineral density is in comparison to a young adult of the same gender at peak bone mineral density. A normal T score is -1.0 and above, low bone density is between -1.0 and -2.5, and osteoporosis is -2.5 and lower. A Z score, in contrast, compares the patient's bone mineral density to the average bone mineral density of a male or female of the same age and weight.
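Both scores are simple standardizations of the measured BMD against a reference population, so they can be written down directly; in the sketch below, the reference mean and standard deviation are illustrative placeholders, since real values come from sex-, site-, and machine-specific reference databases.

```python
# A minimal sketch of the T- and Z-score calculations defined above.
# Reference values are illustrative placeholders, not real population data.
def t_score(bmd, young_adult_mean, young_adult_sd):
    # Standardization against a young adult of the same sex at peak bone mass.
    return (bmd - young_adult_mean) / young_adult_sd

def z_score(bmd, age_matched_mean, age_matched_sd):
    # Standardization against the average for the same sex, age, and weight.
    return (bmd - age_matched_mean) / age_matched_sd

def who_category(t):
    # Thresholds as given above (defined by the WHO for white women).
    if t >= -1.0:
        return "normal"
    if t > -2.5:
        return "low bone density"
    return "osteoporosis"

t = t_score(bmd=0.71, young_adult_mean=0.94, young_adult_sd=0.12)
print(round(t, 2), who_category(t))   # -1.92 low bone density
```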
A Z score compares a patient's bone mineral density to the average bone mineral density of a male or female of their age and weight. The WHO committee did not have enough data to create definitions for men or nonwhite women. Special considerations are involved in the use of DXA to assess bone mass in children. Specifically, comparing the bone mineral density of children to the reference data of adults (to calculate a T-score) underestimates the BMD of children, because children have less bone mass than fully developed adults. This would lead to an over-diagnosis of osteopenia for children. To avoid an overestimation of bone mineral deficits, BMD scores are commonly compared to reference data for the same gender and age (by calculating a Z-score). Also, there are other variables in addition to age that are suggested to confound the interpretation of BMD as measured by DXA. One important confounding variable is bone size. DXA has been shown to overestimate the bone mineral density of taller subjects and underestimate the bone mineral density of smaller subjects. This error is due to the way DXA calculates BMD. In DXA, bone mineral content (measured as the attenuation of the X-ray by the bones being scanned) is divided by the area (also measured by the machine) of the site being scanned. Because DXA calculates BMD using area (aBMD: areal bone mineral density), it is not an accurate measurement of true bone mineral density, which is mass divided by a volume. To distinguish DXA BMD from volumetric bone mineral density, researchers sometimes refer to DXA BMD as an areal bone mineral density (aBMD). The confounding effect of differences in bone size is due to the missing depth value in the calculation of bone mineral density. Despite DXA technology's problems with estimating volume, it is still a fairly accurate measure of bone mineral content. Methods to correct for this shortcoming include the calculation of a volume that is approximated from the projected area measured by DXA. DXA BMD results adjusted in this manner are referred to as the bone mineral apparent density (BMAD) and are a ratio of the bone mineral content versus a cuboidal estimation of the volume of bone. Like the results for aBMD, BMAD results do not accurately represent true bone mineral density, since they use approximations of the bone's volume. BMAD is used primarily for research purposes and is not yet used in clinical settings. Other imaging technologies such as quantitative computed tomography (QCT) are capable of measuring the bone's volume, and are, therefore, not susceptible to the confounding effect of bone size in the way that DXA results are. It is important for patients to get repeat BMD measurements done on the same machine each time, or at least a machine from the same manufacturer. Differences between machines, or attempts to convert measurements from one manufacturer's standard to another, can introduce errors large enough to wipe out the sensitivity of the measurements. DXA results must be adjusted if the patient is taking strontium supplements. DXA can also be used to measure the trabecular bone score. === Current clinical practice in pediatrics === DXA is, by far, the most widely used technique for bone mineral density measurements, since it is relatively inexpensive, accessible, easy to use, and provides an accurate estimation of bone mineral density in adults.
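The quantities defined above (aBMD, BMAD, and the Z score used for children) can be summarised in a short sketch. The volume model used for BMAD here (projected area raised to the power 1.5) is one approximation used in the research literature; published models vary, and all numbers are illustrative only.

def a_bmd(bmc_g, area_cm2):
    """Areal BMD (g/cm2): bone mineral content divided by projected area.
    Depth is not measured by DXA, so this is not a true volumetric density."""
    return bmc_g / area_cm2

def bmad(bmc_g, area_cm2):
    """Bone mineral apparent density (g/cm3): BMC divided by a cuboidal
    volume approximated from the projected area (assumed model: area**1.5)."""
    return bmc_g / area_cm2 ** 1.5

def z_score(bmd, age_matched_mean, age_matched_sd):
    """Z score: comparison with an age- and sex-matched reference mean,
    used instead of T scores for children."""
    return (bmd - age_matched_mean) / age_matched_sd

# Hypothetical lumbar-spine scan: 12 g of mineral over 14 cm2.
print(round(a_bmd(12.0, 14.0), 3))  # 0.857 g/cm2
print(round(bmad(12.0, 14.0), 3))   # 0.229 g/cm3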
The official position of the International Society for Clinical Densitometry (ISCD) is that a patient be tested for BMD if they have a condition that could precipitate bone loss, are to be prescribed pharmaceuticals known to cause bone loss, or are being treated and require monitoring. The ISCD states that there is no clearly understood correlation between BMD and the risk of a child's sustaining a fracture; the diagnosis of osteoporosis in children cannot be made on the basis of densitometry criteria. T-scores should not be used for children and should not even appear on pediatric DXA reports. Thus, the WHO classification of osteoporosis and osteopenia in adults cannot be applied to children, but Z-scores can be used to assist diagnosis. Some clinics may routinely carry out DXA scans on pediatric patients with conditions such as nutritional rickets, lupus, and Turner syndrome. DXA has been demonstrated to measure skeletal maturity and body fat composition and has been used to evaluate the effects of pharmaceutical therapy. It may also aid pediatricians in diagnosing and monitoring treatment of disorders of bone mass acquisition in childhood. However, it seems that DXA is still in its early days in pediatrics, and there are widely acknowledged limitations and disadvantages of DXA in this setting. A view exists that DXA scans for diagnostic purposes should not even be performed outside specialist centers, and, if a scan is done outside one of these centers, it should not be interpreted without consultation with an expert in the field. Furthermore, most of the pharmaceuticals given to adults with low bone mass can be given to children only in strictly monitored clinical trials. Whole-body calcium measured by DXA has been validated in adults using in-vivo neutron activation of total body calcium, but this is not suitable for paediatric subjects, and validation studies have instead been carried out on paediatric-sized animals. == Body composition measurement == DXA scans can also be used to measure total body composition and fat content with a high degree of accuracy, comparable to hydrostatic weighing, with a few important caveats. From the DXA scans, a low-resolution "fat shadow" image can also be generated, which gives an overall impression of fat distribution throughout the body. It has been suggested that, while very accurately measuring minerals and lean soft tissue (LST), DXA may provide skewed results due to its method of indirectly calculating fat mass by subtracting it from the LST and/or body cell mass (BCM) that DXA actually measures. DXA scans have been suggested as useful tools to diagnose conditions with an abnormal fat distribution, such as familial partial lipodystrophy. They are also used to assess adiposity in children, especially to conduct clinical research. == Radiation exposure == DXA uses X-rays to measure bone mineral density. The radiation dose of current DEXA systems is small, as low as 0.001 mSv, much less than that of a standard chest or dental x-ray. However, the dose delivered by older DEXA radiation sources (which used radioisotopes rather than x-ray generators) could be as high as 35 mGy, considered a significant dose by radiological health standards. == Regulation == === United States === The quality of DXA operators varies widely. DXA is not regulated like other radiation-based imaging techniques because of its low dosage. Each US state has a different policy as to what certifications are needed to operate a DXA machine.
California, for example, requires coursework and a state-run test, whereas Maryland has no requirements for DXA technicians. Many states require a training course and certificate from the International Society of Clinical Densitometry (ISCD). === Australia === In Australia, regulation differs according to the applicable state or territory. For example, in Victoria, an individual performing DXA scans is required to complete a recognised course in the safe use of bone mineral densitometers. In NSW and QLD, a DXA technician requires only prior undergraduate study in science, nursing or another related field. The Environmental Protection Agency (EPA) oversees the licensing of technicians; however, this oversight is far from rigorous, and practical regulation is essentially non-existent. == References == == External links ==
Non-invasive testing of bone density explained
Information for patients, from RSNA
Bone Densitometry explained
Wikipedia/Dual_energy_X-ray_absorptiometry
Brachytherapy is a form of radiation therapy where a sealed radiation source is placed inside or next to the area requiring treatment. The word "brachytherapy" comes from the Greek word βραχύς, brachys, meaning "short-distance" or "short". Brachytherapy is commonly used as an effective treatment for cervical, prostate, breast, esophageal and skin cancer and can also be used to treat tumours in many other body sites. Treatment results have demonstrated that the cancer-cure rates of brachytherapy are either comparable to surgery and external beam radiotherapy (EBRT) or are improved when used in combination with these techniques. Brachytherapy can be used alone or in combination with other therapies such as surgery, EBRT and chemotherapy. Brachytherapy contrasts with unsealed source radiotherapy, in which a therapeutic radionuclide (radioisotope) is injected into the body to chemically localize to the tissue requiring destruction. It also contrasts with external beam radiation therapy (EBRT), in which high-energy x-rays (or occasionally gamma-rays from a radioisotope like cobalt-60) are directed at the tumour from outside the body. Brachytherapy instead involves the precise placement of short-range radiation sources (radioisotopes such as iodine-125 or caesium-131) directly at the site of the cancerous tumour. These are enclosed in a protective capsule or wire, which allows the ionizing radiation to escape to treat and kill surrounding tissue but prevents the charge of radioisotope from moving or dissolving in body fluids. The capsule may be removed later, or (with some radioisotopes) it may be allowed to remain in place.: Ch. 1  A feature of brachytherapy is that the irradiation affects only a very localized area around the radiation sources. Exposure to radiation of healthy tissues farther away from the sources is therefore reduced. In addition, if the patient moves or if there is any movement of the tumour within the body during treatment, the radiation sources retain their correct position in relation to the tumour. These characteristics of brachytherapy provide advantages over EBRT – the tumour can be treated with very high doses of localised radiation whilst reducing the probability of unnecessary damage to surrounding healthy tissues.: Ch. 1  A course of brachytherapy can be completed in less time than other radiotherapy techniques. This can help reduce the chance that surviving cancer cells divide and grow in the intervals between radiotherapy doses. Patients typically have to make fewer visits to the radiotherapy clinic compared with EBRT, and may receive the treatment as outpatients. This makes treatment accessible and convenient for many patients. These features of brachytherapy mean that most patients are able to tolerate the brachytherapy procedure very well. The global market for brachytherapy reached US$680 million in 2013, of which the high-dose-rate (HDR) and low-dose-rate (LDR) segments accounted for 70%. Microspheres and electronic brachytherapy comprised the remaining 30%. One analysis predicts that the brachytherapy market may reach over US$2.4 billion in 2030, growing by 8% annually, mainly driven by the microspheres market as well as electronic brachytherapy, which is gaining significant interest worldwide as a user-friendly technology. == Medical uses == Brachytherapy is commonly used to treat cancers of the cervix, prostate, breast, and skin.
Brachytherapy can also be used in the treatment of tumours of the brain, eye, head and neck region (lip, floor of mouth, tongue, nasopharynx and oropharynx), respiratory tract (trachea and bronchi), digestive tract (oesophagus, gall bladder, bile-ducts, rectum, anus), urinary tract (bladder, urethra, penis), female reproductive tract (uterus, vagina, vulva), and soft tissues. As the radiation sources can be precisely positioned at the tumour treatment site, brachytherapy enables a high dose of radiation to be applied to a small area. Furthermore, because the radiation sources are placed in or next to the target tumour, the sources maintain their position in relation to the tumour when the patient moves or if there is any movement of the tumour within the body. Therefore, the radiation sources remain accurately targeted. This enables clinicians to achieve a high level of dose conformity – i.e. ensuring the whole of the tumour receives an optimal level of radiation. It also reduces the risk of damage to healthy tissue, organs or structures around the tumour, thus enhancing the chance of cure and preservation of organ function. The use of HDR brachytherapy enables overall treatment times to be reduced compared with EBRT. Patients receiving brachytherapy generally have to make fewer visits for radiotherapy compared with EBRT, and overall radiotherapy treatment plans can be completed in less time. Many brachytherapy procedures are performed on an outpatient basis. This convenience may be particularly relevant for patients who have to work, older patients, or patients who live some distance from treatment centres, to ensure that they have access to radiotherapy treatment and adhere to treatment plans. Shorter treatment times and outpatient procedures can also help improve the efficiency of radiotherapy clinics. Brachytherapy can be used with the aim of curing the cancer in cases of small or locally advanced tumours, provided the cancer has not metastasized (spread to other parts of the body). In appropriately selected cases, brachytherapy for primary tumours often represents a comparable approach to surgery, achieving the same probability of cure and with similar side effects. However, in locally advanced tumours, surgery may not routinely provide the best chance of cure and is often not technically feasible to perform. In these cases radiotherapy, including brachytherapy, offers the only chance of cure. In more advanced disease stages, brachytherapy can be used as palliative treatment for symptom relief from pain and bleeding. In cases where the tumour is not easily accessible or is too large to ensure an optimal distribution of irradiation to the treatment area, brachytherapy can be combined with other treatments, such as EBRT and/or surgery.: Ch. 1  Combining brachytherapy with chemotherapy alone, without surgery or EBRT, is rare. === Cervical cancer === Brachytherapy is commonly used in the treatment of early or locally confined cervical cancer and is a standard of care in many countries.: Ch. 14  Cervical cancer can be treated with either low-dose-rate (LDR), pulsed-dose-rate (PDR) or high-dose-rate (HDR) brachytherapy. Used in combination with EBRT, brachytherapy can provide better outcomes than EBRT alone. The precision of brachytherapy enables a high dose of targeted radiation to be delivered to the cervix, while minimising radiation exposure to adjacent tissues and organs. The chances of staying free of disease (disease-free survival) and of staying alive (overall survival) are similar for LDR, PDR and HDR treatments.
However, a key advantage of HDR treatment is that each dose can be delivered on an outpatient basis with a short administration time, providing greater convenience for many patients. Research indicates that locally advanced carcinoma of the cervix should be treated with a combination of external beam radiotherapy (EBRT) and intracavity brachytherapy (ICBT). === Prostate cancer === Brachytherapy to treat prostate cancer can be given either as permanent LDR seed implantation or as temporary HDR brachytherapy.: Ch. 20  Permanent seed implantation is suitable for patients with a localised tumour and good prognosis and has been shown to be a highly effective treatment to prevent the cancer from returning. The survival rate is similar to that found with EBRT or surgery (radical prostatectomy), but with fewer side effects such as impotence and incontinence. The procedure can be completed quickly and patients are usually able to go home on the same day of treatment and return to normal activities after one to two days. Permanent seed implantation is often a less invasive treatment option compared to the surgical removal of the prostate. Temporary HDR brachytherapy is a newer approach to treating prostate cancer, but is currently less common than seed implantation. It is predominantly used to provide an extra dose in addition to EBRT (known as "boost" therapy), as it offers an alternative method to deliver a high dose of radiation therapy that conforms to the shape of the tumour within the prostate, while sparing radiation exposure to surrounding tissues. HDR brachytherapy as a boost for prostate cancer also means that the EBRT course can be shorter than when EBRT is used alone. === Breast cancer === Radiation therapy is the standard of care for women who have undergone lumpectomy or mastectomy surgery, and is an integral component of breast-conserving therapy.: Ch. 18  Brachytherapy can be used after surgery, before chemotherapy or palliatively in the case of advanced disease. Brachytherapy to treat breast cancer is usually performed with HDR temporary brachytherapy. After surgery, breast brachytherapy can be used as a "boost" following whole breast irradiation (WBI) using EBRT. More recently, brachytherapy alone has been used to deliver APBI (accelerated partial breast irradiation), involving delivery of radiation to only the immediate region surrounding the original tumour. The main benefit of breast brachytherapy compared to whole breast irradiation is that a high dose of radiation can be precisely applied to the tumour while sparing radiation to healthy breast tissues and underlying structures such as the ribs and lungs. APBI can typically be completed over the course of a week. The option of brachytherapy may be particularly important in ensuring that working women, the elderly or women without easy access to a treatment centre are able to benefit from breast-conserving therapy, due to the short treatment course compared with WBI (which often requires more visits over the course of 1–2 months). There are five methods that can be used to deliver breast brachytherapy: interstitial breast brachytherapy, intracavitary breast brachytherapy, intraoperative radiation therapy, permanent breast seed implantation, and non-invasive breast brachytherapy using mammography for target localization and an HDR source. ==== Interstitial breast brachytherapy ==== Interstitial breast brachytherapy involves the temporary placement of several flexible plastic catheters in the breast tissue.
These are carefully positioned to allow optimal targeting of radiation to the treatment area while sparing the surrounding breast tissue. The catheters are connected to an afterloader, which delivers the planned radiation dose to the treatment area. Interstitial breast brachytherapy can be used as a "boost" after EBRT, or as APBI. ==== Intraoperative radiation therapy ==== Intraoperative radiation therapy (IORT) delivers radiation at the same time as the surgery to remove the tumour (lumpectomy). An applicator is placed in the cavity left after tumour removal and a mobile electronic device generates radiation (either x-rays or electrons) and delivers it via the applicator. The radiation is delivered all at once and the applicator is removed before closing the incision. ==== Intracavitary breast brachytherapy ==== Intracavitary breast brachytherapy (also known as "balloon brachytherapy") involves the placement of a single catheter into the breast cavity left after the removal of the tumour (lumpectomy). The catheter can be placed at the time of the lumpectomy or postoperatively. Via the catheter, a balloon is then inflated in the cavity. The catheter is then connected to an afterloader, which delivers the radiation dose through the catheter and into the balloon. Currently, intracavitary breast brachytherapy is only routinely used for APBI. There are also devices that combine the features of interstitial and intracavitary breast brachytherapy (e.g. SAVI). These devices use multiple catheters but are inserted through a single entry point in the breast. Studies suggest the use of multiple catheters enables physicians to target the radiation more precisely. ==== Permanent breast seed implantation ==== Permanent breast seed implantation (PBSI) implants many radioactive "seeds" (small pellets) into the breast in the area surrounding the site of the tumour, similar to permanent seed prostate brachytherapy. The seeds are implanted in a single 1–2 hour procedure and deliver radiation over the following months as the radioactive material inside them decays. The radiation exposure to others (e.g. a partner or spouse) from the implants has been studied and found to be at safe levels. === Brain tumors === Surgically Targeted Radiation Therapy (STaRT), branded as GammaTile Therapy, is a type of brachytherapy implant specifically designed for use inside the brain. GammaTile is FDA-cleared to treat newly diagnosed, operable malignant intracranial neoplasms (i.e., brain tumors) and operable recurrent intracranial neoplasms, including meningiomas, metastases, high-grade gliomas, and glioblastomas. In a clinical study, GammaTile Therapy improved local tumor control compared to previous same-site treatments without an increased risk of side effects. === Esophageal cancer === For esophageal cancer, brachytherapy is an effective treatment option; it can be given as definitive radiotherapy (a boost) or as palliative treatment. Definitive radiotherapy (boost) can deliver the dose precisely, and palliative treatment can be given to relieve dysphagia. Large-diameter applicators or balloon-type catheters are used with the afterloader to expand the esophagus and facilitate delivery of the radiation dose to the tumour while sparing nearby normal tissue. Brachytherapy following EBRT or surgery has been shown to improve survival rates and local recurrence rates in esophageal cancer patients, compared with EBRT or surgery alone.
=== Skin cancer === HDR brachytherapy for nonmelanomatous skin cancer, such as basal cell carcinoma and squamous cell carcinoma, provides an alternative treatment option to surgery. This is especially relevant for cancers on the nose, ears, eyelids or lips, where surgery may cause disfigurement or require extensive reconstruction.: Ch. 28  Various applicators can be used to ensure close contact between the radiation source(s) and the skin, which conform to the curvature of the skin and help ensure precision delivery of the optimal irradiation dose.: Ch. 28  Another type of brachytherapy which has similar advantages to HDR is provided by the Rhenium-SCT (Skin Cancer Therapy). It makes use of the beta-ray emissions of rhenium-188 to treat basal cell or squamous cell carcinomas. The radiation source is enclosed in a compound which is applied to a thin protective foil directly over the lesion. In this way the radiation source can be applied to complex locations while minimizing radiation to healthy tissue. Brachytherapy for skin cancer provides good cosmetic results and clinical efficacy; studies with up to five years' follow-up have shown that brachytherapy is highly effective in terms of local control, and is comparable to EBRT. Treatment times are typically short, providing convenience for patients. It has been suggested that brachytherapy may become a standard of treatment for skin cancer in the near future. === Blood vessels === Brachytherapy can be used in the treatment of coronary in-stent restenosis, in which a catheter is placed inside blood vessels, through which sources are inserted and removed. In treating in-stent restenosis (ISR), drug-eluting stents (DES) have been found to be superior to intracoronary brachytherapy (ICBT). However, there is continued interest in vascular brachytherapy for persistent restenosis in failed stents and vein grafts. The therapy has also been investigated for use in the treatment of peripheral vasculature stenosis and considered for the treatment of atrial fibrillation. == Side effects == The likelihood and nature of potential acute, sub-acute or long-term side-effects associated with brachytherapy depends on the location of the tumour being treated and the type of brachytherapy being used. === Acute === Acute side effects associated with brachytherapy include localised bruising, swelling, bleeding, discharge or discomfort within the implanted region. These usually resolve within a few days following completion of treatment. Patients may also feel fatigued for a short period following treatment. Brachytherapy treatment for cervical or prostate cancer can cause acute and transient urinary symptoms such as urinary retention, urinary incontinence or painful urination (dysuria). Transient increased bowel frequency, diarrhoea, constipation or minor rectal bleeding may also occur. Acute and subacute side effects usually resolve over a matter of days or a few weeks. In the case of permanent (seed) brachytherapy for prostate cancer, there is a small chance that some seeds may migrate out of the treatment region into the bladder or urethra and be passed in the urine. Brachytherapy for skin cancer may result in a shedding of the outer layers of skin (desquamation) around the area of treatment in the weeks following therapy, which typically heals in 5–8 weeks.: Ch. 28  If the cancer is located on the lip, ulceration may occur as a result of brachytherapy, but usually resolves after 4–6 weeks.
Most of the acute side effects associated with brachytherapy can be treated with medication or through dietary changes, and usually disappear over time (typically a matter of weeks), once the treatment is completed. The acute side effects of HDR brachytherapy are broadly similar to EBRT. === Long-term === In a small number of people, brachytherapy may cause long-term side effects due to damage or disruption of adjacent tissues or organs. Long-term side effects are usually mild or moderate in nature. For example, urinary and digestive problems may persist as a result of brachytherapy for cervical or prostate cancer, and may require ongoing management. Brachytherapy for prostate cancer may cause erectile dysfunction in approximately 15–30% of patients.: Ch. 20  However, the risk of erectile dysfunction is related to age (older men are at a greater risk than younger men) and also the level of erectile function prior to receiving brachytherapy. In patients who do experience erectile dysfunction, the majority of cases can successfully be treated with drugs such as Viagra.: Ch. 20  Importantly, the risk of erectile dysfunction after brachytherapy is less than after radical prostatectomy. Brachytherapy for breast or skin cancer may cause scar tissue to form around the treatment area. In the case of breast brachytherapy, fat necrosis may occur as a result of fatty acids entering the breast tissues. This can cause the breast tissue to become swollen and tender. Fat necrosis is a benign condition and typically occurs 4–12 months after treatment and affects about 2% of patients. == Safety around others == Patients often ask if they need to have special safety precautions around family and friends after receiving brachytherapy. If temporary brachytherapy is used, no radioactive sources remain in the body after treatment. Therefore, there is no radiation risk to friends or family from being in close proximity with them. If permanent brachytherapy is used, low dose radioactive sources (seeds) are left in the body after treatment – the radiation levels are very low and decrease over time. In addition, the irradiation only affects tissues within a few millimetres of the radioactive sources (i.e. the tumour being treated). As a precaution, some people receiving permanent brachytherapy may be advised not to hold any small children or be too close to pregnant women for a short time after treatment. Radiation oncologists or nurses can provide specific instructions to patients and advise for how long they need to be careful. == Types == Different types of brachytherapy can be defined according to (1) the placement of the radiation sources in the target treatment area, (2) the rate or 'intensity' of the irradiation dose delivered to the tumour, and (3) the duration of dose delivery. === Source placement === The two main types of brachytherapy treatment in terms of the placement of the radioactive source are interstitial and contact. In the case of interstitial brachytherapy, the sources are placed directly in the target tissue of the affected site, such as the prostate or breast.: Ch. 1  Contact brachytherapy involves placement of the radiation source in a space next to the target tissue.: Ch. 1  This space may be a body cavity (intracavitary brachytherapy) such as the cervix, uterus or vagina; a body lumen (intraluminal brachytherapy) such as the trachea or oesophagus; or externally (surface brachytherapy) such as the skin.: Ch. 
1  A radiation source can also be placed in blood vessels (intravascular brachytherapy) for the treatment of coronary in-stent restenosis. === Dose rate === The dose rate of brachytherapy refers to the level or 'intensity' with which the radiation is delivered to the surrounding medium and is expressed in grays per hour (Gy/h). Low-dose rate (LDR) brachytherapy involves implanting radiation sources that emit radiation at a rate of up to 2 Gy·h−1. LDR brachytherapy is commonly used for cancers of the oral cavity, oropharynx, sarcomas: Ch. 27  and prostate cancer.: Ch. 20  Medium-dose rate (MDR) brachytherapy is characterized by a medium rate of dose delivery, ranging between 2 Gy·h−1 and 12 Gy·h−1. High-dose rate (HDR) brachytherapy is when the rate of dose delivery exceeds 12 Gy·h−1. The most common applications of HDR brachytherapy are in tumours of the cervix, esophagus, lungs, breasts and prostate. Most HDR treatments are performed on an outpatient basis, but this is dependent on the treatment site. Pulsed-dose rate (PDR) brachytherapy involves short pulses of radiation, typically once an hour, to simulate the overall rate and effectiveness of LDR treatment. Typical tumour sites treated by PDR brachytherapy are gynaecological: Ch. 14  and head and neck cancers. The calculation of radiation dose from radioactive seeds is crucial in the planning and administration of brachytherapy treatments. Most modern calculations are done using the formalism published by the American Association of Physicists in Medicine. For a geometry described by the distance r and angle θ between the seed and the point of interest, this formalism uses five parameters. Strength of the source: How much radiation is being emitted by the seed, expressed as air kerma strength and denoted by S k {\displaystyle S_{k}} . Dose rate of the source: How much dose the seed will deliver to the reference point over a certain period of time, denoted by Λ {\displaystyle \Lambda } . Geometry factor: How the shape of the seed will affect the dose at points away from the reference point, denoted by G ( r , θ ) {\displaystyle G(r,\theta )} . Anisotropy function: How much radiation will be stopped before passing out of the seed, denoted by F ( r , θ ) {\displaystyle F(r,\theta )} . Radial dose function: How the radiation will interact with the material surrounding the seed, denoted by g ( r ) {\displaystyle g(r)} . The equation which links these parameters is D ( r , θ ) = S k Λ G ( r , θ ) G ( r 0 , θ 0 ) g ( r ) F ( r , θ ) {\displaystyle D(r,\theta )=S_{k}\Lambda {\frac {G(r,\theta )}{G(r_{0},\theta _{0})}}g(r)F(r,\theta )} === Duration of dose delivery === The placement of radiation sources in the target area can be temporary or permanent. Temporary brachytherapy involves placement of radiation sources for a set duration (usually a number of minutes or hours) before being withdrawn.: Ch. 1  The specific treatment duration will depend on many different factors, including the required rate of dose delivery and the type, size and location of the cancer. In LDR and PDR brachytherapy, the source typically stays in place up to 24 hours before being removed, while in HDR brachytherapy this time is typically a few minutes. Permanent brachytherapy, also known as seed implantation, involves placing small LDR radioactive seeds or pellets (about the size of a grain of rice) in the tumour or treatment site and leaving them there permanently to gradually decay. Over a period of weeks or months, the level of radiation emitted by the sources will decline to almost zero.
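The dose formalism above can be sketched in a few lines of Python. This is a simplified point-source version: the dose-rate constant and the g(r) table below are placeholder values for illustration, not published consensus data for any real seed model, and the anisotropy function is set to 1. The decay helper at the end illustrates how the strength of a permanent implant declines; 59.4 days is approximately the half-life of iodine-125.

import numpy as np

R0 = 1.0  # reference distance r0, in cm

def geometry(r_cm):
    """Point-source geometry factor G(r) = 1/r**2 (line-source forms also exist)."""
    return 1.0 / r_cm ** 2

# Placeholder radial dose function g(r), linearly interpolated (illustrative only):
_g_r = np.array([0.5, 1.0, 2.0, 3.0, 5.0])       # distance, cm
_g_v = np.array([1.04, 1.00, 0.87, 0.73, 0.45])  # dimensionless

def radial_dose(r_cm):
    return float(np.interp(r_cm, _g_r, _g_v))

def dose_rate(sk, lam, r_cm, anisotropy=1.0):
    """D(r, theta) = Sk * Lambda * G(r)/G(r0) * g(r) * F(r, theta).
    sk: air-kerma strength; lam: dose-rate constant (placeholder units)."""
    return sk * lam * geometry(r_cm) / geometry(R0) * radial_dose(r_cm) * anisotropy

def decayed_strength(sk0, t_days, half_life_days=59.4):
    """Air-kerma strength after t days of radioactive decay (I-125 default)."""
    return sk0 * 0.5 ** (t_days / half_life_days)

print(dose_rate(sk=0.5, lam=1.0, r_cm=2.0))  # ~0.109: dose rate at 2 cm
print(decayed_strength(0.5, t_days=180))     # ~0.061: about 12% remains after ~3 half-lives

Summing dose_rate over all seed or dwell positions gives the total dose at a point, which is how treatment planning systems build up the dose distributions described in the planning sections below.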
The inactive seeds then remain in the treatment site with no lasting effect. Permanent brachytherapy is most commonly used in the treatment of prostate cancer. == Procedure == === Initial planning === To accurately plan the brachytherapy procedure, a thorough clinical examination is performed to understand the characteristics of the tumour. In addition, a range of imaging modalities can be used to visualise the shape and size of the tumour and its relation to surrounding tissues and organs. These include x-ray radiography, ultrasound, computed axial tomography (CT or CAT) scans, and magnetic resonance imaging (MRI).: Ch. 5  The data from many of these sources can be used to create a 3D visualisation of the tumour and the surrounding tissues.: Ch. 5  Using this information, a plan of the optimal distribution of the radiation sources can be developed. This includes consideration of how the source carriers (applicators), which are used to deliver the radiation to the treatment site, should be placed and positioned.: Ch. 5  Applicators are non-radioactive and are typically needles or plastic catheters. The specific type of applicator used will depend on the type of cancer being treated and the characteristics of the target tumour.: Ch. 5  This initial planning helps to ensure that 'cold spots' (too little irradiation) and 'hot spots' (too much irradiation) are avoided during treatment, as these can respectively result in treatment failure and side-effects. === Insertion === Before radioactive sources can be delivered to the tumour site, the applicators have to be inserted and correctly positioned in line with the initial planning. Imaging techniques, such as x-ray, fluoroscopy and ultrasound are typically used to help guide the placement of the applicators to their correct positions and to further refine the treatment plan.: Ch. 5  CT scans and MRI can also be used.: Ch. 5  Once the applicators are inserted, they are held in place against the skin using sutures or adhesive tape to prevent them from moving. Once the applicators are confirmed as being in the correct position, further imaging can be performed to guide detailed treatment planning.: Ch. 5  === Creation of a virtual patient === The images of the patient with the applicators in situ are imported into treatment planning software and the patient is brought into a dedicated shielded room for treatment. The treatment planning software enables multiple 2D images of the treatment site to be translated into a 3D 'virtual patient', within which the position of the applicators can be defined.: Ch. 5  The spatial relationships between the applicators, the treatment site and the surrounding healthy tissues within this 'virtual patient' are a copy of the relationships in the actual patient. === Optimizing the irradiation plan === To identify the optimal spatial and temporal distribution of radiation sources within the applicators of the implanted tissue or cavity, the treatment planning software allows virtual radiation sources to be placed within the virtual patient. The software shows a graphical representation of the distribution of the irradiation. This serves as a guide for the brachytherapy team to refine the distribution of the sources and provide a treatment plan that is optimally tailored to the anatomy of each patient before actual delivery of the irradiation begins. This approach is sometimes called 'dose-painting'. 
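As a toy illustration of the hot- and cold-spot screening mentioned above, the sketch below flags target voxels of a planned dose grid that fall outside chosen bounds. The 90%/110% thresholds and the synthetic dose grid are assumptions for illustration, not a clinical standard.

import numpy as np

def flag_dose_spots(dose, target_mask, prescription, cold=0.90, hot=1.10):
    """Count target voxels below (cold) or above (hot) the chosen bounds."""
    d = dose[target_mask]
    return {
        "cold_voxels": int(np.count_nonzero(d < cold * prescription)),
        "hot_voxels": int(np.count_nonzero(d > hot * prescription)),
    }

rng = np.random.default_rng(0)
dose = rng.normal(10.0, 0.8, size=(8, 8, 8))  # synthetic planned dose grid, Gy
target = np.zeros_like(dose, dtype=bool)
target[2:6, 2:6, 2:6] = True                  # a cubic 'target' region
print(flag_dose_spots(dose, target, prescription=10.0))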
=== Treatment delivery === The radiation sources used for brachytherapy are always enclosed within a non-radioactive capsule. The sources can be delivered manually, but are more commonly delivered through a technique known as 'afterloading'. Manual delivery of brachytherapy is limited to a few LDR applications, due to the risk of radiation exposure to clinical staff. In contrast, afterloading involves the accurate positioning of non-radioactive applicators in the treatment site, which are subsequently loaded with the radiation sources. In manual afterloading, the source is delivered into the applicator by the operator. Remote afterloading systems provide protection from radiation exposure to healthcare professionals by securing the radiation source in a shielded safe. Once the applicators are correctly positioned in the patient, they are connected to an 'afterloader' machine (containing the radioactive sources) through a series of connecting guide tubes. The treatment plan is sent to the afterloader, which then controls the delivery of the sources along the guide tubes into the pre-specified positions within the applicator. This process is only engaged once staff are removed from the treatment room. The sources remain in place for a pre-specified length of time, again following the treatment plan, following which they are returned along the tubes to the afterloader. On completion of delivery of the radioactive sources, the applicators are carefully removed from the body. Patients typically recover quickly from the brachytherapy procedure, often enabling it to be performed on an outpatient basis. Between 2003 and 2012 in United States community hospitals, the rate of hospital stays with brachytherapy had a 24.4 percent average annual decrease among adults aged 45–64 years and a 27.3 percent average annual decrease among adults aged 65–84 years. Brachytherapy was the operating room (OR) procedure with the greatest change in occurrence among hospital stays paid by Medicare and private insurance. == Radiation sources == Commonly used radiation sources (radionuclides) for brachytherapy include iodine-125, palladium-103, caesium-131, caesium-137, cobalt-60, iridium-192 and ruthenium-106; radium-226, the original brachytherapy source, is now of historical interest. == History == Brachytherapy dates back to 1901 (shortly after the discovery of radioactivity by Henri Becquerel in 1896) when Pierre Curie suggested to Henri-Alexandre Danlos that a radioactive source could be inserted into a tumour. It was found that the radiation caused the tumour to shrink. Independently, Alexander Graham Bell also suggested the use of radiation in this way. In the early twentieth century, techniques for the application of brachytherapy were pioneered at the Curie institute in Paris by Danlos and at St Luke's and Memorial Hospital in New York by Robert Abbe.: Ch. 1  Working with the Curies in their radium research laboratory at the University of Paris, American physicist William Duane refined a technique for extracting radon-222 gas from radium sulfate solutions. Solutions containing 1 gram of radium were "milked" to create radon "seeds" of about 20 millicuries each. These "seeds" were distributed throughout Paris for use in an early form of brachytherapy named endocurietherapy. Duane perfected this "milking" technique during his time in Paris and referred to the device as a "radium cow". Duane returned to the United States in 1913 and worked in a joint role as assistant professor of physics at Harvard and Research Fellow in Physics of the Harvard Cancer Commission. The Cancer Commission was founded in 1901 and hired Duane to investigate the usage of radium emanations in the treatment of cancer.
In 1915 he built Boston's first "radium cow" and thousands of patients were treated with the radon-222 generated from it. Interstitial radium therapy was common in the 1930s.: Ch. 1  Gold seeds filled with radon were used from as early as 1942 until at least 1958. Gold shells were selected by Gino Failla around 1920 to shield beta rays while passing gamma rays. Cobalt needles were also used briefly after World War II.: Ch. 1  Radon and cobalt were replaced by radioactive tantalum and gold, before iridium rose in prominence.: Ch. 1  First used in 1958, iridium is the most commonly used artificial source for brachytherapy today.: Ch. 1  Following initial interest in brachytherapy in Europe and the US, its use declined in the middle of the twentieth century due to the problem of radiation exposure to operators from the manual application of the radioactive sources. However, the development of remote afterloading systems, which allow the radiation to be delivered from a shielded safe, and the use of new radioactive sources in the 1950s and 1960s, reduced the risk of unnecessary radiation exposure to the operator and patients. This, together with more recent advancements in three-dimensional imaging modalities, computerised treatment planning systems and delivery equipment, has made brachytherapy a safe and effective treatment for many types of cancer today.: Ch. 1  == Environmental hazards and orphanhood == Due to their small size and the poor controls over their fate during the early decades of their use, a significant risk of loss exists for brachytherapy seeds. At least two instances have been recorded of seeds escaping from the tight radiological control under which they are held in hospitals, becoming what are known as orphaned sources. The first occurred sometime in the 1930s or 1940s, when a number of gold seeds filled with highly radioactive radon-222 were illicitly melted down and mixed with other gold scrap. This scrap gold was contaminated with the isotope's long-lived daughter nuclides: lead-210, bismuth-210 and polonium-210. It was then allowed to enter jewelry production in Upstate New York, where at least 144 pieces of jewelry were fabricated using this radioactive gold. Over the following decades there were several instances of fingers that required amputation to save patients from cancers caused by radioactive wedding, engagement and class rings. One Pennsylvania man was reported to have died from metastatic cancer caused by radioactive gold jewelry. Although the phenomenon of gold jewelry made from recycled radon seeds retaining their radioactivity had been noted as early as the mid-1930s by Edith Quimby, one of the founders of nuclear medicine, and the distribution and serious medical consequences of radioactive gold jewelry were reported in medical journals as early as April of 1967, it was not until February of 1981 that New York State's Department of Health took concerted action to identify items made from the contaminated gold. The second occurred in Prague, sometime before 2011. In September of that year a computer scientist named Pavel Bykov, who happened to have a small Geiger counter built into his wristwatch, noticed abnormal readings in the Podolí neighborhood playground where he had taken his children. After retrieving a larger counter from his home, Bykov determined that a highly radioactive source was buried in the playground. The park was evacuated by authorities soon after. Specialized excavation equipment recovered a 20 × 2 mm radium-226 needle from the playground's dirt.
The source retained a high degree of activity and was found to emit 500 μSv per hour at a distance of one metre, a rate at which only a few hours of exposure deliver a dose roughly equivalent to a year of natural background radiation. == See also ==
External beam radiotherapy
Prostate brachytherapy
Targeted intra-operative radiotherapy
Unsealed source radiotherapy
Nuclear medicine
Intraoperative radiation therapy
Contact X-ray brachytherapy (also called "electronic brachytherapy")
== References == == External links ==
American Brachytherapy Society (ABS)
Wikipedia/Sealed_source_radiotherapy
Radiography is an imaging technique using X-rays, gamma rays, or similar ionizing radiation and non-ionizing radiation to view the internal form of an object. Applications of radiography include medical ("diagnostic" radiography and "therapeutic radiography") and industrial radiography. Similar techniques are used in airport security (where "body scanners" generally use backscatter X-ray). To create an image in conventional radiography, a beam of X-rays is produced by an X-ray generator and projected towards the object. A certain amount of the X-rays or other radiation is absorbed by the object, depending on the object's density and structural composition. The X-rays that pass through the object are captured behind the object by a detector (either photographic film or a digital detector). The generation of flat two-dimensional images by this technique is called projectional radiography. In computed tomography (CT scanning), an X-ray source and its associated detectors rotate around the subject, which itself moves through the conical X-ray beam produced. Any given point within the subject is crossed from many directions by many different beams at different times. Information regarding the attenuation of these beams is collated and subjected to computation to generate two-dimensional images on three planes (axial, coronal, and sagittal), which can be further processed to produce a three-dimensional image. == History == Radiography's origins and fluoroscopy's origins can both be traced to 8 November 1895, when German physics professor Wilhelm Conrad Röntgen discovered the X-ray and noted that, while it could pass through human tissue, it could not pass through bone or metal. Röntgen referred to the radiation as "X", to indicate that it was an unknown type of radiation. He received the first Nobel Prize in Physics for his discovery. There are conflicting accounts of his discovery because Röntgen had his lab notes burned after his death, but this is a likely reconstruction by his biographers: Röntgen was investigating cathode rays using a fluorescent screen painted with barium platinocyanide and a Crookes tube which he had wrapped in black cardboard to shield its fluorescent glow. He noticed a faint green glow from the screen, about 1 metre away. Röntgen realized some invisible rays coming from the tube were passing through the cardboard to make the screen glow: they were passing through an opaque object to affect the film behind it. Röntgen discovered the medical use of X-rays when he made a picture of his wife's hand on a photographic plate exposed by X-rays. The photograph of his wife's hand was the first ever photograph of a human body part using X-rays. When she saw the picture, she said, "I have seen my death." The first use of X-rays under clinical conditions was by John Hall-Edwards in Birmingham, England, on 11 January 1896, when he radiographed a needle stuck in the hand of an associate. On 14 February 1896, Hall-Edwards also became the first to use X-rays in a surgical operation. The United States saw its first medical X-ray obtained using a discharge tube of Ivan Pulyui's design. In January 1896, on reading of Röntgen's discovery, Frank Austin of Dartmouth College tested all of the discharge tubes in the physics laboratory and found that only the Pulyui tube produced X-rays. This was a result of Pulyui's inclusion of an oblique "target" of mica, used for holding samples of fluorescent material, within the tube.
On 3 February 1896 Gilman Frost, professor of medicine at the college, and his brother Edwin Frost, professor of physics, exposed the wrist of Eddie McCarthy, whom Gilman had treated some weeks earlier for a fracture, to the X-rays and collected the resulting image of the broken bone on gelatin photographic plates obtained from Howard Langill, a local photographer also interested in Röntgen's work. X-rays were put to diagnostic use very early; for example, Alan Archibald Campbell-Swinton opened a radiographic laboratory in the United Kingdom in 1896, before the dangers of ionizing radiation were discovered. Indeed, Marie Curie pushed for radiography to be used to treat wounded soldiers in World War I. Initially, many kinds of staff conducted radiography in hospitals, including physicists, photographers, physicians, nurses, and engineers. The medical speciality of radiology grew up over many years around the new technology. When new diagnostic tests were developed, it was natural for the radiographers to be trained in and to adopt this new technology. Radiographers now perform fluoroscopy, computed tomography, mammography, ultrasound, nuclear medicine and magnetic resonance imaging as well. Although a nonspecialist dictionary might define radiography quite narrowly as "taking X-ray images", this has long been only part of the work of "X-ray departments", radiographers, and radiologists. Initially, radiographs were known as roentgenograms, while skiagrapher (from the Ancient Greek words for "shadow" and "writer") was used until about 1918 to mean radiographer. The Japanese term for the radiograph, rentogen (レントゲン), shares its etymology with the original English term. == Medical uses == Since the body is made up of various substances with differing densities, ionising and non-ionising radiation can be used to reveal the internal structure of the body on an image receptor by highlighting these differences using attenuation, or in the case of ionising radiation, the absorption of X-ray photons by the denser substances (like calcium-rich bones). The discipline involving the study of anatomy through the use of radiographic images is known as radiographic anatomy. Medical radiography acquisition is generally carried out by radiographers, while image analysis is generally done by radiologists. Some radiographers also specialise in image interpretation. Medical radiography includes a range of modalities producing many different types of image, each of which has a different clinical application. === Projectional radiography === The creation of images by exposing an object to X-rays or other high-energy forms of electromagnetic radiation and capturing the resulting remnant beam (or "shadow") as a latent image is known as "projection radiography". The "shadow" may be converted to light using a fluorescent screen, which is then captured on photographic film, it may be captured by a phosphor screen to be "read" later by a laser (CR), or it may directly activate a matrix of solid-state detectors (DR—similar to a very large version of a CCD in a digital camera). Bone and some organs (such as lungs) especially lend themselves to projection radiography. It is a relatively low-cost investigation with a high diagnostic yield. The difference between soft and hard body parts stems mostly from the fact that carbon has a very low X-ray cross section compared to calcium. 
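The contrast mechanism described above can be illustrated numerically with the Beer–Lambert law. The attenuation coefficients below are rough, assumed values at a single diagnostic energy; real coefficients depend strongly on photon energy and tissue composition.

import math

def transmitted_fraction(mu_per_cm, thickness_cm):
    """Beer-Lambert attenuation of a narrow beam: I/I0 = exp(-mu * x)."""
    return math.exp(-mu_per_cm * thickness_cm)

MU_SOFT = 0.20  # cm^-1, assumed illustrative value for soft tissue
MU_BONE = 0.55  # cm^-1, assumed illustrative value for bone

soft_only = transmitted_fraction(MU_SOFT, 15.0)          # 15 cm of soft tissue
with_bone = (transmitted_fraction(MU_SOFT, 13.0) *
             transmitted_fraction(MU_BONE, 2.0))         # 2 cm of it replaced by bone
print(f"{soft_only:.4f} vs {with_bone:.4f}")  # the bone path transmits less, so bone casts a white shadow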
=== Computed tomography === Computed tomography or CT scan (previously known as CAT scan, the "A" standing for "axial") uses ionizing radiation (x-ray radiation) in conjunction with a computer to create images of both soft and hard tissues. These images look as though the patient was sliced like bread (thus, "tomography" – "tomo" means "slice"). Though CT uses a higher amount of ionizing x-radiation than diagnostic x-rays (both utilising X-ray radiation), with advances in technology, levels of CT radiation dose and scan times have been reduced. CT exams are generally short, most lasting only as long as a breath-hold. Contrast agents are also often used, depending on the tissues needing to be seen. Radiographers perform these examinations, sometimes in conjunction with a radiologist (for instance, when a radiologist performs a CT-guided biopsy). === Dual energy X-ray absorptiometry === DEXA, or bone densitometry, is used primarily for osteoporosis tests. It is not projection radiography, as the X-rays are emitted in two narrow beams that are scanned across the patient, 90 degrees from each other. Usually the hip (head of the femur), lower back (lumbar spine), or heel (calcaneum) are imaged, and the bone density (amount of calcium) is determined and given a number (a T-score). It is not used for bone imaging, as the image quality is not good enough to make an accurate diagnostic image for fractures, inflammation, etc. It can also be used to measure total body fat, though this is not common. The radiation dose received from DEXA scans is very low, much lower than that of projection radiography examinations. === Fluoroscopy === Fluoroscopy is a term invented by Thomas Edison during his early X-ray studies. The name refers to the fluorescence he saw while looking at a glowing plate bombarded with X-rays. The technique provides moving projection radiographs. Fluoroscopy is mainly performed to view movement (of tissue or a contrast agent), or to guide a medical intervention, such as angioplasty, pacemaker insertion, or joint repair/replacement. The last can often be carried out in the operating theatre, using a portable fluoroscopy machine called a C-arm. It can move around the surgery table and make digital images for the surgeon. Biplanar fluoroscopy works in the same way as single-plane fluoroscopy except that it displays two planes at the same time. The ability to work in two planes is important for orthopedic and spinal surgery and can reduce operating times by eliminating re-positioning. Angiography is the use of fluoroscopy to view the cardiovascular system. An iodine-based contrast is injected into the bloodstream and watched as it travels around. Since liquid blood and the vessels are not very dense, a contrast with high density (like the large iodine atoms) is used to view the vessels under X-ray. Angiography is used to find aneurysms, leaks, blockages (thromboses), new vessel growth, and placement of catheters and stents. Balloon angioplasty is often done with angiography. === Contrast radiography === Contrast radiography uses a radiocontrast agent, a type of contrast medium, to make the structures of interest stand out visually from their background. Contrast agents are required in conventional angiography, and can be used in both projectional radiography and computed tomography (called contrast CT).
=== Other medical imaging === Although not technically radiographic techniques, due to not using X-rays, imaging modalities such as PET and MRI are sometimes grouped in radiography because the radiology department of hospitals handles all forms of imaging. Treatment using radiation is known as radiotherapy. == Industrial radiography == Industrial radiography is a method of non-destructive testing where many types of manufactured components can be examined to verify the internal structure and integrity of the specimen. Industrial radiography can be performed utilizing either X-rays or gamma rays. Both are forms of electromagnetic radiation. The difference between various forms of electromagnetic energy is related to the wavelength. X and gamma rays have the shortest wavelength and this property leads to the ability to penetrate, travel through, and exit various materials such as carbon steel and other metals. Specific methods include industrial computed tomography. == Image quality == Image quality will depend on resolution and density. Resolution is the ability of an image to show closely spaced structures in the object as separate entities in the image, while density is the blackening power of the image. Sharpness of a radiographic image is strongly determined by the size of the X-ray source. This is determined by the area of the electron beam hitting the anode. A large photon source results in more blurring in the final image and is worsened by an increase in image formation distance. This blurring can be measured as a contribution to the modulation transfer function of the imaging system. == Radiation dose == The dosage of radiation applied in radiography varies by procedure. For example, the effective dosage of a chest x-ray is 0.1 mSv, while an abdominal CT is 10 mSv. The American Association of Physicists in Medicine (AAPM) have stated that the "risks of medical imaging at patient doses below 50 mSv for single procedures or 100 mSv for multiple procedures over short time periods are too low to be detectable and may be nonexistent." Other scientific bodies sharing this conclusion include the International Organization of Medical Physicists, the UN Scientific Committee on the Effects of Atomic Radiation, and the International Commission on Radiological Protection. Nonetheless, radiological organizations, including the Radiological Society of North America (RSNA) and the American College of Radiology (ACR), as well as multiple government agencies, indicate safety standards to ensure that radiation dosage is as low as possible. === Shielding === Lead is the most common shield against X-rays because of its high density (11,340 kg/m3), stopping power, ease of installation and low cost. The maximum range of a high-energy photon such as an X-ray in matter is infinite; at every point in the matter traversed by the photon, there is a probability of interaction. Thus there is a very small probability of no interaction over very large distances. The shielding of a photon beam is therefore exponential (with an attenuation length close to the radiation length of the material); doubling the thickness of shielding will square the shielding effect (a short numerical sketch of this follows below). Starting in the 1950s, personal lead shielding began to be used directly on patients during X-rays of the abdomen, with the intention of protecting the gonads (reproductive organs) or a fetus if the patient was pregnant. Dental X-rays also typically used additional lead shielding to protect the thyroid.
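The exponential shielding behaviour described above is easy to verify numerically; the attenuation coefficient used below is an assumed illustrative value rather than a real coefficient for lead at any particular energy.

import math

def transmission(mu_per_cm, thickness_cm):
    """Fraction of photons passing a shield: exp(-mu * x)."""
    return math.exp(-mu_per_cm * thickness_cm)

MU = 5.0  # cm^-1, assumed illustrative value

t1 = transmission(MU, 0.2)
t2 = transmission(MU, 0.4)  # doubled thickness
print(f"{t1:.4f} {t2:.4f} {t1 ** 2:.4f}")  # t2 equals t1 squared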
However, a consensus was reached between 2019 and 2021 that lead shielding for routine diagnostic X-rays is not necessary and may in some cases be harmful. Personal shielding for medical professionals and other people in the room is still recommended. Rooms where X-rays are performed are lined with lead. The recommended thickness of lead shielding for a room where X-rays are performed varies as a function of X-ray energy, following the Recommendations of the Second International Congress of Radiology. === Campaigns === In response to increased public concern over radiation doses and the ongoing progress of best practices, the Alliance for Radiation Safety in Pediatric Imaging was formed within the Society for Pediatric Radiology. In concert with the American Society of Radiologic Technologists, the American College of Radiology, and the American Association of Physicists in Medicine, the Society for Pediatric Radiology developed and launched the Image Gently campaign, which is designed to maintain high-quality imaging studies while using the lowest doses and best radiation safety practices available for pediatric patients. This initiative has been endorsed and applied by a growing list of professional medical organizations around the world and has received support and assistance from companies that manufacture equipment used in radiology. Following the success of the Image Gently campaign, the American College of Radiology, the Radiological Society of North America, the American Association of Physicists in Medicine, and the American Society of Radiologic Technologists have launched a similar campaign to address this issue in the adult population, called Image Wisely. The World Health Organization and International Atomic Energy Agency (IAEA) of the United Nations have also been working in this area and have ongoing projects designed to broaden best practices and lower patient radiation dose. === Provider payment === Contrary to advice that emphasises only conducting radiographs when in the patient's interest, recent evidence suggests that they are used more frequently when dentists are paid under fee-for-service. == Equipment == === Sources === In medicine and dentistry, projectional radiography and computed tomography images generally use X-rays created by X-ray generators, which generate X-rays from X-ray tubes. The resultant images from the radiograph (X-ray generator/machine) or CT scanner are correctly referred to as "radiograms"/"roentgenograms" and "tomograms" respectively. A number of other sources of X-ray photons are possible, and may be used in industrial radiography or research; these include betatrons, linear accelerators (linacs), and synchrotrons. For gamma rays, radioactive sources such as 192Ir, 60Co, or 137Cs are used. === Grid === An anti-scatter grid may be placed between the patient and the detector to reduce the quantity of scattered x-rays that reach the detector. This improves the contrast resolution of the image, but also increases radiation exposure for the patient.
=== Detectors === Detectors can be divided into two major categories: imaging detectors (such as photographic plates and X-ray film (photographic film), now mostly replaced by various digitizing devices like image plates or flat panel detectors) and dose measurement devices (such as ionization chambers, Geiger counters, and dosimeters used to measure the local radiation exposure, dose, and/or dose rate, for example, for verifying that radiation protection equipment and procedures are effective on an ongoing basis). === Side markers === A radiopaque anatomical side marker is added to each image. For example, if the patient has their right hand x-rayed, the radiographer includes a radiopaque "R" marker within the field of the x-ray beam as an indicator of which hand has been imaged. If a physical marker is not included, the radiographer may add the correct side marker later as part of digital post-processing. === Image intensifiers and array detectors === As an alternative to X-ray detectors, image intensifiers are analog devices that readily convert the acquired X-ray image into one visible on a video screen. This device is made of a vacuum tube with a wide input surface coated on the inside with caesium iodide (CsI). When hit by X-rays, the phosphor material causes the adjacent photocathode to emit electrons. These electrons are then focused using electron lenses inside the intensifier onto an output screen coated with phosphorescent materials. The image from the output can then be recorded via a camera and displayed. Digital devices known as array detectors are becoming more common in fluoroscopy. These devices are made of discrete pixelated detectors known as thin-film transistors (TFT), which can work either indirectly, using photodetectors that detect light emitted from a scintillator material such as CsI, or directly, capturing the electrons produced when the X-rays hit the detector. Direct detectors do not tend to experience the blurring or spreading effect caused by phosphorescent scintillators or by film screens, since the detectors are activated directly by X-ray photons. == Dual-energy == Dual-energy radiography is a technique in which images are acquired using two separate tube voltages. This is the standard method for bone densitometry. It is also used in CT pulmonary angiography to decrease the required dose of iodinated contrast.
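Because each tube voltage probes the same ray path with different attenuation coefficients, a two-voltage acquisition gives two equations in two unknown material thicknesses per ray, which can be solved pixel by pixel. The following is a minimal sketch; the coefficients are assumed, illustrative numbers, not calibrated values from any real system.

```python
import numpy as np

# Assumed linear attenuation coefficients, 1/cm: rows are [low kVp, high kVp],
# columns are [bone, soft tissue]. A real scanner calibrates these per detector.
MU = np.array([[0.60, 0.25],
               [0.35, 0.20]])

def material_thicknesses(log_atten_low: float, log_atten_high: float) -> np.ndarray:
    """Solve -ln(I/I0) = mu_bone*t_bone + mu_soft*t_soft at both voltages."""
    return np.linalg.solve(MU, np.array([log_atten_low, log_atten_high]))

# Forward-simulate a ray through 1 cm of bone and 10 cm of soft tissue,
# then recover the thicknesses from the two log-attenuation measurements:
t_true = np.array([1.0, 10.0])
measured = MU @ t_true
print(material_thicknesses(*measured))  # -> [ 1. 10.]
```

In practice the decomposition is done in a calibrated basis of two materials (or photoelectric/Compton components), and noise makes the inversion ill-conditioned when the two spectra are too similar.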
== See also == Autoradiograph – Radiograph made by recording radiation emitted by samples on photographic plates Background radiation – Measure of ionizing radiation in the environment Computer-aided diagnosis – Type of diagnosis assisted by computers GXMO Imaging science – Representation or reproduction of an object's form List of civilian radiation accidents Medical imaging in pregnancy – Types of pregnancy imaging techniques Radiation – Waves or particles moving through space Digital radiography – Form of radiography Radiation contamination – Undesirable radioactive elements on surfaces or in gases, liquids, or solids Radiographer – Healthcare professional Thermography – Infrared imaging used to reveal temperature == References == == Further reading == == External links == MedPix Medical Image Database Video on X-ray inspection and industrial computed tomography, Karlsruhe University of Applied Sciences NIST's XAAMDI: X-Ray Attenuation and Absorption for Materials of Dosimetric Interest Database NIST's XCOM: Photon Cross Sections Database NIST's FAST: Attenuation and Scattering Tables A lost industrial radiography source event RadiologyInfo - The radiology information resource for patients: Radiography (X-rays)
Wikipedia/Radiography
Radionuclide therapy (RNT, also known as unsealed source radiotherapy or molecular radiotherapy) uses radioactive substances called radiopharmaceuticals to treat medical conditions, particularly cancer. These are introduced into the body by various means (injection or ingestion are the two most commonplace) and localise to specific locations, organs or tissues depending on their properties and administration routes. This includes anything from a simple compound such as sodium iodide, which locates to the thyroid via trapping of the iodide ion, to complex biopharmaceuticals such as recombinant antibodies, which are attached to radionuclides and seek out specific antigens on cell surfaces. This is a type of targeted therapy which uses the physical, chemical and biological properties of the radiopharmaceutical to target areas of the body for radiation treatment. The related diagnostic modality of nuclear medicine employs the same principles but uses different types or quantities of radiopharmaceuticals in order to image or analyse functional systems within the patient. RNT contrasts with sealed-source therapy (brachytherapy), where the radionuclide remains in a capsule or metal wire during treatment and needs to be physically placed precisely at the treatment position. When the radionuclides are bound to ligands (as with Lutathera and Pluvicto), the technique is also known as radioligand therapy. == Clinical use == === Thyroid conditions === Iodine-131 (131I) is the most common RNT worldwide; it uses the simple compound sodium iodide with a radioactive isotope of iodine. The patient (human or animal) may ingest an oral solid or liquid amount, or receive an intravenous injection of a solution of the compound. The iodide ion is selectively taken up by the thyroid gland. Both benign conditions like thyrotoxicosis and certain malignant conditions like papillary thyroid cancer can be treated with the radiation emitted by radioiodine. Iodine-131 produces beta and gamma radiation. The beta radiation released damages both normal thyroid tissue and any thyroid cancer that behaves like normal thyroid in taking up iodine, so providing the therapeutic effect, whilst most of the gamma radiation escapes the patient's body. Most of the iodine not taken up by thyroid tissue is excreted through the kidneys into the urine. After radioiodine treatment the urine will be radioactive or 'hot', and the patients themselves will also emit gamma radiation. Depending on the amount of radioactivity administered, it can take several days for the radioactivity to reduce to the point where the patient does not pose a radiation hazard to bystanders (the sketch below gives a rough sense of this timescale). Patients are often treated as inpatients, and there are international guidelines, as well as legislation in many countries, which govern the point at which they may return home. === Bone metastasis === Radium-223 chloride, strontium-89 chloride and samarium-153 EDTMP are used to treat secondary cancer in the bones. Radium and strontium mimic calcium in the body. Samarium is bound to the tetraphosphonate chelator EDTMP; the phosphonate is taken up at sites of osteoblastic (bone-forming) repair that occur adjacent to some metastatic lesions. === Bone marrow conditions === Beta-emitting phosphorus-32 (32P), as sodium phosphate, is used to treat overactive bone marrow, in which it is otherwise naturally metabolised. === Joint inflammation === ==== Yttrium-90 colloid ==== An yttrium-90 (90Y) colloidal suspension is used for radiosynovectomy in the knee joint.
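Returning to the radioiodine example above: a minimal sketch of pure physical decay for iodine-131, whose half-life is about 8.02 days. It ignores biological clearance through the urine, which shortens the effective half-life in practice, so it overestimates how long activity actually persists in a patient.

```python
# Fraction of administered iodine-131 activity remaining after pure
# physical decay, A(t) = A0 * 0.5**(t / T_half).
HALF_LIFE_DAYS = 8.02  # physical half-life of iodine-131

def remaining_fraction(days: float) -> float:
    return 0.5 ** (days / HALF_LIFE_DAYS)

for d in (1, 3, 7, 14):
    print(f"day {d:2d}: {remaining_fraction(d):.2f} of initial activity")
# day  1: 0.92; day  3: 0.77; day  7: 0.55; day 14: 0.30
```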
=== Liver tumours === ==== Yttrium-90 spheres ==== 90Y in the form of resin or glass spheres can be used to treat primary and metastatic liver cancers. === Neuroendocrine tumours === ==== Iodine-131 mIBG ==== 131I-mIBG (metaiodobenzylguanidine) is used for the treatment of phaeochromocytoma and neuroblastoma. ==== Lutetium-177 ==== 177Lu is bound with a DOTA chelator to target neuroendocrine tumours. == Experimental antibody based methods == At the Institute for Transuranium Elements (ITU), work is being done on alpha-immunotherapy, an experimental method in which antibodies bearing alpha-emitting isotopes are used. Bismuth-213 is one of the isotopes which has been used; it is made by the alpha decay of actinium-225. The generation of a short-lived isotope from a longer-lived one in this way is a useful method of providing a portable supply of a short-lived isotope, similar to the generation of technetium-99m by a technetium generator. The actinium-225 is made by irradiating radium-226 in a cyclotron. == References ==
Wikipedia/Unsealed_source_radiotherapy
Scientific laws or laws of science are statements, based on repeated experiments or observations, that describe or predict a range of natural phenomena. The term law has diverse usage in many cases (approximate, accurate, broad, or narrow) across all fields of natural science (physics, chemistry, astronomy, geoscience, biology). Laws are developed from data and can be further developed through mathematics; in all cases they are directly or indirectly based on empirical evidence. It is generally understood that they implicitly reflect, though they do not explicitly assert, causal relationships fundamental to reality, and are discovered rather than invented. Scientific laws summarize the results of experiments or observations, usually within a certain range of application. In general, the accuracy of a law does not change when a new theory of the relevant phenomenon is worked out, but rather the scope of the law's application, since the mathematics or statement representing the law does not change. As with other kinds of scientific knowledge, scientific laws do not express absolute certainty, as mathematical laws do. A scientific law may be contradicted, restricted, or extended by future observations. A law can often be formulated as one or several statements or equations, so that it can predict the outcome of an experiment. Laws differ from hypotheses and postulates, which are proposed during the scientific process before and during validation by experiment and observation. Hypotheses and postulates are not laws, since they have not been verified to the same degree, although they may lead to the formulation of laws. Laws are narrower in scope than scientific theories, which may entail one or several laws. Science distinguishes a law or theory from facts. Calling a law a fact is ambiguous, an overstatement, or an equivocation. The nature of scientific laws has been much discussed in philosophy, but in essence scientific laws are simply empirical conclusions reached by the scientific method; they are intended to be neither laden with ontological commitments nor statements of logical absolutes. Social sciences such as economics have also attempted to formulate scientific laws, though these generally have much less predictive power. == Overview == A scientific law always applies to a physical system under repeated conditions, and it implies that there is a causal relationship involving the elements of the system. Factual and well-confirmed statements like "Mercury is liquid at standard temperature and pressure" are considered too specific to qualify as scientific laws. A central problem in the philosophy of science, going back to David Hume, is that of distinguishing causal relationships (such as those implied by laws) from principles that arise due to constant conjunction. Laws differ from scientific theories in that they do not posit a mechanism or explanation of phenomena: they are merely distillations of the results of repeated observation. As such, the applicability of a law is limited to circumstances resembling those already observed, and the law may be found to be false when extrapolated. 
Ohm's law only applies to linear networks; Newton's law of universal gravitation only applies in weak gravitational fields; the early laws of aerodynamics, such as Bernoulli's principle, do not apply in the case of compressible flow such as occurs in transonic and supersonic flight; Hooke's law only applies to strain below the elastic limit; Boyle's law applies with perfect accuracy only to the ideal gas, etc. These laws remain useful, but only under the specified conditions where they apply. Many laws take mathematical forms, and thus can be stated as an equation; for example, the law of conservation of energy can be written as Δ E = 0 {\displaystyle \Delta E=0} , where E {\displaystyle E} is the total amount of energy in the universe. Similarly, the first law of thermodynamics can be written as d U = δ Q − δ W {\displaystyle \mathrm {d} U=\delta Q-\delta W\,} , and Newton's second law can be written as F = d p d t . {\displaystyle \textstyle F={\frac {dp}{dt}}.} While these scientific laws explain what our senses perceive, they are still empirical (acquired by observation or scientific experiment) and so are not like mathematical theorems which can be proved purely by mathematics. Like theories and hypotheses, laws make predictions; specifically, they predict that new observations will conform to the given law. Laws can be falsified if they are found in contradiction with new data. Some laws are only approximations of other more general laws, and are good approximations with a restricted domain of applicability. For example, Newtonian dynamics (which is based on Galilean transformations) is the low-speed limit of special relativity (since the Galilean transformation is the low-speed approximation to the Lorentz transformation). Similarly, the Newtonian gravitation law is a low-mass approximation of general relativity, and Coulomb's law is an approximation to quantum electrodynamics at large distances (compared to the range of weak interactions). In such cases it is common to use the simpler, approximate versions of the laws, instead of the more accurate general laws. Laws are constantly being tested experimentally to increasing degrees of precision, which is one of the main goals of science. The fact that laws have never been observed to be violated does not preclude testing them at increased accuracy or in new kinds of conditions to confirm whether they continue to hold, or whether they break, and what can be discovered in the process. It is always possible for laws to be invalidated or proven to have limitations, by repeatable experimental evidence, should any be observed. Well-established laws have indeed been invalidated in some special cases, but the new formulations created to explain the discrepancies generalize upon, rather than overthrow, the originals. That is, the invalidated laws have been found to be only close approximations, to which other terms or factors must be added to cover previously unaccounted-for conditions, e.g. very large or very small scales of time or space, enormous speeds or masses, etc. Thus, rather than unchanging knowledge, physical laws are better viewed as a series of improving and more precise generalizations. == Properties == Scientific laws are typically conclusions, based on repeated scientific experiments and observations over many years, which have become accepted universally within the scientific community.
A scientific law is "inferred from particular facts, applicable to a defined group or class of phenomena, and expressible by the statement that a particular phenomenon always occurs if certain conditions be present". The production of a summary description of our environment in the form of such laws is a fundamental aim of science. Several general properties of scientific laws, particularly when referring to laws in physics, have been identified. Scientific laws are: True, at least within their regime of validity. By definition, there have never been repeatable contradicting observations. Universal. They appear to apply everywhere in the universe. Simple. They are typically expressed in terms of a single mathematical equation. Absolute. Nothing in the universe appears to affect them. Stable. Unchanged since first discovered (although they may have been shown to be approximations of more accurate laws). All-encompassing. Everything in the universe apparently must comply with them (according to observations). Generally conservative of quantity. Often expressions of existing homogeneities (symmetries) of space and time. Typically theoretically reversible in time (if non-quantum), although time itself is irreversible. Broad. In physics, laws exclusively refer to the broad domain of matter, motion, energy, and force itself, rather than more specific systems in the universe, such as living systems, e.g. the mechanics of the human body. The term "scientific law" is traditionally associated with the natural sciences, though the social sciences also contain laws. For example, Zipf's law is a law in the social sciences which is based on mathematical statistics. In these cases, laws may describe general trends or expected behaviors rather than being absolutes. In natural science, impossibility assertions come to be widely accepted as overwhelmingly probable rather than considered proved to the point of being unchallengeable. The basis for this strong acceptance is a combination of extensive evidence of something not occurring and an underlying theory, very successful in making predictions, whose assumptions lead logically to the conclusion that something is impossible. While an impossibility assertion in natural science can never be absolutely proved, it could be refuted by the observation of a single counterexample. Such a counterexample would require that the assumptions underlying the theory that implied the impossibility be re-examined. Some examples of widely accepted impossibilities in physics are perpetual motion machines, which violate the law of conservation of energy, exceeding the speed of light, which violates the implications of special relativity, the uncertainty principle of quantum mechanics, which asserts the impossibility of simultaneously knowing both the position and the momentum of a particle, and Bell's theorem: no physical theory of local hidden variables can ever reproduce all of the predictions of quantum mechanics.
For example, conservation of energy is a consequence of the shift symmetry of time (no moment of time is different from any other), while conservation of momentum is a consequence of the symmetry (homogeneity) of space (no place in space is special, or different from any other). The indistinguishability of all particles of each fundamental type (say, electrons, or photons) results in the Dirac and Bose quantum statistics which in turn result in the Pauli exclusion principle for fermions and in Bose–Einstein condensation for bosons. Special relativity uses rapidity to express motion according to the symmetries of hyperbolic rotation, a transformation mixing space and time. Symmetry between inertial and gravitational mass results in general relativity. The inverse square law of interactions mediated by massless bosons is the mathematical consequence of the 3-dimensionality of space. One strategy in the search for the most fundamental laws of nature is to search for the most general mathematical symmetry group that can be applied to the fundamental interactions. == Laws of physics == === Conservation laws === ==== Conservation and symmetry ==== Conservation laws are fundamental laws that follow from the homogeneity of space, time and phase, in other words symmetry. Noether's theorem: Any quantity with a continuously differentiable symmetry in the action has an associated conservation law. Conservation of mass was the first law to be understood, since most macroscopic physical processes involving masses, for example, collisions of massive particles or fluid flow, give the appearance that mass is conserved. Mass conservation was observed to be true for all chemical reactions. In general, this is only approximate: with the advent of relativity and experiments in nuclear and particle physics, it was found that mass can be transformed into energy and vice versa, so mass is not always conserved but is part of the more general conservation of mass–energy. Conservation of energy, momentum and angular momentum for isolated systems follow from symmetry under translation in time, translation in space, and rotation, respectively. Conservation of charge was also realized, since charge has never been observed to be created or destroyed and has only been found to move from place to place. ==== Continuity and transfer ==== Conservation laws can be expressed using the general continuity equation for a conserved quantity, which can be written in differential form as: ∂ ρ ∂ t = − ∇ ⋅ J {\displaystyle {\frac {\partial \rho }{\partial t}}=-\nabla \cdot \mathbf {J} } where ρ is some quantity per unit volume, J is the flux of that quantity (change in quantity per unit time per unit area). Intuitively, the divergence (denoted ∇⋅) of a vector field is a measure of flux diverging radially outwards from a point, so the negative is the amount piling up at a point; hence the rate of change of density in a region of space must be the amount of flux leaving or collecting in some region (see the main article for details). The fluxes of various physical quantities in transport, and their associated continuity equations, all take this form. More general equations are the convection–diffusion equation and Boltzmann transport equation, which have their roots in the continuity equation.
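The continuity equation can be checked symbolically for a simple assumed case: a density profile advected at constant speed v, with flux J = ρv. This is a minimal sketch of the bookkeeping, not a general proof.

```python
import sympy as sp

x, t, v = sp.symbols("x t v")
f = sp.Function("f")      # arbitrary profile shape

rho = f(x - v * t)        # density rho(x, t): a shape moving at speed v
J = v * rho               # flux of the conserved quantity

# In one dimension the divergence is d/dx, so the continuity equation reads
# d(rho)/dt + dJ/dx = 0; the residual should simplify to zero identically.
residual = sp.diff(rho, t) + sp.diff(J, x)
print(sp.simplify(residual))  # -> 0
```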
=== Laws of classical mechanics === ==== Principle of least action ==== Classical mechanics, including Newton's laws, Lagrange's equations, Hamilton's equations, etc., can be derived from the following principle: δ S = δ ∫ t 1 t 2 L ( q , q ˙ , t ) d t = 0 {\displaystyle \delta {\mathcal {S}}=\delta \int _{t_{1}}^{t_{2}}L(\mathbf {q} ,\mathbf {\dot {q}} ,t)\,dt=0} where S {\displaystyle {\mathcal {S}}} is the action; the integral of the Lagrangian L ( q , q ˙ , t ) = T ( q ˙ , t ) − V ( q , q ˙ , t ) {\displaystyle L(\mathbf {q} ,\mathbf {\dot {q}} ,t)=T(\mathbf {\dot {q}} ,t)-V(\mathbf {q} ,\mathbf {\dot {q}} ,t)} of the physical system between two times t1 and t2. The kinetic energy of the system is T (a function of the rate of change of the configuration of the system), and potential energy is V (a function of the configuration and its rate of change). The configuration of a system which has N degrees of freedom is defined by generalized coordinates q = (q1, q2, ... qN). There are generalized momenta conjugate to these coordinates, p = (p1, p2, ..., pN), where: p i = ∂ L ∂ q ˙ i {\displaystyle p_{i}={\frac {\partial L}{\partial {\dot {q}}_{i}}}} The action and Lagrangian both contain the dynamics of the system for all times. The term "path" simply refers to a curve traced out by the system in terms of the generalized coordinates in the configuration space, i.e. the curve q(t), parameterized by time (see also parametric equation for this concept). The action is a functional rather than a function, since it depends on the Lagrangian, and the Lagrangian depends on the path q(t), so the action depends on the entire "shape" of the path for all times (in the time interval from t1 to t2). Between two instants of time, there are infinitely many paths, but the one for which the action is stationary (to the first order) is the true path. The stationary value is required for the entire continuum of Lagrangian values corresponding to some path, not just one value of the Lagrangian (in other words, it is not as simple as "differentiating a function and setting it to zero, then solving the equations to find the points of maxima and minima etc"; rather, this idea is applied to the entire "shape" of the function; see calculus of variations for more details on this procedure). Notice that L is not the total energy E of the system: the Lagrangian is the difference of the kinetic and potential energies, rather than their sum E = T + V {\displaystyle E=T+V} The following general approaches to classical mechanics are summarized below in the order of establishment. They are equivalent formulations. Newton's is commonly used due to simplicity, but Hamilton's and Lagrange's equations are more general, and their range can extend into other branches of physics with suitable modifications. From the above, any equation of motion in classical mechanics can be derived. Corollaries in mechanics : Euler's laws of motion Euler's equations (rigid body dynamics) Corollaries in fluid mechanics : Equations describing fluid flow in various situations can be derived, using the above classical equations of motion and often conservation of mass, energy and momentum. Some elementary examples follow. Archimedes' principle Bernoulli's principle Poiseuille's law Stokes' law Navier–Stokes equations Faxén's law === Laws of gravitation and relativity === Some of the more famous laws of nature are found in Isaac Newton's theories of (now) classical mechanics, presented in his Philosophiae Naturalis Principia Mathematica, and in Albert Einstein's theory of relativity.
==== Modern laws ==== Special relativity : The two postulates of special relativity are not "laws" in themselves, but assumptions of their nature in terms of relative motion. They can be stated as "the laws of physics are the same in all inertial frames" and "the speed of light is constant and has the same value in all inertial frames". The said postulates lead to the Lorentz transformations – the transformation law between two frames of reference moving relative to each other. For any 4-vector A ′ = Λ A {\displaystyle A'=\Lambda A} this replaces the Galilean transformation law from classical mechanics. The Lorentz transformations reduce to the Galilean transformations for low velocities much less than the speed of light c. The magnitudes of 4-vectors are invariants – not "conserved", but the same for all inertial frames (i.e. every observer in an inertial frame will agree on the same value); in particular, if A is the four-momentum, its magnitude yields the famous invariant equation for mass–energy and momentum conservation (see invariant mass): E 2 = ( p c ) 2 + ( m c 2 ) 2 {\displaystyle E^{2}=(pc)^{2}+(mc^{2})^{2}} in which the (more famous) mass–energy equivalence E = mc2 is a special case. General relativity : General relativity is governed by the Einstein field equations, which describe the curvature of space-time due to mass–energy equivalent to the gravitational field. Solving the equation for the geometry of space warped due to the mass distribution gives the metric tensor. Using the geodesic equation, the motion of masses falling along the geodesics can be calculated. Gravitoelectromagnetism : In a relatively flat spacetime due to weak gravitational fields, gravitational analogues of Maxwell's equations can be found: the GEM equations, which describe an analogous gravitomagnetic field. They are well established by the theory, and experimental tests form ongoing research. ==== Classical laws ==== Kepler's laws, though originally discovered from planetary observations (also due to Tycho Brahe), are true for motion under any inverse-square central force; the second law in fact holds for any central force. === Thermodynamics === Newton's law of cooling Fourier's law Ideal gas law, which combines a number of separately developed gas laws (Boyle's law, Charles's law, Gay-Lussac's law, Avogadro's law) into one, now improved by other equations of state Dalton's law (of partial pressures) Boltzmann equation Carnot's theorem Kopp's law === Electromagnetism === Maxwell's equations give the time-evolution of the electric and magnetic fields due to electric charge and current distributions. Given the fields, the Lorentz force law is the equation of motion for charges in the fields. These equations can be modified to include magnetic monopoles, and are consistent with our observations of monopoles either existing or not existing; if they do not exist, the generalized equations reduce to the ones above, if they do, the equations become fully symmetric in electric and magnetic charges and currents. Indeed, there is a duality transformation where electric and magnetic charges can be "rotated into one another", and still satisfy Maxwell's equations. Pre-Maxwell laws : These laws were found before the formulation of Maxwell's equations. They are not fundamental, since they can be derived from Maxwell's equations. Coulomb's law can be found from Gauss's law (electrostatic form) and the Biot–Savart law can be deduced from Ampere's law (magnetostatic form). Lenz's law and Faraday's law can be incorporated into the Maxwell–Faraday equation.
Nonetheless, they are still very effective for simple calculations. Lenz's law Coulomb's law Biot–Savart law Other laws : Ohm's law Kirchhoff's laws Joule's law === Photonics === Classically, optics is based on a variational principle: light travels from one point in space to another in the shortest time. Fermat's principle In geometric optics laws are based on approximations in Euclidean geometry (such as the paraxial approximation). Law of reflection Law of refraction, Snell's law In physical optics, laws are based on physical properties of materials. Brewster's angle Malus's law Beer–Lambert law In actuality, optical properties of matter are significantly more complex and require quantum mechanics. === Laws of quantum mechanics === Quantum mechanics has its roots in postulates. This leads to results which are not usually called "laws", but hold the same status, in that all of quantum mechanics follows from them. These postulates can be summarized as follows: The state of a physical system, be it a particle or a system of many particles, is described by a wavefunction. Every physical quantity is described by an operator acting on the system; the measured quantity has a probabilistic nature. The wavefunction obeys the Schrödinger equation. Solving this wave equation predicts the time-evolution of the system's behavior, analogous to solving Newton's laws in classical mechanics. Two identical particles, such as two electrons, cannot be distinguished from one another by any means. Physical systems are classified by their symmetry properties. These postulates in turn imply many other phenomena, e.g., uncertainty principles and the Pauli exclusion principle. === Radiation laws === Applying electromagnetism, thermodynamics, and quantum mechanics, to atoms and molecules, some laws of electromagnetic radiation and light are as follows. Stefan–Boltzmann law Planck's law of black-body radiation Wien's displacement law Radioactive decay law == Laws of chemistry == Chemical laws are those laws of nature relevant to chemistry. Historically, observations led to many empirical laws, though now it is known that chemistry has its foundations in quantum mechanics. Quantitative analysis : The most fundamental concept in chemistry is the law of conservation of mass, which states that there is no detectable change in the quantity of matter during an ordinary chemical reaction. Modern physics shows that it is actually energy that is conserved, and that energy and mass are related; a concept which becomes important in nuclear chemistry. Conservation of energy leads to the important concepts of equilibrium, thermodynamics, and kinetics. Additional laws of chemistry elaborate on the law of conservation of mass. Joseph Proust's law of definite composition says that pure chemicals are composed of elements in a definite formulation; we now know that the structural arrangement of these elements is also important. Dalton's law of multiple proportions says that these chemicals will present themselves in proportions that are small whole numbers; although in many systems (notably biomacromolecules and minerals) the ratios tend to require large numbers, and are frequently represented as a fraction. The law of definite composition and the law of multiple proportions are the first two of the three laws of stoichiometry, the proportions by which the chemical elements combine to form chemical compounds. 
The third law of stoichiometry is the law of reciprocal proportions, which provides the basis for establishing equivalent weights for each chemical element. Elemental equivalent weights can then be used to derive atomic weights for each element. More modern laws of chemistry define the relationship between energy and its transformations. Reaction kinetics and equilibria : In equilibrium, molecules exist in a mixture defined by the transformations possible on the timescale of the equilibrium, and are in a ratio defined by the intrinsic energy of the molecules; the lower the intrinsic energy, the more abundant the molecule. Le Chatelier's principle states that the system opposes changes in conditions from equilibrium states, i.e. there is an opposition to changing the state of an equilibrium reaction. Transforming one structure to another requires the input of energy to cross an energy barrier; this can come from the intrinsic energy of the molecules themselves, or from an external source which will generally accelerate transformations. The higher the energy barrier, the slower the transformation occurs. There is a hypothetical intermediate, or transition structure, that corresponds to the structure at the top of the energy barrier. The Hammond–Leffler postulate states that this structure looks most similar to the product or starting material which has intrinsic energy closest to that of the energy barrier. Stabilizing this hypothetical intermediate through chemical interaction is one way to achieve catalysis. All chemical processes are reversible (law of microscopic reversibility), although some processes have such an energy bias that they are essentially irreversible. The reaction rate has the mathematical parameter known as the rate constant. The Arrhenius equation gives the temperature and activation energy dependence of the rate constant, an empirical law. Thermochemistry : Dulong–Petit law Gibbs–Helmholtz equation Hess's law Gas laws : Raoult's law Henry's law Chemical transport : Fick's laws of diffusion Graham's law Lamm equation == Laws of biology == === Ecology === Competitive exclusion principle or Gause's law === Genetics === Mendelian laws (dominance and uniformity, segregation of genes, and independent assortment) Hardy–Weinberg principle === Natural selection === Whether or not natural selection is a "law of nature" is controversial among biologists. Henry Byerly, an American philosopher known for his work on evolutionary theory, discussed the problem of interpreting a principle of natural selection as a law. He suggested a formulation of natural selection as a framework principle that can contribute to a better understanding of evolutionary theory. His approach was to express relative fitness, the propensity of a genotype to increase in proportionate representation in a competitive environment, as a function of adaptedness (adaptive design) of the organism. == Laws of Earth sciences == === Geography === Arbia's law of geography Tobler's first law of geography Tobler's second law of geography === Geology === Archie's law Buys Ballot's law Birch's law Byerlee's law Principle of original horizontality Law of superposition Principle of lateral continuity Principle of cross-cutting relationships Principle of faunal succession Principle of inclusions and components Walther's law == Other fields == Some mathematical theorems and axioms are referred to as laws because they provide logical foundation to empirical laws.
Examples of other observed phenomena sometimes described as laws include the Titius–Bode law of planetary positions, Zipf's law of linguistics, and Moore's law of technological growth. Many of these laws fall within the scope of uncomfortable science. Other laws are pragmatic and observational, such as the law of unintended consequences. By analogy, principles in other fields of study are sometimes loosely referred to as "laws". These include Occam's razor as a principle of philosophy and the Pareto principle of economics. == History == The observation and detection of underlying regularities in nature date from prehistoric times – the recognition of cause-and-effect relationships implicitly recognises the existence of laws of nature. The recognition of such regularities as independent scientific laws per se, though, was limited by their entanglement in animism, and by the attribution of many effects that do not have readily obvious causes (such as physical phenomena) to the actions of gods, spirits, supernatural beings, etc. Observation and speculation about nature were intimately bound up with metaphysics and morality. In Europe, systematic theorizing about nature (physis) began with the early Greek philosophers and scientists and continued into the Hellenistic and Roman imperial periods, during which time the intellectual influence of Roman law increasingly became paramount. The formula "law of nature" first appears as "a live metaphor" favored by Latin poets Lucretius, Virgil, Ovid, Manilius, in time gaining a firm theoretical presence in the prose treatises of Seneca and Pliny. Why this Roman origin? According to [historian and classicist Daryn] Lehoux's persuasive narrative, the idea was made possible by the pivotal role of codified law and forensic argument in Roman life and culture. For the Romans ... the place par excellence where ethics, law, nature, religion and politics overlap is the law court. When we read Seneca's Natural Questions, and watch again and again just how he applies standards of evidence, witness evaluation, argument and proof, we can recognize that we are reading one of the great Roman rhetoricians of the age, thoroughly immersed in forensic method. And not Seneca alone. Legal models of scientific judgment turn up all over the place, and for example prove equally integral to Ptolemy's approach to verification, where the mind is assigned the role of magistrate, the senses that of disclosure of evidence, and dialectical reason that of the law itself. The precise formulation of what are now recognized as modern and valid statements of the laws of nature dates from the 17th century in Europe, with the beginning of accurate experimentation and the development of advanced forms of mathematics. During this period, natural philosophers such as Isaac Newton (1642–1727) were influenced by a religious view – stemming from medieval concepts of divine law – which held that God had instituted absolute, universal and immutable physical laws. In chapter 7 of The World, René Descartes (1596–1650) described "nature" as matter itself, unchanging as created by God, thus changes in parts "are to be attributed to nature. The rules according to which these changes take place I call the 'laws of nature'." The modern scientific method which took shape at this time (with Francis Bacon (1561–1626) and Galileo (1564–1642)) contributed to a trend of separating science from theology, with minimal speculation about metaphysics and ethics.
(Natural law in the political sense, conceived as universal (i.e., divorced from sectarian religion and accidents of place), was also elaborated in this period by scholars such as Grotius (1583–1645), Spinoza (1632–1677), and Hobbes (1588–1679).) The distinction between natural law in the political-legal sense and law of nature or physical law in the scientific sense is a modern one, both concepts being equally derived from physis, the Greek word (translated into Latin as natura) for nature. == See also == == References == == Further reading == == External links == Physics Formulary, a useful book in different formats containing many of the physical laws and formulae. Eformulae.com, website containing most of the formulae in different disciplines. Stanford Encyclopedia of Philosophy: "Laws of Nature" by John W. Carroll. Baaquie, Belal E. "Laws of Physics : A Primer". Core Curriculum, National University of Singapore. Francis, Erik Max. "The laws list". Physics. Alcyone Systems. Pazameta, Zoran. "The laws of nature". Archived 2014-02-26 at the Wayback Machine. Committee for the Scientific Investigation of Claims of the Paranormal. The Internet Encyclopedia of Philosophy. "Laws of Nature" – by Norman Swartz. Mark Buchanan; Frank Close; Nancy Cartwright; Melvyn Bragg (host) (Oct 19, 2000). "Laws of Nature". In Our Time. BBC Radio 4.
Wikipedia/Law_of_physics
Timeline of particle physics technology 1896 - Charles Wilson discovers that energetic particles produce droplet tracks in supersaturated gases. 1897-1901 - Discovery of the Townsend discharge by John Sealy Townsend. 1908 - Hans Geiger and Ernest Rutherford use the Townsend discharge principle to detect alpha particles. 1911 - Charles Wilson finishes a sophisticated cloud chamber. 1928 - Hans Geiger and Walther Müller invent the Geiger–Müller tube, which is based upon the gas ionisation principle used by Geiger in 1908, but is a practical device that can also detect beta and gamma radiation. This is implicitly also the invention of the Geiger–Müller counter. 1934 - Ernest Lawrence and M. Stanley Livingston invent the cyclotron. 1945 - Edwin McMillan devises a synchrotron. 1952 - Donald Glaser develops the bubble chamber. 1968 - Georges Charpak and Roger Bouclier build the first multiwire proportional chamber.
Wikipedia/Timeline_of_particle_physics_technology
A quantum mechanical system or particle that is bound (that is, confined spatially) can only take on certain discrete values of energy, called energy levels. This contrasts with classical particles, which can have any amount of energy. The term is commonly used for the energy levels of the electrons in atoms, ions, or molecules, which are bound by the electric field of the nucleus, but can also refer to energy levels of nuclei or vibrational or rotational energy levels in molecules. The energy spectrum of a system with such discrete energy levels is said to be quantized. In chemistry and atomic physics, an electron shell, or principal energy level, may be thought of as the orbit of one or more electrons around an atom's nucleus. The closest shell to the nucleus is called the "1 shell" (also called "K shell"), followed by the "2 shell" (or "L shell"), then the "3 shell" (or "M shell"), and so on further and further from the nucleus. The shells correspond with the principal quantum numbers (n = 1, 2, 3, 4, ...) or are labeled alphabetically with letters used in the X-ray notation (K, L, M, N, ...). Each shell can contain only a fixed number of electrons: The first shell can hold up to two electrons, the second shell can hold up to eight (2 + 6) electrons, the third shell can hold up to 18 (2 + 6 + 10) and so on. The general formula is that the nth shell can in principle hold up to 2n2 electrons. Since electrons are electrically attracted to the nucleus, an atom's electrons will generally occupy outer shells only if the inner shells have already been completely filled by other electrons. However, this is not a strict requirement: atoms may have two or even three incomplete outer shells. (See Madelung rule for more details.) For an explanation of why electrons exist in these shells see electron configuration. If the potential energy is set to zero at infinite distance from the atomic nucleus or molecule, the usual convention, then bound electron states have negative potential energy. If an atom, ion, or molecule is at the lowest possible energy level, it and its electrons are said to be in the ground state. If it is at a higher energy level, it is said to be excited, or any electrons that have higher energy than the ground state are excited. An energy level is regarded as degenerate if there is more than one measurable quantum mechanical state associated with it. == Explanation == Quantized energy levels result from the wave behavior of particles, which gives a relationship between a particle's energy and its wavelength. For a confined particle such as an electron in an atom, the wave functions that have well defined energies have the form of a standing wave. States having well-defined energies are called stationary states because they are the states that do not change in time. Informally, these states correspond to a whole number of wavelengths of the wavefunction along a closed path (a path that ends where it started), such as a circular orbit around an atom, where the number of wavelengths gives the type of atomic orbital (0 for s-orbitals, 1 for p-orbitals and so on). Elementary examples that show mathematically how energy levels come about are the particle in a box and the quantum harmonic oscillator (a numerical sketch of the former follows below). Any superposition (linear combination) of energy states is also a quantum state, but such states change with time and do not have well-defined energies.
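As a concrete instance of the particle in a box, an electron confined to a one-dimensional box of width L has levels E_n = n^2 h^2 / (8 m L^2); only whole numbers of half-wavelengths fit in the box, which is what makes the spectrum discrete. A minimal numeric sketch, for an assumed illustrative box width of 1 nm:

```python
H = 6.626e-34    # Planck constant, J*s
M_E = 9.109e-31  # electron mass, kg
EV = 1.602e-19   # joules per electronvolt
L = 1e-9         # box width: 1 nm, an assumed illustrative size

def level_ev(n: int) -> float:
    """Energy of the nth level of a 1-D infinite square well, in eV."""
    return n**2 * H**2 / (8 * M_E * L**2) / EV

for n in (1, 2, 3):
    print(f"n={n}: {level_ev(n):.2f} eV")  # ~0.38, 1.50, 3.38 eV
```

Note that the levels grow as n^2, so the spacing between adjacent levels increases with n, and shrinking the box widens all the gaps: the confinement scale sets the energy scale.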
A measurement of the energy results in the collapse of the wavefunction, which results in a new state that consists of just a single energy state. Measurement of the possible energy levels of an object is called spectroscopy. == History == The first evidence of quantization in atoms was the observation of spectral lines in light from the sun in the early 1800s by Joseph von Fraunhofer and William Hyde Wollaston. The notion of energy levels was proposed in 1913 by Danish physicist Niels Bohr in the Bohr theory of the atom. The modern quantum mechanical theory giving an explanation of these energy levels in terms of the Schrödinger equation was advanced by Erwin Schrödinger and Werner Heisenberg in 1926. == Atoms == === Intrinsic energy levels === In the formulas for energy of electrons at various levels given below in an atom, the zero point for energy is set when the electron in question has completely left the atom; i.e. when the electron's principal quantum number n = ∞. When the electron is bound to the atom in any closer value of n, the electron's energy is lower and is considered negative. ==== Orbital state energy level: atom/ion with nucleus + one electron ==== Assume there is one electron in a given atomic orbital in a hydrogen-like atom (ion). The energy of its state is mainly determined by the electrostatic interaction of the (negative) electron with the (positive) nucleus. The energy levels of an electron around a nucleus are given by: E n = − h c R ∞ Z 2 n 2 {\displaystyle E_{n}=-hcR_{\infty }{\frac {Z^{2}}{n^{2}}}} (typically between 1 eV and 103 eV), where R∞ is the Rydberg constant, Z is the atomic number, n is the principal quantum number, h is the Planck constant, and c is the speed of light. For hydrogen-like atoms (ions) only, the Rydberg levels depend only on the principal quantum number n. This equation is obtained from combining the Rydberg formula for any hydrogen-like element (shown below) with E = hν = hc / λ assuming that the principal quantum number n above = n1 in the Rydberg formula and n2 = ∞ (principal quantum number of the energy level the electron descends from, when emitting a photon). The Rydberg formula was derived from empirical spectroscopic emission data. 1 λ = R Z 2 ( 1 n 1 2 − 1 n 2 2 ) {\displaystyle {\frac {1}{\lambda }}=RZ^{2}\left({\frac {1}{n_{1}^{2}}}-{\frac {1}{n_{2}^{2}}}\right)} An equivalent formula can be derived quantum mechanically from the time-independent Schrödinger equation with a kinetic energy Hamiltonian operator using a wave function as an eigenfunction to obtain the energy levels as eigenvalues, but the Rydberg constant would be replaced by other fundamental physics constants. ==== Electron–electron interactions in atoms ==== If there is more than one electron around the atom, electron–electron interactions raise the energy level. These interactions are often neglected if the spatial overlap of the electron wavefunctions is low. For multi-electron atoms, interactions between electrons cause the preceding equation to be no longer accurate as stated simply with Z as the atomic number. A simple (though not complete) way to understand this is as a shielding effect, where the outer electrons see an effective nucleus of reduced charge, since the inner electrons are bound tightly to the nucleus and partially cancel its charge. This leads to an approximate correction where Z is substituted with an effective nuclear charge symbolized as Zeff that depends strongly on the principal quantum number. 
E n , ℓ = − h c R ∞ Z e f f 2 n 2 {\displaystyle E_{n,\ell }=-hcR_{\infty }{\frac {{Z_{\rm {eff}}}^{2}}{n^{2}}}} In such cases, the orbital types (determined by the azimuthal quantum number ℓ) as well as their levels within the atom affect Zeff and therefore also affect the various atomic electron energy levels. The Aufbau principle of filling an atom with electrons for an electron configuration takes these differing energy levels into account. For filling an atom with electrons in the ground state, the lowest energy levels are filled first, consistent with the Pauli exclusion principle, the Aufbau principle, and Hund's rule. ==== Fine structure splitting ==== Fine structure arises from relativistic kinetic energy corrections, spin–orbit coupling (an electrodynamic interaction between the electron's spin and motion and the nucleus's electric field) and the Darwin term (contact term interaction of s shell electrons inside the nucleus). These affect the levels by a typical order of magnitude of 10−3 eV. ==== Hyperfine structure ==== This even finer structure is due to electron–nucleus spin–spin interaction, resulting in a change in the energy levels by a typical order of magnitude of 10−4 eV. === Energy levels due to external fields === ==== Zeeman effect ==== There is an interaction energy associated with the magnetic dipole moment, μL, arising from the electronic orbital angular momentum, L, given by U = − μ L ⋅ B {\displaystyle U=-{\boldsymbol {\mu }}_{L}\cdot \mathbf {B} } with − μ L = e ℏ 2 m L = μ B L {\displaystyle -{\boldsymbol {\mu }}_{L}={\dfrac {e\hbar }{2m}}\mathbf {L} =\mu _{B}\mathbf {L} } . The magnetic moment arising from the electron spin must additionally be taken into account: due to relativistic effects (Dirac equation), there is a magnetic moment, μS, arising from the electron spin − μ S = − μ B g S S {\displaystyle -{\boldsymbol {\mu }}_{S}=-\mu _{\text{B}}g_{S}\mathbf {S} } , with gS the electron-spin g-factor (about 2), resulting in a total magnetic moment, μ, μ = μ L + μ S {\displaystyle {\boldsymbol {\mu }}={\boldsymbol {\mu }}_{L}+{\boldsymbol {\mu }}_{S}} . The interaction energy therefore becomes U B = − μ ⋅ B = μ B B ( M L + g S M S ) {\displaystyle U_{B}=-{\boldsymbol {\mu }}\cdot \mathbf {B} =\mu _{\text{B}}B(M_{L}+g_{S}M_{S})} . ==== Stark effect ==== == Molecules == Chemical bonds between atoms in a molecule form because they make the situation more stable for the involved atoms, which generally means the sum energy level for the involved atoms in the molecule is lower than if the atoms were not so bonded. As separate atoms approach each other to covalently bond, their orbitals affect each other's energy levels to form bonding and antibonding molecular orbitals. The energy level of the bonding orbitals is lower, and the energy level of the antibonding orbitals is higher. For the bond in the molecule to be stable, the covalent bonding electrons occupy the lower energy bonding orbital, which may be signified by such symbols as σ or π depending on the situation. Corresponding anti-bonding orbitals can be signified by adding an asterisk to get σ* or π* orbitals. A non-bonding orbital in a molecule is an orbital with electrons in outer shells which do not participate in bonding, and its energy level is the same as that of the constituent atom. Such orbitals can be designated as n orbitals. The electrons in an n orbital are typically lone pairs. In polyatomic molecules, different vibrational and rotational energy levels are also involved.
Roughly speaking, a molecular energy state (i.e., an eigenstate of the molecular Hamiltonian) is the sum of the electronic, vibrational, rotational, nuclear, and translational components, such that: E = E electronic + E vibrational + E rotational + E nuclear + E translational {\displaystyle E=E_{\text{electronic}}+E_{\text{vibrational}}+E_{\text{rotational}}+E_{\text{nuclear}}+E_{\text{translational}}} where Eelectronic is an eigenvalue of the electronic molecular Hamiltonian (the value of the potential energy surface) at the equilibrium geometry of the molecule. The molecular energy levels are labelled by the molecular term symbols. The specific energies of these components vary with the specific energy state and the substance. === Energy level diagrams === There are various types of energy level diagrams for bonds between atoms in a molecule. Examples include molecular orbital diagrams, Jablonski diagrams, and Franck–Condon diagrams. == Energy level transitions == Electrons in atoms and molecules can change (make transitions in) energy levels by emitting or absorbing a photon (of electromagnetic radiation), whose energy must be exactly equal to the energy difference between the two levels. Electrons can also be completely removed from a chemical species such as an atom, molecule, or ion. Complete removal of an electron from an atom can be a form of ionization, which is effectively moving the electron out to an orbital with an infinite principal quantum number, in effect so far away as to have practically no further effect on the remaining atom (ion). For various types of atoms, there are 1st, 2nd, 3rd, etc. ionization energies for removing the 1st, then the 2nd, then the 3rd, etc. of the highest energy electrons, respectively, from the atom originally in the ground state. Energy in corresponding opposite quantities can also be released, sometimes in the form of photon energy, when electrons are added to positively charged ions or sometimes atoms. Molecules can also undergo transitions in their vibrational or rotational energy levels. Energy level transitions can also be nonradiative, meaning emission or absorption of a photon is not involved. If an atom, ion, or molecule is at the lowest possible energy level, it and its electrons are said to be in the ground state. If it is at a higher energy level, it is said to be excited, or any electrons that have higher energy than the ground state are excited. Such a species can be excited to a higher energy level by absorbing a photon whose energy is equal to the energy difference between the levels. Conversely, an excited species can go to a lower energy level by spontaneously emitting a photon equal to the energy difference. A photon's energy is equal to the Planck constant (h) times its frequency (f) and thus is proportional to its frequency, or inversely to its wavelength (λ): ΔE = hf = hc / λ, since c, the speed of light, equals fλ. Correspondingly, many kinds of spectroscopy are based on detecting the frequency or wavelength of the emitted or absorbed photons to provide information on the material analyzed, including information on the energy levels and electronic structure of materials obtained by analyzing the spectrum. An asterisk is commonly used to designate an excited state.
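As a worked instance of the relation ΔE = hc/λ, consider the hydrogen n = 3 → n = 2 transition; combining it with the hydrogen-like level formula from earlier (E_n = −13.6 eV / n^2) gives the red Balmer-alpha line:

```python
H = 6.626e-34        # Planck constant, J*s
C = 2.998e8          # speed of light, m/s
EV = 1.602e-19       # joules per electronvolt
RYDBERG_EV = 13.606  # hydrogen ground-state binding energy, eV

def hydrogen_transition_wavelength_nm(n_from: int, n_to: int) -> float:
    """Photon wavelength for a hydrogen transition, using E_n = -13.606 eV / n^2."""
    delta_e_joules = RYDBERG_EV * EV * (1 / n_to**2 - 1 / n_from**2)
    return H * C / delta_e_joules * 1e9

print(hydrogen_transition_wavelength_nm(3, 2))  # ~656 nm: visible red light
```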
An electron transition in a molecule's bond from a ground state to an excited state may have a designation such as σ → σ*, π → π*, or n → π*, meaning excitation of an electron from a σ bonding to a σ antibonding orbital, from a π bonding to a π antibonding orbital, or from an n non-bonding to a π antibonding orbital. Reverse electron transitions for all these types of excited molecules are also possible, returning them to their ground states; these can be designated as σ* → σ, π* → π, or π* → n. A transition in an energy level of an electron in a molecule may be combined with a vibrational transition and called a vibronic transition. A vibrational and rotational transition may be combined by rovibrational coupling. In rovibronic coupling, electron transitions are simultaneously combined with both vibrational and rotational transitions. Photons involved in transitions may have energy of various ranges in the electromagnetic spectrum, such as X-ray, ultraviolet, visible light, infrared, or microwave radiation, depending on the type of transition. In a very general way, energy level differences between electronic states are larger, differences between vibrational levels are intermediate, and differences between rotational levels are smaller, although there can be overlap. Translational energy levels are practically continuous and can be calculated as kinetic energy using classical mechanics. Higher temperature causes fluid atoms and molecules to move faster, increasing their translational energy, and thermally excites molecules to higher average amplitudes of vibrational and rotational modes (that is, it excites the molecules to higher internal energy levels). This means that as temperature rises, translational, vibrational, and rotational contributions to molecular heat capacity let molecules absorb heat and hold more internal energy. Conduction of heat typically occurs as molecules or atoms collide, transferring heat between each other. At even higher temperatures, electrons can be thermally excited to higher energy orbitals in atoms or molecules. A subsequent drop of an electron to a lower energy level can release a photon, causing a possibly coloured glow. An electron further from the nucleus has higher potential energy than an electron closer to the nucleus, and is thus less tightly bound to the nucleus, since its potential energy is negative and inversely proportional to its distance from the nucleus. == Crystalline materials == Crystalline solids are found to have energy bands, instead of or in addition to energy levels. Electrons can take on any energy within an unfilled band. At first this appears to be an exception to the requirement for energy levels. However, as shown in band theory, energy bands are actually made up of many discrete energy levels which are too close together to resolve. Within a band the number of levels is of the order of the number of atoms in the crystal, so although electrons are actually restricted to these energies, they appear to be able to take on a continuum of values. The important energy levels in a crystal are the top of the valence band, the bottom of the conduction band, the Fermi level, the vacuum level, and the energy levels of any defect states in the crystal. == See also == Perturbation theory (quantum mechanics) Atomic clock Computational chemistry == References ==
Wikipedia/Energy_state
Energy laws govern the use and taxation of energy, both renewable and non-renewable. These laws are the primary authorities (such as caselaw, statutes, rules, regulations and edicts) related to energy. In contrast, energy policy refers to the policy and politics of energy. Energy law includes the legal provision for oil, gasoline, and "extraction taxes." The practice of energy law includes contracts for siting and extraction, licenses for the acquisition and ownership rights in oil and gas both under the soil before discovery and after its capture, and adjudication regarding those rights. == Renewable energy law == == International law == There is a growing academic interest in international energy law, including continuing legal education seminars, treatises, law reviews, and graduate courses. Along the same lines, there has been growing interest in energy-specific issues and their particular relation to international trade and connected organizations like the World Trade Organization. There are also periodic international meetings such as the World Forum on Energy Regulation. == Africa == The Regional Association of Energy Regulators for Eastern and Southern Africa is an international nonprofit organization dedicated to promoting cooperation between the various countries on energy law, policy, and development. === Egypt === Energy in Egypt is regulated by the Ministry of Electricity and Renewable Energy, the government ministry in charge of managing and regulating the generation, transmission, and distribution of electricity in Egypt. Its headquarters are in Cairo. The current minister as of 2020 is Mohamed Shaker. The ministry was established in 1964 with presidential decree No. 147. The famous Aswan High Dam, which produces electricity, is government owned and regulated; its construction required the relocation of the Abu Simbel temples and the Temple of Dendur. Egypt has established a separate power authority to build and operate a nuclear power plant. === Ghana === Ghana has a regulatory body over energy, the Energy Commission. === Nigeria === Nigeria's government owns the Nigerian National Petroleum Corporation. The Lagos Business School has a number of academic offerings related to the legal, economic, and business management of energy, particularly oil and petroleum, a major sector of Nigeria's economy. Nigeria heavily subsidises petrol, a policy that mainly benefits the wealthy. On 1 January 2012, the Nigerian government headed by president Goodluck Ebele Jonathan tried to end the subsidy on petrol and deregulate oil prices, announcing a new petrol price of US$0.88/litre, up from the old subsidised price of US$0.406/litre in Lagos; in areas distant from Lagos, petrol was priced at US$1.25/litre. This led to the country's longest general strike (eight days), riots, and Arab Spring-like protests; on 16 January 2012 the government capitulated, announcing a new price of US$0.60/litre, with an envisaged price of US$2.0/litre in distant areas. In May 2016 the Buhari administration increased fuel prices again to NGN 145 per litre ($0.43 at black market rates for the currency). In September 2020, the government announced an increase in the pump price of petrol to NGN 151.56 per litre from NGN 148. === Uganda === Uganda has adopted a new nuclear power law, which it hopes "will boost technical cooperation between the country and the International Atomic Energy Agency," according to "a senior agency official" from that African country.
== Australia == Energy is big business in Australia. Australian Energy Producers represents 98% of the oil and gas producers in Australia. == Bangladesh == == Canada == Canada has an extensive body of energy law, both federal and provincial, especially in Alberta. These statutes include: Alternative Fuels Act (1995, c. 20) Cooperative Energy Act (1980-81-82-83, c. 108) Energy Administration Act (R.S., 1985, c. E-6) Energy Monitoring Act (R.S., 1985, c. E-8) Nuclear Energy Act (R.S., 1985, c. A-16) Canada Oil and Gas Operations Act (R.S., 1985, c. O-7) Canada Petroleum Resources Act (1985, c. 36 (2nd Supp.)) National Energy Board Act (R.S., 1985, c. N-7) Electricity and Gas Inspection Act (R.S., 1985, c. E-4) There is some academic interest in the energy law of Canada, with looseleaf periodical services, monographs, and consultation with lawyers specializing in that practice all available. The Supreme Court of Canada has issued some Canadian energy case law. Canada's energy laws are so extensive and complicated in large part because of its government-owned energy resources: The oil sands are gold not only for the oil companies, but also for Alberta's provincial government, which owns the mineral rights to virtually all the land and has encouraged the industry for three-quarters of a century. Canada and the province of Quebec also own extensive hydroelectric dam facilities, which have generated not only power but controversy. == China == == European Union == European energy law has been focused on the legal mechanisms for managing short-term disruptions to the continent's energy supply, such as Germany's 1974 Law to Secure the Energy Supply. The European integrated hydrogen project was a European Union project to integrate United Nations Economic Commission for Europe (ECE) guidelines and create a basis of ECE regulation of hydrogen vehicles and the necessary infrastructure, replacing national legislation and regulations. The aim of this project was to enhance the safety of hydrogen vehicles and to harmonize their licensing and approval process. Five nations have created the EurObserv'ER energy consortium. The EU has also created an Energy Community to extend its policies into Southeastern Europe. Austria hosts the annual World Sustainable Energy Days. The EU regulates motor vehicle emissions; see Directive 80/1269/EEC. === Germany === Germany's renewable energy law mandates the use of renewable energy through its taxes and tariffs. It promotes the development of renewable energy sources via a system of feed-in tariffs. It regulates the amount of energy generated by the producer and the type of renewable energy source. It also creates incentives to encourage technological advances and cost reductions. The results have been startling: on 6 June 2014, more than half of the nation's energy used on that date came from solar power. Despite regulatory processes adding more renewable energy to its energy mix, Germany's electric grid has become more reliable, not less. The German government has proposed abandoning "its planned phase-out of nuclear energy to help rein in surging electricity prices and protect the environment, according to proposals drawn up by an energy task force under Economy Minister Michael Glos." The German Green Party has opposed nuclear energy, as well as the market power of German utilities, claiming the "energy shortfall" has been artificially created. There is significant academic interest in German energy law.
A chart summarizing German energy legislation is available. === Italy === Italy has few natural resources, lacking substantial deposits of iron, coal, or oil. Proven natural gas reserves, mainly in the Po Valley and offshore Adriatic, constitute the country's most important mineral resource. More than 80% of the country's energy sources are imported. The energy sector is highly dependent on imports from abroad: in 2006 the country imported more than 86% of its total energy consumption. In the last decade, Italy has become one of the world's largest producers of renewable energy, ranking as the world's fifth largest solar energy producer in 2009 and the sixth largest producer of wind power in 2008. In 1987, after the Chernobyl disaster, a large majority of Italians passed a referendum opting for phasing out nuclear power. The government responded by closing existing nuclear power plants and putting a complete halt to the national nuclear program. Italy also imports about 16% of its electricity needs from France, amounting to 6.5 GWe, which makes it the world's biggest importer of electricity. Due to the country's reliance on expensive fossil fuels and imports, Italians pay approximately 45% more than the EU average for electricity. In 2004, a new Energy Law brought the possibility of joint ventures with foreign companies to build nuclear power plants and import electricity. In 2005, Italy's power company, ENEL, made an agreement with Electricite de France for 200 MWe from a nuclear reactor in France and potentially an additional 1,000 MWe from new construction. As part of the agreement, ENEL received a 12.5% stake in the project and direct involvement in design, construction, and operation of the plants. In another move, ENEL also bought 66% of the Slovak Electric utility that operates six nuclear reactors. As part of this agreement, ENEL will pay the Slovak government EUR 1.6 billion to complete a nuclear power plant in Mochovce, which has a gross output of 942 MWe. With these agreements, Italy has managed to access nuclear power without placing reactors on Italian territory. === Lithuania === The nation of Lithuania has an energy law, Energetikos teisė. === Ukraine === In Ukraine, renewable energy projects are supported by a feed-in tariff system. The law of Ukraine "On alternative sources of energy" refers to alternative energy sources: solar, wind, geothermal, hydrothermal, marine and hydrokinetic energy, hydroelectricity, biomass, landfill biogas and others. The Ukrainian National Energy and Utilities Regulatory Commission and the State Agency on Energy Efficiency and Energy Saving of Ukraine are the main renewable energy regulation authorities. Reforms have been made by the Ukrainian government in the alternative energy sphere. There is a need for energy-saving services in Ukraine; the potential reaches about EUR 5 billion in state-owned buildings alone. Ukraine has a separate regulatory agency to manage the Chernobyl Exclusion Zone. == Other European countries == Albania has established the Albanian Institute of Oil and Gas. There is significant geothermal power in Iceland; about 80% of the nation's energy needs are met by geothermal sources, all of which are owned or regulated by the government. == India == == Iraq == The Iraqi Oil Ministry awards contracts to only a few companies. These contracts are called production sharing agreements. As of July 2014, there are 23 established oil companies, but only 17 banking corporations in Iraq.
== Israel == The Israel Energy Sources Law, 5750-1989 ("Energy Law"), defines what is considered "energy" and an "energy source", and its purpose is to regulate the exploitation of energy sources whilst ensuring the efficiency of their use. Under the Energy Law, certain methods of measurement have been nominated by the Israeli legislature in order to regulate the efficiency of energy use, as well as which entities shall be entitled to the pursuit and use of such sources. Furthermore, in Israel there are certain additional laws that deal with the use of energy sources, such as the Natural Gas Sector Law, 5762-2002, which provides the conditions for the development of the natural gas sphere in Israel, and the Electricity Sector Law, 5756–1996, which established the "Public Utility Authority – Electricity", which publishes directives and regulations for the use of renewable electricity sources, including solar energy and hydro-energy. == Japan == Prior to the earthquake and tsunami of March 2011, and the nuclear disasters that resulted from it, Japan generated 30% of its electrical power from nuclear reactors and planned to increase that share to 40%. Nuclear energy was a national strategic priority in Japan, but there had been concern about the ability of Japan's nuclear plants to withstand seismic activity. The Kashiwazaki-Kariwa Nuclear Power Plant was completely shut down for 21 months following an earthquake in 2007. The 2011 earthquake and tsunami caused the failure of cooling systems at the Fukushima I Nuclear Power Plant on March 11 and a nuclear emergency was declared. Some 140,000 residents were evacuated. The total amount of radioactive material released is unclear, as the crisis is ongoing. On 6 May 2011, Prime Minister Naoto Kan ordered that the Hamaoka Nuclear Power Plant be shut down, as an earthquake of magnitude 8.0 or higher was considered likely to hit the area within the following 30 years. Problems in stabilizing the Fukushima I nuclear plant had hardened attitudes to nuclear power. As of June 2011, "more than 80 percent of Japanese now say they are anti-nuclear and distrust government information on radiation". As of October 2011, there have been electricity shortages, but Japan survived the summer without the extensive blackouts that had been predicted. An energy white paper, approved by the Japanese Cabinet in October 2011, says "public confidence in safety of nuclear power was greatly damaged" by the Fukushima disaster, and calls for a reduction in the nation's reliance on nuclear power. Many of Japan's nuclear plants have been closed, or their operation has been suspended for safety inspections. The last of Japan's 54 reactors (Tomari-3) went offline for maintenance on May 5, 2012, leaving Japan completely without nuclear-produced electrical power for the first time since 1970. Despite protests, on 1 July 2012 unit 3 of the Ōi Nuclear Power Plant was restarted. As of September 2012, Ōi units 3 and 4 are Japan's only operating nuclear power plants, although the city and prefecture of Osaka have requested they be shut down. The United States-Japan Joint Nuclear Energy Action Plan is a bilateral agreement aimed at putting in place a framework for the joint research and development of nuclear energy technology, which was signed on April 18, 2007. It is believed that the agreement is the first that the US has signed to develop nuclear power technologies with another country, although Japan has agreements with Australia, Canada, China, France, and the United Kingdom.
Under the plan, the United States and Japan would each conduct research into fast reactor technology, fuel cycle technology, advanced computer simulation and modeling, small and medium reactors, safeguards and physical protection, and nuclear waste management, which is to be coordinated by a joint steering committee. The treaty's progress has been in limbo since the Fukushima I nuclear accidents. The Japan Oil, Gas and Metals National Corporation (JOGMEC) is a government-owned company involved in fossil-fuel energy exploration, amongst other activities. In 2013, it carried out the first extraction of methane clathrate from seabed deposits. == Malaysia == Malaysia heavily regulates its energy sector. From 1982 to 1992, the Government of Sabah owned Sabah Gas Industries, based in Labuan, Malaysia, for the downstream operations of Sabah's natural gas resources, before the company was put up for privatization. Its methanol plant was sold to Petronas and operates today as Petronas Methanol (Labuan) Sdn Bhd. The power station was sold to Sabah Electricity. == Mexico == Mexico had numerous laws that subsidized oil until c. 2017. PEMEX, the government company in charge of selling oil in Mexico, is subsidized by the Mexican government. This serves to quell inflationary pressures in Mexico. Mexico buys much of its gasoline and diesel from the United States and resells it at US$98 per barrel. Many residents of US border communities cross the border to buy fuel in Mexico, thereby enjoying a cheaper fuel subsidy at the expense of Mexican taxpayers. This has caused frequent supply shortages for Mexican drivers at a number of filling stations along the border, especially for truck and bus drivers who use diesel. In 2017, Mexico ended its oil industry subsidies, leading to increased prices and widespread protests throughout the country. == Pakistan == == Philippines == Philippine law has provisions concerning energy, fossil fuels, and renewable energy. Energy law in the Philippines is important because that nation is one of the fastest growing in Asia, and has over 90 million residents. The earliest Philippine energy laws date from 1903, during the American colonial period: Act No. 667, concerning franchises for utilities, and Act No. 1022, which allowed such franchises to be mortgaged. A uniform law in 1929 allowed for new utilities. The first coal mining law, known as the Coal Land Act, dates to 1917. Oil exploration was allowed in a 1920 law. The Mining Act (1936) has been amended several times by acts and decrees. The first hydroelectric power law dates from 1933 and has been updated since, including a law that created the National Power Corporation, which was amended several times through 1967. The Renewable Energy Law (2009) encourages the development and use of non-traditional energy sources. == Russia == == Saudi Arabia == Saudi Arabia has some laws concerning energy, especially oil and gas law. Saudi Arabia is the largest oil producer in the world and therefore its energy law has great influence over the world's overall energy supply. Under the Basic Law of Saudi Arabia, all its oil and gas wealth belongs to the government: "All Allah's bestowed wealth, be it under the ground, on the surface or in national territorial waters, in the land or maritime domains under the state's control, are the property of the state as defined by law. The law defines means of exploiting, protecting, and developing such wealth in the interests of the state, its security, and economy."
Energy taxes are also specifically allowed; Article 20 of the basic law states, "Taxes and fees are to be imposed on a basis of justice and only when the need for them arises. Imposition, amendment, revocation, and exemption are only permitted by law." Two ministries of the Kingdom of Saudi Arabia share responsibility for the energy sector: the Ministry of Energy and the Ministry of Water and Electricity. The country's laws have also established other agencies that have some legal powers, but are not strictly regulatory. These include Saudi Aramco, originally a joint venture between the Kingdom and California-Arabian Standard Oil, but now wholly owned by the Kingdom, and Saudi Consolidated Electricity Companies (SCECOs). == Singapore == == South Korea == == Sri Lanka == Sri Lanka’s energy law has undergone significant reforms to enhance efficiency, attract investment, and promote renewable energy. With a growing population and increasing energy demand, these reforms are critical for sustainable development. The earliest laws governing energy in Sri Lanka include the Ceylon Electricity Board Act, No. 17 of 1969, which established the state-owned Ceylon Electricity Board (CEB) to manage electricity generation, transmission, and distribution. This act was a cornerstone in centralising the country’s electricity sector but faced criticism for inefficiencies and financial challenges. Significant updates came with the Sri Lanka Electricity Act, No. 20 of 2009, which aimed to introduce more competition and regulatory oversight. However, it was the Sri Lanka Electricity Act, No. 36 of 2024, that marked a major overhaul of the sector. This act established the National Electricity Advisory Council and designated the Public Utilities Commission of Sri Lanka (PUCSL) as the main regulator. The 2024 Act promotes market competition, facilitates private sector investment, and encourages the use of renewable energy sources. == Turkey == Turkey's old Petroleum Law was in effect for 70 years until 2013, when Turkey enacted a new Petroleum Law, number 6491. Amongst other provisions, the new law extends the permissible duration of drilling permits, reduces a fee, and eliminates a state monopoly. == United Kingdom == The United Kingdom left the European Union in January 2020. The most recent United Kingdom energy law passed is the Great British Energy Act 2025. == United States == This section concerns the law of the United States, as well as the states that are the most populous or largest producers of energy. In the United States, energy is regulated extensively through the United States Department of Energy, as well as state regulatory agencies. Every state, the Federal government, and the District of Columbia collect some motor vehicle excise taxes. Specifically, these are excise taxes on gasoline, diesel fuel, and gasohol. While many states in the western U.S. rely to a great degree on severance taxes (taxes on mineral extraction), most states get a relatively small amount of their revenue from such sources.
== See also == Effects of 2000s energy crisis === General energy topics === Energy form Energy conservation Energy economics Energy markets and energy derivatives Hydraulic fracturing Induced seismicity List of energy topics World energy resources and consumption World oil market chronology from 2003 === Specific laws and policies === Atomic Energy Basic Law Correlative rights doctrine Cuius est solum eius est usque ad coelum et ad inferos Easement Electric bicycle laws Energy policy of the European Union Energy Charter Treaty Energy Star Energy security Feed-in Tariff Gasoline and diesel usage and pricing List of energy regulatory bodies List of environmental lawsuits Nuclear energy policy Production sharing agreement === Academic think-tanks and associations === Alliance to Save Energy Centre for Energy, Petroleum and Mineral Law and Policy Professional Petroleum Data Management Association Renewable Energy and Energy Efficiency Partnership RETScreen The Energy and Resources Institute Université Laval University of Wyoming === Renewable and alternative energy sources === Alternative propulsion Clean Energy Trends Clean Tech Nation Concentrated solar power Efficient energy use Electric vehicle Geothermal power Global warming Green banking Hydro One Intermittent power source International Symposium on Alcohol Fuels List of renewable energy topics by country Ocean energy Passive solar building design Photovoltaic power station Plug-in hybrid Renewable energy commercialization Renewable heat Solar power Sustainable design The Clean Tech Revolution V2G === Awards and standards === Ashden Awards ISO 14001 Leadership in Energy and Environmental Design (LEED) == References == == Further reading == Klaus Bosselmann, The Principle of Sustainability (Burlington, VT: Ashgate 2008) ISBN 978-0-7546-7355-2. G. T. Goodman, W. D. Rowe, Energy Risk Management (New York: Academic Press 1979) ISBN 978-0-122896804. == External links == U.S. Energy Information Administration website The Institute for Energy Law website Section of Environment, Energy, and Resources of the American Bar Association website Energy law at Cornell Law School website LLM in Energy Law at Vermont Law School Energy Law Journal Journal of World Energy Law & Business, the peer-reviewed, official journal of the Association of International Petroleum Negotiators, published by Oxford University Press Energy Law Net, an interactive website for energy lawyers Pace University Energy & Climate Center website United States energy law, from FindLaw.com website Energy industry listings for United States, from FindLaw.com website Energy Industry Today website
Wikipedia/Energy_law
Rotational energy or angular kinetic energy is kinetic energy due to the rotation of an object and is part of its total kinetic energy. Looking at rotational energy separately around an object's axis of rotation, the following dependence on the object's moment of inertia is observed: E rotational = 1 2 I ω 2 {\displaystyle E_{\text{rotational}}={\tfrac {1}{2}}I\omega ^{2}} where ω is the angular velocity and I is the moment of inertia around the axis of rotation. The mechanical work required for or applied during rotation is the torque times the rotation angle. The instantaneous power of an angularly accelerating body is the torque times the angular velocity. For free-floating (unattached) objects, the axis of rotation is commonly around its center of mass. Note the close relationship between the result for rotational energy and the energy held by linear (or translational) motion: E translational = 1 2 m v 2 {\displaystyle E_{\text{translational}}={\tfrac {1}{2}}mv^{2}} In the rotating system, the moment of inertia, I, takes the role of the mass, m, and the angular velocity, ω {\displaystyle \omega } , takes the role of the linear velocity, v. The rotational energy of a rolling cylinder varies from one half of the translational energy (if it is solid) to the same as the translational energy (if it is hollow). An example is the calculation of the rotational kinetic energy of the Earth. As the Earth has a sidereal rotation period of 23.93 hours, it has an angular velocity of 7.29×10−5 rad·s−1. The Earth has a moment of inertia, I = 8.04×1037 kg·m2. Therefore, it has a rotational kinetic energy of 2.14×1029 J. Part of the Earth's rotational energy can also be tapped using tidal power. The friction of the two global tidal waves dissipates rotational energy, infinitesimally slowing down Earth's angular velocity ω. Due to the conservation of angular momentum, this process transfers angular momentum to the Moon's orbital motion, increasing its distance from Earth and its orbital period (see tidal locking for a more detailed explanation of this process). == See also == Flywheel List of energy storage projects Rigid rotor Rotational spectroscopy == Notes == == References == Resnick, R. and Halliday, D. (1966) PHYSICS, Section 12-5, John Wiley & Sons Inc.
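The Earth example above can be reproduced in a few lines. The following Python sketch recomputes ω and E rotational from the quoted sidereal period and moment of inertia:

```python
import math

# A short check of the worked example above: Earth's rotational kinetic
# energy from its sidereal period and moment of inertia.

SIDEREAL_PERIOD_S = 23.93 * 3600        # sidereal day, seconds
I_EARTH = 8.04e37                       # Earth's moment of inertia, kg·m^2

omega = 2 * math.pi / SIDEREAL_PERIOD_S # angular velocity
e_rot = 0.5 * I_EARTH * omega**2        # E = (1/2) I ω^2

print(f"omega = {omega:.3e} rad/s")     # ~7.29e-05, as quoted above
print(f"E_rot = {e_rot:.3e} J")         # ~2.14e+29 J, as quoted above
```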
Wikipedia/Rotational_energy
Energy (from Ancient Greek ἐνέργεια (enérgeia) 'activity') is the quantitative property that is transferred to a body or to a physical system, recognizable in the performance of work and in the form of heat and light. Energy is a conserved quantity—the law of conservation of energy states that energy can be converted in form, but not created or destroyed. The unit of measurement for energy in the International System of Units (SI) is the joule (J). Forms of energy include the kinetic energy of a moving object, the potential energy stored by an object (for instance due to its position in a field), the elastic energy stored in a solid object, chemical energy associated with chemical reactions, the radiant energy carried by electromagnetic radiation, the internal energy contained within a thermodynamic system, and rest energy associated with an object's rest mass. These are not mutually exclusive. All living organisms constantly take in and release energy. The Earth's climate and ecosystem processes are driven primarily by radiant energy from the sun. The energy industry provides the energy required for human civilization to function, which it obtains from energy resources such as fossil fuels, nuclear fuel, and renewable energy. == Forms == The total energy of a system can be subdivided and classified into potential energy, kinetic energy, or combinations of the two in various ways. Kinetic energy is determined by the movement of an object – or the composite motion of the object's components – while potential energy reflects the potential of an object to have motion, generally being based upon the object's position within a field or what is stored within the field itself. While these two categories are sufficient to describe all forms of energy, it is often convenient to refer to particular combinations of potential and kinetic energy as forms in their own right. For example, the sum of translational and rotational kinetic and potential energy within a system is referred to as mechanical energy, whereas nuclear energy refers to the combined potentials within an atomic nucleus from either the nuclear force or the weak force, among other examples. == History == The word energy derives from the Ancient Greek: ἐνέργεια, romanized: energeia, lit. 'activity, operation', which possibly appears for the first time in the work of Aristotle in the 4th century BC. In contrast to the modern definition, energeia was a qualitative philosophical concept, broad enough to include ideas such as happiness and pleasure. In the late 17th century, Gottfried Leibniz proposed the idea of the Latin: vis viva, or living force, which he defined as the product of the mass of an object and its velocity squared; he believed that total vis viva was conserved. To account for slowing due to friction, Leibniz theorized that thermal energy consisted of the motions of the constituent parts of matter, although it would be more than a century until this was generally accepted. The modern analog of this property, kinetic energy, differs from vis viva only by a factor of two. Writing in the early 18th century, Émilie du Châtelet proposed the concept of conservation of energy in the marginalia of her French language translation of Newton's Principia Mathematica, which represented the first formulation of a conserved measurable quantity that was distinct from momentum, and which would later be called "energy". In 1807, Thomas Young was possibly the first to use the term "energy" instead of vis viva, in its modern sense.
Gustave-Gaspard Coriolis described "kinetic energy" in 1829 in its modern sense, and in 1853, William Rankine coined the term "potential energy". The law of conservation of energy was also first postulated in the early 19th century, and applies to any isolated system. It was argued for some years whether heat was a physical substance, dubbed the caloric, or merely a physical quantity, such as momentum. In 1845 James Prescott Joule discovered the link between mechanical work and the generation of heat. These developments led to the theory of conservation of energy, formalized largely by William Thomson (Lord Kelvin) as the field of thermodynamics. Thermodynamics aided the rapid development of explanations of chemical processes by Rudolf Clausius, Josiah Willard Gibbs, and Walther Nernst. It also led to a mathematical formulation of the concept of entropy by Clausius and to the introduction of laws of radiant energy by Jožef Stefan. According to Noether's theorem, the conservation of energy is a consequence of the fact that the laws of physics do not change over time. Thus, since 1918, theorists have understood that the law of conservation of energy is the direct mathematical consequence of the translational symmetry of the quantity conjugate to energy, namely time. == Units of measure == In the International System of Units (SI), the unit of energy is the joule. It is a derived unit that is equal to the energy expended, or work done, in applying a force of one newton through a distance of one metre. However, energy can also be expressed in many other units not part of the SI, such as ergs, calories, British thermal units, kilowatt-hours and kilocalories, which require a conversion factor when expressed in SI units. The SI unit of power, defined as energy per unit of time, is the watt, which is a joule per second. Thus, one joule is one watt-second, and 3600 joules equal one watt-hour. The CGS energy unit is the erg and the imperial and US customary unit is the foot pound. Other energy units such as the electronvolt, food calorie or thermodynamic kcal (based on the temperature change of water in a heating process), and BTU are used in specific areas of science and commerce. In 1843, English physicist James Prescott Joule, namesake of the unit of measure, discovered that the gravitational potential energy lost by a descending weight attached via a string to a paddle immersed in water was equal to the internal energy gained by the water through friction with the paddle. == Scientific use == === Classical mechanics === In classical mechanics, energy is a conceptually and mathematically useful property, as it is a conserved quantity. Several formulations of mechanics have been developed using energy as a core concept. Work, a function of energy, is force times distance. W = ∫ C F ⋅ d s {\displaystyle W=\int _{C}\mathbf {F} \cdot \mathrm {d} \mathbf {s} } This says that the work ( W {\displaystyle W} ) is equal to the line integral of the force F along a path C; for details see the mechanical work article. Work and thus energy is frame dependent. For example, consider a ball being hit by a bat. In the center-of-mass reference frame, the bat does no work on the ball. But, in the reference frame of the person swinging the bat, considerable work is done on the ball. The total energy of a system is sometimes called the Hamiltonian, after William Rowan Hamilton. The classical equations of motion can be written in terms of the Hamiltonian, even for highly complex or abstract systems.
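As a small numerical illustration of the work integral W = ∫C F · ds above, the Python sketch below evaluates ∫ F dx with the trapezoidal rule for a one-dimensional path; the spring force F(x) = kx and the path endpoints are illustrative assumptions, not values from the text.

```python
# Numerical illustration of W = ∫_C F · ds in one dimension: the work done
# stretching a spring, evaluated with the trapezoidal rule.

k = 50.0                                  # spring constant, N/m (assumed)
n = 1000
xs = [0.2 * i / n for i in range(n + 1)]  # path: x from 0 to 0.2 m
fs = [k * x for x in xs]                  # applied force F(x) = k·x

# trapezoidal rule for W = ∫ F dx
work = sum(0.5 * (fs[i] + fs[i + 1]) * (xs[i + 1] - xs[i]) for i in range(n))
print(work)   # ~1.0 J, matching the closed form (1/2)·k·x_max^2
```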
These classical equations have direct analogs in nonrelativistic quantum mechanics. Another energy-related concept is called the Lagrangian, after Joseph-Louis Lagrange. This formalism is as fundamental as the Hamiltonian, and both can be used to derive the equations of motion or be derived from them. It was invented in the context of classical mechanics, but is generally useful in modern physics. The Lagrangian is defined as the kinetic energy minus the potential energy. Usually, the Lagrange formalism is mathematically more convenient than the Hamiltonian for non-conservative systems (such as systems with friction). Noether's theorem (1918) states that any differentiable symmetry of the action of a physical system has a corresponding conservation law. Noether's theorem has become a fundamental tool of modern theoretical physics and the calculus of variations. A generalisation of the seminal formulations on constants of motion in Lagrangian and Hamiltonian mechanics (1788 and 1833, respectively), it does not apply to systems that cannot be modeled with a Lagrangian; for example, dissipative systems with continuous symmetries need not have a corresponding conservation law. === Chemistry === In the context of chemistry, energy is an attribute of a substance as a consequence of its atomic, molecular, or aggregate structure. Since a chemical transformation is accompanied by a change in one or more of these kinds of structure, it is usually accompanied by a decrease, and sometimes an increase, of the total energy of the substances involved. Some energy may be transferred between the surroundings and the reactants in the form of heat or light; thus the products of a reaction sometimes have more but usually have less energy than the reactants. A reaction is said to be exothermic or exergonic if the final state is lower on the energy scale than the initial state; in the less common case of endothermic reactions the situation is the reverse. Chemical reactions are usually not possible unless the reactants surmount an energy barrier known as the activation energy. The speed of a chemical reaction (at a given temperature T) is related to the activation energy E by the Boltzmann population factor e−E/kT; that is, the probability that a molecule has energy greater than or equal to E at the given temperature T. This exponential dependence of a reaction rate on temperature is known as the Arrhenius equation. The activation energy necessary for a chemical reaction can be provided in the form of thermal energy. === Biology === In biology, energy is an attribute of all biological systems, from the biosphere to the smallest living organism. Within an organism it is responsible for growth and development of a biological cell or organelle of a biological organism. Energy used in respiration is stored by cells in substances such as carbohydrates (including sugars), lipids, and proteins. In human terms, the human equivalent (H-e) (Human energy conversion) indicates, for a given amount of energy expenditure, the relative quantity of energy needed for human metabolism, using as a standard an average human energy expenditure of 6,900 kJ per day and a basal metabolic rate of 80 watts. For example, if our bodies run (on average) at 80 watts, then a light bulb running at 100 watts is running at 1.25 human equivalents (100 ÷ 80) i.e. 1.25 H-e. For a difficult task of only a few seconds' duration, a person can put out thousands of watts, many times the 746 watts in one official horsepower.
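The Boltzmann population factor above is easy to evaluate numerically, and doing so shows why reaction rates are so sensitive to temperature. In the Python sketch below, the 0.5 eV activation energy is an illustrative assumption, not a value from the text.

```python
import math

# Sketch of the Boltzmann population factor e^(-E/kT) behind the Arrhenius
# temperature dependence described above.

K_B = 8.617333e-5   # Boltzmann constant, eV/K

def boltzmann_factor(e_act_ev, temperature_k):
    """Fraction of molecules with energy >= E at temperature T."""
    return math.exp(-e_act_ev / (K_B * temperature_k))

# Doubling the absolute temperature raises the factor enormously for an
# assumed ~0.5 eV activation barrier:
for t in (300.0, 600.0):
    print(t, boltzmann_factor(0.5, t))   # ~4e-09 at 300 K, ~6e-05 at 600 K
```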
For tasks lasting a few minutes, a fit human can generate perhaps 1,000 watts. For an activity that must be sustained for an hour, output drops to around 300; for an activity kept up all day, 150 watts is about the maximum. The human equivalent assists understanding of energy flows in physical and biological systems by expressing energy units in human terms: it provides a "feel" for the use of a given amount of energy. Sunlight's radiant energy is also captured by plants as chemical potential energy in photosynthesis, when carbon dioxide and water (two low-energy compounds) are converted into carbohydrates, lipids, proteins and oxygen. Release of the energy stored during photosynthesis as heat or light may be triggered suddenly by a spark in a forest fire, or it may be made available more slowly for animal or human metabolism when organic molecules are ingested and catabolism is triggered by enzyme action. All living creatures rely on an external source of energy to be able to grow and reproduce – radiant energy from the Sun in the case of green plants and chemical energy (in some form) in the case of animals. The daily 1500–2000 Calories (6–8 MJ) recommended for a human adult are taken as food molecules, mostly carbohydrates and fats, of which glucose (C6H12O6) and stearin (C57H110O6) are convenient examples. The food molecules are oxidized to carbon dioxide and water in the mitochondria C 6 H 12 O 6 + 6 O 2 ⟶ 6 CO 2 + 6 H 2 O {\displaystyle {\ce {C6H12O6 + 6O2 -> 6CO2 + 6H2O}}} C 57 H 110 O 6 + ( 81 1 2 ) O 2 ⟶ 57 CO 2 + 55 H 2 O {\displaystyle {\ce {C57H110O6 + (81 1/2) O2 -> 57CO2 + 55H2O}}} and some of the energy is used to convert ADP into ATP. The rest of the chemical energy of the carbohydrate or fat is converted into heat: the ATP is used as a sort of "energy currency", and some of the chemical energy it contains is used for other metabolism when ATP reacts with OH groups and eventually splits into ADP and phosphate (at each stage of a metabolic pathway, some chemical energy is converted into heat). Only a tiny fraction of the original chemical energy is used for work: gain in kinetic energy of a sprinter during a 100 m race: 4 kJ; gain in gravitational potential energy of a 150 kg weight lifted through 2 metres: 3 kJ; daily food intake of a normal adult: 6–8 MJ. It would appear that living organisms are remarkably inefficient (in the physical sense) in their use of the energy they receive (chemical or radiant energy); most machines manage higher efficiencies. In growing organisms the energy that is converted to heat serves a vital purpose, as it allows the organism's tissues to be highly ordered with regard to the molecules they are built from. The second law of thermodynamics states that energy (and matter) tends to become more evenly spread out across the universe: to concentrate energy (or matter) in one specific place, it is necessary to spread out a greater amount of energy (as heat) across the remainder of the universe ("the surroundings"). Simpler organisms can achieve higher energy efficiencies than more complex ones, but the complex organisms can occupy ecological niches that are not available to their simpler brethren. The conversion of a portion of the chemical energy to heat at each step in a metabolic pathway is the physical reason behind the pyramid of biomass observed in ecology.
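The work and potential-energy figures listed above follow directly from E k = ½mv² and E p = mgh. Below is a quick Python check; the sprinter's mass and top speed are assumed round values chosen only to reproduce the quoted ~4 kJ, while the 150 kg and 2 m figures come from the text.

```python
# Checking the example figures above with E_k = (1/2) m v^2 and E_p = m g h.

g = 9.81                       # gravitational acceleration, m/s^2

sprinter_mass = 80.0           # kg (assumed)
sprinter_speed = 10.0          # m/s, roughly a 100 m race speed (assumed)
e_k = 0.5 * sprinter_mass * sprinter_speed**2
print(f"sprinter kinetic energy: {e_k/1000:.1f} kJ")    # 4.0 kJ

weight_mass = 150.0            # kg, from the text
height = 2.0                   # metres, from the text
e_p = weight_mass * g * height
print(f"weight potential energy: {e_p/1000:.1f} kJ")    # ~2.9 kJ, i.e. ~3 kJ
```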
As an example, to take just the first step in the food chain: of the estimated 124.7 Pg/a of carbon that is fixed by photosynthesis, 64.3 Pg/a (52%) are used for the metabolism of green plants, i.e. reconverted into carbon dioxide and heat. === Earth sciences === In geology, continental drift, mountain ranges, volcanoes, and earthquakes are phenomena that can be explained in terms of energy transformations in the Earth's interior, while meteorological phenomena like wind, rain, hail, snow, lightning, tornadoes and hurricanes are all a result of energy transformations in our atmosphere brought about by solar energy. Sunlight is the main input to Earth's energy budget, which accounts for its temperature and climate stability. Sunlight may be stored as gravitational potential energy after it strikes the Earth, as, for example, when water evaporates from oceans and is deposited upon mountains (where, after being released at a hydroelectric dam, it can be used to drive turbines or generators to produce electricity). Sunlight also drives most weather phenomena, save a few exceptions, such as those generated by volcanic events. An example of a solar-mediated weather event is a hurricane, which occurs when large unstable areas of warm ocean, heated over months, suddenly give up some of their thermal energy to power a few days of violent air movement. In a slower process, radioactive decay of atoms in the core of the Earth releases heat. This thermal energy drives plate tectonics and may lift mountains via orogenesis. This slow lifting represents a kind of gravitational potential energy storage of the thermal energy, which may later be transformed into active kinetic energy during landslides, after a triggering event. Earthquakes also release stored elastic potential energy in rocks, a store that has been produced ultimately from the same radioactive heat sources. Thus, according to present understanding, familiar events such as landslides and earthquakes release energy that has been stored as potential energy in the Earth's gravitational field or elastic strain (mechanical potential energy) in rocks. Prior to this, they represent release of energy that has been stored in heavy atoms since the collapse of long-destroyed supernova stars (which created these atoms). === Cosmology === In cosmology and astronomy the phenomena of stars, novae, supernovae, quasars and gamma-ray bursts are the universe's highest-output energy transformations of matter. All stellar phenomena (including solar activity) are driven by various kinds of energy transformations. Energy in such transformations is either from gravitational collapse of matter (usually molecular hydrogen) into various classes of astronomical objects (stars, black holes, etc.), or from nuclear fusion (of lighter elements, primarily hydrogen). The nuclear fusion of hydrogen in the Sun also releases another store of potential energy which was created at the time of the Big Bang. At that time, according to theory, space expanded and the universe cooled too rapidly for hydrogen to completely fuse into heavier elements. This meant that hydrogen represents a store of potential energy that can be released by fusion. Such a fusion process is triggered by heat and pressure generated from gravitational collapse of hydrogen clouds when they produce stars, and some of the fusion energy is then transformed into sunlight.
=== Quantum mechanics === In quantum mechanics, energy is defined in terms of the energy operator (Hamiltonian) as a time derivative of the wave function. The Schrödinger equation equates the energy operator to the full energy of a particle or a system. Its results can be considered as a definition of the measurement of energy in quantum mechanics. The Schrödinger equation describes the space- and time-dependence of a slowly changing (non-relativistic) wave function of quantum systems. The solution of this equation for a bound system is discrete (a set of permitted states, each characterized by an energy level), which results in the concept of quanta. In the solution of the Schrödinger equation for any oscillator (vibrator) and for electromagnetic waves in a vacuum, the resulting energy states are related to the frequency by Planck's relation: E = h ν {\displaystyle E=h\nu } (where h {\displaystyle h} is the Planck constant and ν {\displaystyle \nu } the frequency). In the case of an electromagnetic wave these energy states are called quanta of light or photons. === Relativity === When calculating kinetic energy (work to accelerate a massive body from zero speed to some finite speed) relativistically – using Lorentz transformations instead of Newtonian mechanics – Einstein found an unexpected by-product of these calculations: an energy term which does not vanish at zero speed. He called it rest energy: energy which every massive body must possess even when being at rest. The amount of energy is directly proportional to the mass of the body: E 0 = m 0 c 2 , {\displaystyle E_{0}=m_{0}c^{2},} where m0 is the rest mass of the body, c is the speed of light in vacuum, and E 0 {\displaystyle E_{0}} is the rest energy. For example, consider electron–positron annihilation, in which the rest energy of these two individual particles (equivalent to their rest mass) is converted to the radiant energy of the photons produced in the process. In this system the matter and antimatter (electrons and positrons) are destroyed and changed to non-matter (the photons). However, the total mass and total energy do not change during this interaction. The photons each have no rest mass but nonetheless have radiant energy which exhibits the same inertia as did the two original particles. This is a reversible process – the inverse process is called pair creation – in which the rest mass of particles is created from the radiant energy of two (or more) annihilating photons. In general relativity, the stress–energy tensor serves as the source term for the gravitational field, in rough analogy to the way mass serves as the source term in the non-relativistic Newtonian approximation. Energy and mass are manifestations of one and the same underlying physical property of a system. This property is responsible for the inertia and strength of gravitational interaction of the system ("mass manifestations"), and is also responsible for the potential ability of the system to perform work or heating ("energy manifestations"), subject to the limitations of other physical laws. In classical physics, energy is a scalar quantity, the canonical conjugate to time. In special relativity energy is also a scalar (although not a Lorentz scalar but a time component of the energy–momentum 4-vector). In other words, energy is invariant with respect to rotations of space, but not invariant with respect to rotations of spacetime (= boosts). == Transformation == Energy may be transformed between different forms at various efficiencies.
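The electron–positron annihilation example above can be made quantitative with E0 = m0c²: in the center-of-momentum frame, and neglecting any kinetic energy of the pair, each annihilation photon carries the rest energy of one electron. A minimal Python check using standard constants:

```python
# Rest energy E0 = m0·c^2 applied to electron-positron annihilation: each of
# the two photons carries the rest energy of one particle (center-of-momentum
# frame, kinetic energy neglected).

M_E = 9.1093837e-31    # electron rest mass, kg
C = 2.99792458e8       # speed of light, m/s
EV = 1.602176634e-19   # joules per electronvolt

e0_joules = M_E * C**2
e0_kev = e0_joules / EV / 1e3
print(f"electron rest energy: {e0_kev:.1f} keV")   # ~511 keV per photon
```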
Items that transform between these forms are called transducers. Examples of transducers include a battery (from chemical energy to electric energy), a dam (from gravitational potential energy to kinetic energy of moving water (and the blades of a turbine) and ultimately to electric energy through an electric generator), and a heat engine (from heat to work). Examples of energy transformation include generating electric energy from heat energy via a steam turbine, or lifting an object against gravity using electrical energy driving a crane motor. Lifting against gravity performs mechanical work on the object and stores gravitational potential energy in the object. If the object falls to the ground, gravity does mechanical work on the object, transforming the potential energy in the gravitational field to kinetic energy, which is then released as heat on impact with the ground. The Sun transforms nuclear potential energy to other forms of energy; its total mass does not decrease due to the transformation itself (since it still contains the same total energy even in different forms), but its mass does decrease when the energy escapes out to its surroundings, largely as radiant energy. There are strict limits to how efficiently heat can be converted into work in a cyclic process, e.g. in a heat engine, as described by Carnot's theorem and the second law of thermodynamics. However, some energy transformations can be quite efficient. The direction of transformations in energy (what kind of energy is transformed to what other kind) is often determined by entropy (equal energy spread among all available degrees of freedom) considerations. In practice all energy transformations are permitted on a small scale, but certain larger transformations are not permitted because it is statistically unlikely that energy or matter will randomly move into more concentrated forms or smaller spaces. Energy transformations in the universe over time are characterized by various kinds of potential energy, available since the Big Bang, being "released" (transformed to more active types of energy such as kinetic or radiant energy) when a triggering mechanism is available. Familiar examples of such processes include nucleosynthesis, a process ultimately using the gravitational potential energy released from the gravitational collapse of supernovae to "store" energy in the creation of heavy isotopes (such as uranium and thorium), and nuclear decay, a process in which energy is released that was originally stored in these heavy elements, before they were incorporated into the Solar System and the Earth. This energy is triggered and released in nuclear fission bombs or in civil nuclear power generation. Similarly, in the case of a chemical explosion, chemical potential energy is transformed to kinetic and thermal energy in a very short time. Yet another example is that of a pendulum. At its highest points the kinetic energy is zero and the gravitational potential energy is at its maximum. At its lowest point the kinetic energy is at its maximum and is equal to the decrease in potential energy. If one (unrealistically) assumes that there is no friction or other losses, the conversion of energy between these processes would be perfect, and the pendulum would continue swinging forever. Energy is also transferred from potential energy ( E p {\displaystyle E_{p}} ) to kinetic energy ( E k {\displaystyle E_{k}} ) and then back to potential energy constantly. This is referred to as conservation of energy.
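The idealized frictionless pendulum described above can be simulated in a few lines. The Python sketch below, with purely illustrative parameters, integrates the pendulum equation and prints E p, E k, and their sum, which stays (numerically almost) constant:

```python
import math

# Idealized (frictionless) pendulum: E_p and E_k trade off while their sum
# stays constant. Parameters are illustrative.

g, length, mass = 9.81, 1.0, 1.0       # SI units; assumed values
theta0 = math.radians(30)              # release angle

def energies(theta, omega):
    h = length * (1 - math.cos(theta))                       # height above lowest point
    return mass * g * h, 0.5 * mass * (length * omega) ** 2  # (E_p, E_k)

# semi-implicit Euler integration of theta'' = -(g/L)·sin(theta)
theta, omega, dt = theta0, 0.0, 1e-4
for step in range(25000):
    omega += -(g / length) * math.sin(theta) * dt
    theta += omega * dt
    if step % 5000 == 0:
        ep, ek = energies(theta, omega)
        print(f"E_p={ep:.4f} J  E_k={ek:.4f} J  total={ep+ek:.4f} J")
# The total stays near m·g·L·(1 - cos(theta0)), about 1.314 J here.
```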
In this isolated system, energy cannot be created or destroyed; therefore, the initial energy and the final energy will be equal to each other. This can be demonstrated by the following: E p , initial + E k , initial = E p , final + E k , final {\displaystyle E_{p,{\text{initial}}}+E_{k,{\text{initial}}}=E_{p,{\text{final}}}+E_{k,{\text{final}}}} The equation can then be simplified further since E p = m g h {\displaystyle E_{p}=mgh} (mass times acceleration due to gravity times the height) and E k = 1 2 m v 2 {\textstyle E_{k}={\frac {1}{2}}mv^{2}} (half mass times velocity squared). Then the total amount of energy can be found by adding E p + E k = E total {\displaystyle E_{p}+E_{k}=E_{\text{total}}} . === Conservation of energy and mass in transformation === Energy gives rise to weight when it is trapped in a system with zero momentum, where it can be weighed. It is also equivalent to mass, and this mass is always associated with it. Mass is also equivalent to a certain amount of energy, and likewise always appears associated with it, as described in mass–energy equivalence. The formula E = mc2, derived by Albert Einstein (1905), quantifies the relationship between relativistic mass and energy within the concept of special relativity. In different theoretical frameworks, similar formulas were derived by J.J. Thomson (1881), Henri Poincaré (1900), Friedrich Hasenöhrl (1904) and others (see Mass–energy equivalence#History for further information). Part of the rest energy (equivalent to rest mass) of matter may be converted to other forms of energy (still exhibiting mass), but neither energy nor mass can be destroyed; rather, both remain constant during any process. However, since c 2 {\displaystyle c^{2}} is extremely large relative to ordinary human scales, the conversion of an everyday amount of rest mass (for example, 1 kg) from rest energy to other forms of energy (such as kinetic energy, thermal energy, or the radiant energy carried by light and other radiation) can liberate tremendous amounts of energy (~ 9×1016 joules, equivalent to 21 megatons of TNT), as can be seen in nuclear reactors and nuclear weapons. Conversely, the mass equivalent of an everyday amount of energy is minuscule, which is why a loss of energy (loss of mass) from most systems is difficult to measure on a weighing scale, unless the energy loss is very large. Examples of large transformations between rest energy (of matter) and other forms of energy (e.g., kinetic energy into particles with rest mass) are found in nuclear physics and particle physics. Often, however, the complete conversion of matter (such as atoms) to non-matter (such as photons) is forbidden by conservation laws. === Reversible and non-reversible transformations === Thermodynamics divides energy transformation into two kinds: reversible processes and irreversible processes. An irreversible process is one in which energy is dissipated (spread) into empty energy states available in a volume, from which it cannot be recovered into more concentrated forms (fewer quantum states), without degradation of even more energy. A reversible process is one in which this sort of dissipation does not happen. For example, conversion of energy from one type of potential field to another is reversible, as in the pendulum system described above. In processes where heat is generated, quantum states of lower energy, present as possible excitations in fields between atoms, act as a reservoir for part of the energy, from which it cannot be recovered, in order to be converted with 100% efficiency into other forms of energy.
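The figures quoted above for converting 1 kg of rest mass (about 9×1016 J, or roughly 21 megatons of TNT) follow directly from E = mc². A quick Python check, using the standard convention 1 Mt TNT = 4.184×1015 J:

```python
# Checking the quoted figures: rest energy of 1 kg and its TNT equivalent.

C = 2.99792458e8            # speed of light, m/s
MEGATON_TNT_J = 4.184e15    # joules per megaton of TNT (standard convention)

rest_energy = 1.0 * C**2    # E = m·c^2 for m = 1 kg
print(f"{rest_energy:.2e} J")                       # ~8.99e16 J (~9e16)
print(f"{rest_energy / MEGATON_TNT_J:.1f} Mt TNT")  # ~21.5 megatons
```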
In this case, the energy must partly stay as thermal energy and cannot be completely recovered as usable energy, except at the price of an increase in some other kind of heat-like increase in disorder in quantum states, in the universe (such as an expansion of matter, or a randomization in a crystal). As the universe evolves with time, more and more of its energy becomes trapped in irreversible states (i.e., as heat or as other kinds of increases in disorder). This has led to the hypothesis of the inevitable thermodynamic heat death of the universe. In this heat death the energy of the universe does not change, but the fraction of energy which is available to do work through a heat engine, or be transformed to other usable forms of energy (through the use of generators attached to heat engines), continues to decrease. == Conservation of energy == The fact that energy can be neither created nor destroyed is called the law of conservation of energy. In the form of the first law of thermodynamics, this states that a closed system's energy is constant unless energy is transferred in or out as work or heat, and that no energy is lost in transfer. The total inflow of energy into a system must equal the total outflow of energy from the system, plus the change in the energy contained within the system. Whenever one measures (or calculates) the total energy of a system of particles whose interactions do not depend explicitly on time, it is found that the total energy of the system always remains constant. While heat can always be fully converted into work in a reversible isothermal expansion of an ideal gas, for cyclic processes of practical interest in heat engines the second law of thermodynamics states that the system doing work always loses some energy as waste heat. This creates a limit to the amount of heat energy that can do work in a cyclic process, a limit called the available energy. Mechanical and other forms of energy can be transformed in the other direction into thermal energy without such limitations. The total energy of a system can be calculated by adding up all forms of energy in the system. Richard Feynman said during a 1961 lecture: There is a fact, or if you wish, a law, governing all natural phenomena that are known to date. There is no known exception to this law – it is exact so far as we know. The law is called the conservation of energy. It states that there is a certain quantity, which we call energy, that does not change in manifold changes which nature undergoes. That is a most abstract idea, because it is a mathematical principle; it says that there is a numerical quantity which does not change when something happens. It is not a description of a mechanism, or anything concrete; it is just a strange fact that we can calculate some number and when we finish watching nature go through her tricks and calculate the number again, it is the same. Most kinds of energy (with gravitational energy being a notable exception) are subject to strict local conservation laws as well. In this case, energy can only be exchanged between adjacent regions of space, and all observers agree as to the volumetric density of energy in any given space. There is also a global law of conservation of energy, stating that the total energy of the universe cannot change; this is a corollary of the local law, but not vice versa. This law is a fundamental principle of physics. 
As shown rigorously by Noether's theorem, the conservation of energy is a mathematical consequence of translational symmetry of time, a property of most phenomena below the cosmic scale that makes them independent of their locations on the time coordinate. Put differently, yesterday, today, and tomorrow are physically indistinguishable. This is because energy is the quantity which is canonically conjugate to time. This mathematical entanglement of energy and time also results in the uncertainty principle – it is impossible to define the exact amount of energy during any definite time interval (though this is practically significant only for very short time intervals). The uncertainty principle should not be confused with energy conservation – rather it provides mathematical limits to which energy can in principle be defined and measured. Each of the basic forces of nature is associated with a different type of potential energy, and all types of potential energy (like all other types of energy) appear as system mass, whenever present. For example, a compressed spring will be slightly more massive than before it was compressed. Likewise, whenever energy is transferred between systems by any mechanism, an associated mass is transferred with it. In quantum mechanics energy is expressed using the Hamiltonian operator. On any time scale, the uncertainty in the energy is given by Δ E Δ t ≥ ℏ 2 {\displaystyle \Delta E\Delta t\geq {\frac {\hbar }{2}}} which is similar in form to the Heisenberg Uncertainty Principle (but not really mathematically equivalent thereto, since H and t are not dynamically conjugate variables, neither in classical nor in quantum mechanics). In particle physics, this inequality permits a qualitative understanding of virtual particles, which carry momentum. The exchange of virtual particles with real particles is responsible for the creation of all known fundamental forces (more accurately known as fundamental interactions). Virtual photons are also responsible for the electrostatic interaction between electric charges (which results in Coulomb's law), for spontaneous radiative decay of excited atomic and nuclear states, for the Casimir force, for the Van der Waals force and some other observable phenomena. == Energy transfer == === Closed systems === Energy transfer can be considered for the special case of systems which are closed to transfers of matter. The portion of the energy which is transferred by conservative forces over a distance is measured as the work the source system does on the receiving system. The portion of the energy which does not do work during the transfer is called heat. Energy can be transferred between systems in a variety of ways. Examples include the transmission of electromagnetic energy via photons, physical collisions which transfer kinetic energy, tidal interactions, and the conductive transfer of thermal energy. Energy is strictly conserved and is also locally conserved wherever it can be defined. In thermodynamics, for closed systems, the process of energy transfer is described by the first law: E = W + Q {\displaystyle E=W+Q} where E {\displaystyle E} is the amount of energy transferred, W {\displaystyle W} represents the work done on or by the system, and Q {\displaystyle Q} represents the heat flow into or out of the system. As a simplification, the heat term, Q {\displaystyle Q} , can sometimes be ignored, especially for fast processes involving gases, which are poor conductors of heat, or when the thermal efficiency of the transfer is high.
For such adiabatic processes, E = W {\displaystyle E=W} . This simplified equation is the one used to define the joule, for example. === Open systems === Beyond the constraints of closed systems, open systems can gain or lose energy in association with matter transfer (this process is illustrated by injection of an air-fuel mixture into a car engine, a system which gains in energy thereby, without addition of either work or heat). Denoting this energy by E matter {\displaystyle E_{\text{matter}}} , one may write E = W + Q + E matter {\displaystyle E=W+Q+E_{\text{matter}}} . == Thermodynamics == === Internal energy === Internal energy is the sum of all microscopic forms of energy of a system. It is the energy needed to create the system. It is related to the potential energy, e.g., molecular structure, crystal structure, and other geometric aspects, as well as the motion of the particles, in the form of kinetic energy. Thermodynamics is chiefly concerned with changes in internal energy and not its absolute value, which is impossible to determine with thermodynamics alone. === First law of thermodynamics === The first law of thermodynamics asserts that the total energy of a system and its surroundings (but not necessarily thermodynamic free energy) is always conserved and that heat flow is a form of energy transfer. For homogeneous systems, with a well-defined temperature and pressure, a commonly used corollary of the first law is that, for a system subject only to pressure forces and heat transfer (e.g., a cylinder-full of gas) without chemical changes, the differential change in the internal energy of the system (with a gain in energy signified by a positive quantity) is given as d E = T d S − P d V , {\displaystyle \mathrm {d} E=T\mathrm {d} S-P\mathrm {d} V\,,} where the first term on the right is the heat transferred into the system, expressed in terms of temperature T and entropy S (in which entropy increases and its change dS is positive when heat is added to the system), and the last term on the right hand side is identified as work done on the system, where pressure is P and volume V (the negative sign results since compression of the system requires work to be done on it and so the volume change, dV, is negative when work is done on the system). This equation is highly specific, ignoring all chemical, electrical, nuclear, and gravitational forces, as well as effects such as advection of any form of energy other than heat and PV-work. The general formulation of the first law (i.e., conservation of energy) is valid even in situations in which the system is not homogeneous. For these cases the change in internal energy of a closed system is expressed in a general form by d E = δ Q + δ W {\displaystyle \mathrm {d} E=\delta Q+\delta W} where δ Q {\displaystyle \delta Q} is the heat supplied to the system and δ W {\displaystyle \delta W} is the work applied to the system. === Equipartition of energy === The energy of a mechanical harmonic oscillator (a mass on a spring) is alternately kinetic and potential energy. At two points in the oscillation cycle it is entirely kinetic, and at two points it is entirely potential. Over a whole cycle, or over many cycles, average energy is equally split between kinetic and potential. This is an example of the equipartition principle: the total energy of a system with many degrees of freedom is equally split among all available degrees of freedom, on average. This principle is vitally important to understanding the behavior of a quantity closely related to energy, called entropy.
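Before turning to entropy, the equipartition statement above can be verified with a short time average over one oscillation cycle. This Python sketch uses assumed values for the mass, spring constant, and amplitude.

```python
import math

# Time-average the kinetic and potential energy of x(t) = A*cos(omega*t)
# over one period; equipartition predicts the two averages are equal.
m, k, A = 1.0, 4.0, 0.5      # mass (kg), spring constant (N/m), amplitude (m); assumed
omega = math.sqrt(k / m)     # angular frequency
period = 2 * math.pi / omega

N = 100_000
ke_avg = pe_avg = 0.0
for i in range(N):
    t = period * i / N
    x = A * math.cos(omega * t)
    v = -A * omega * math.sin(omega * t)
    ke_avg += 0.5 * m * v * v / N   # kinetic energy contribution
    pe_avg += 0.5 * k * x * x / N   # potential energy contribution

print(f"<KE> = {ke_avg:.4f} J, <PE> = {pe_avg:.4f} J")
# Both averages come out to k*A^2/4 = 0.25 J, i.e. half of the total
# energy each, as the equipartition principle predicts.
```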
Entropy is a measure of evenness of a distribution of energy between parts of a system. When an isolated system is given more degrees of freedom (i.e., given new available energy states that are the same as existing states), then total energy spreads over all available degrees equally without distinction between "new" and "old" degrees. This mathematical result is part of the second law of thermodynamics. The second law of thermodynamics is simple only for systems which are near or in a physical equilibrium state. For non-equilibrium systems, the laws governing the systems' behavior are still debatable. One of the guiding principles for these systems is the principle of maximum entropy production. It states that nonequilibrium systems behave in such a way as to maximize their entropy production.
Wikipedia/Energy_transfer
In chemistry, the lattice energy is the energy change upon formation of one mole of a crystalline compound from its infinitely separated constituents, which are assumed to initially be in the gaseous state at 0 K; energy is released in this process. It is a measure of the cohesive forces that bind crystalline solids. The magnitude of the lattice energy is connected to many other physical properties including solubility, hardness, and volatility. Since it generally cannot be measured directly, the lattice energy is usually deduced from experimental data via the Born–Haber cycle. == Lattice energy and lattice enthalpy == The concept of lattice energy was originally applied to the formation of compounds with structures like rocksalt (NaCl) and sphalerite (ZnS) where the ions occupy high-symmetry crystal lattice sites. In the case of NaCl, lattice energy is the energy change of the reaction: Na + ( g ) + Cl − ( g ) ⟶ NaCl ( s ) {\displaystyle {\ce {Na^+ (g) + Cl^- (g) -> NaCl (s)}}} which amounts to −786 kJ/mol. Some chemistry textbooks as well as the widely used CRC Handbook of Chemistry and Physics define lattice energy with the opposite sign, i.e. as the energy required to convert the crystal into infinitely separated gaseous ions in vacuum, an endothermic process. Following this convention, the lattice energy of NaCl would be +786 kJ/mol. Both sign conventions are widely used. The relationship between the lattice energy Δ U l {\displaystyle \Delta U_{l}} and the lattice enthalpy Δ H l {\displaystyle \Delta H_{l}} at pressure P {\displaystyle P} is given by the following equation: Δ U l = Δ H l − P Δ V m {\displaystyle \Delta U_{l}=\Delta H_{l}-P\Delta V_{m}} , where Δ U l {\displaystyle \Delta U_{l}} is the lattice energy (i.e., the molar internal energy change), Δ H l {\displaystyle \Delta H_{l}} is the lattice enthalpy, and Δ V m {\displaystyle \Delta V_{m}} the change of molar volume due to the formation of the lattice. Since the molar volume of the solid is much smaller than that of the gases, Δ V m < 0 {\displaystyle \Delta V_{m}<0} . The formation of a crystal lattice from ions in vacuum must lower the internal energy due to the net attractive forces involved, and so Δ U l < 0 {\displaystyle \Delta U_{l}<0} . The − P Δ V m {\displaystyle -P\Delta V_{m}} term is positive but is relatively small at low pressures, and so the value of the lattice enthalpy is also negative (and exothermic). Lattice energy and lattice enthalpy are identical at 0 K, and the difference may be disregarded in practice at normal temperatures. == Theoretical treatments == === Lattice energy of ionic compounds === The lattice energy of an ionic compound depends strongly upon the charges of the ions that comprise the solid, which must attract or repel one another via Coulomb's law. More subtly, the relative and absolute sizes of the ions influence Δ H l {\displaystyle \Delta H_{l}} . London dispersion forces also exist between ions and contribute to the lattice energy via polarization effects. For ionic compounds made up of molecular cations and/or anions, there may also be ion–dipole and dipole–dipole interactions if either molecule has a molecular dipole moment. The theoretical treatments described below are focused on compounds made of atomic cations and anions, and neglect contributions to the internal energy of the lattice from thermalized lattice vibrations.
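The size of the −PΔVm correction discussed above can be made concrete before turning to these theoretical treatments. The following Python sketch estimates it for NaCl, treating the two gaseous ions as ideal gases and neglecting the volume of the solid; both are assumptions made purely for illustration.

```python
# Rough size of the -P*dVm term for NaCl at ambient conditions.
R = 8.314     # gas constant, J/(mol K)
T = 298.15    # temperature, K
P = 101_325   # pressure, Pa

# Na+(g) + Cl-(g) -> NaCl(s): about 2 mol of gas vanish per mole of solid,
# so dVm is roughly -2RT/P and -P*dVm is roughly +2RT.
dVm = -2 * R * T / P              # change of molar volume, m^3/mol (negative)
pv_term_kJ = -P * dVm / 1000      # about +4.96 kJ/mol

print(f"-P*dVm = {pv_term_kJ:.2f} kJ/mol, vs lattice energy of -786 kJ/mol")
# The correction is under 1% of the lattice energy, which is why lattice
# energy and lattice enthalpy are interchangeable in practice.
```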
=== Born–Landé equation === In 1918 Max Born and Alfred Landé proposed that the lattice energy could be derived from the electric potential of the ionic lattice and a repulsive potential energy term. This equation estimates the lattice energy based on electrostatic interactions and a repulsive term characterized by a power-law dependence (using a Born exponent, n {\displaystyle n} ). It was published building on earlier work by Born on ionic lattices. Δ U l = − N A M z + z − e 2 4 π ε 0 r 0 ( 1 − 1 n ) {\displaystyle \Delta U_{l}=-{\frac {N_{A}Mz^{+}z^{-}e^{2}}{4\pi \varepsilon _{0}r_{0}}}\left(1-{\frac {1}{n}}\right)} where N A {\displaystyle N_{A}} is the Avogadro constant, M {\displaystyle M} is the Madelung constant, z + {\displaystyle z^{+}} / z − {\displaystyle z^{-}} are the charge numbers of the cations and anions, e {\displaystyle e} is the elementary charge (1.6022×10^−19 C), ε 0 {\displaystyle \varepsilon _{0}} is the permittivity of free space ( 4 π ε 0 {\displaystyle 4\pi \varepsilon _{0}} = 1.112×10^−10 C^2/(J·m)), r 0 {\displaystyle r_{0}} is the distance to the closest ion (nearest neighbour) and n {\displaystyle n} is the Born exponent (a number between 5 and 12, determined experimentally by measuring the compressibility of the solid, or derived theoretically). The Born–Landé equation above shows that the lattice energy of a compound depends principally on two factors: as the charges on the ions increase, the lattice energy increases (becomes more negative) when the ions are closer together, the lattice energy increases (becomes more negative) Barium oxide (BaO), for instance, which has the NaCl structure and therefore the same Madelung constant, has a bond radius of 275 picometers and a lattice energy of −3054 kJ/mol, while sodium chloride (NaCl) has a bond radius of 283 picometers and a lattice energy of −786 kJ/mol. The bond radii are similar but the charge numbers are not, with BaO having charge numbers of (+2,−2) and NaCl having (+1,−1); the Born–Landé equation predicts that the difference in charge numbers is the principal reason for the large difference in lattice energies. === Born–Mayer equation === In 1932, Born and Joseph E. Mayer refined the Born–Landé equation by replacing the power-law repulsive term with an exponential term e − r 0 / ρ {\displaystyle e^{-r_{0}/\rho }} which better accounts for the quantum mechanical repulsion effect between the ions. This equation improved the accuracy of the description of many ionic compounds: Δ U l = − N A M z + z − e 2 4 π ε 0 r 0 ( 1 − ρ r 0 ) {\displaystyle \Delta U_{l}=-{\frac {N_{A}Mz^{+}z^{-}e^{2}}{4\pi \varepsilon _{0}r_{0}}}\left(1-{\frac {\rho }{r_{0}}}\right)} where N A {\displaystyle N_{A}} is the Avogadro constant, M {\displaystyle M} is the Madelung constant, z + {\displaystyle z^{+}} / z − {\displaystyle z^{-}} are the charge numbers of the cations and anions, e {\displaystyle e} is the elementary charge (1.6022×10^−19 C), ε 0 {\displaystyle \varepsilon _{0}} is the permittivity of free space (8.854×10^−12 C^2 J^−1 m^−1), r 0 {\displaystyle r_{0}} is the distance to the closest ion and ρ {\displaystyle \rho } is a constant that depends on the compressibility of the crystal (30–34.5 pm works well for alkali halides), used to represent the repulsion between ions at short range. As before, it can be seen that large values of r 0 {\displaystyle r_{0}} result in low lattice energies, whereas high ionic charges result in high lattice energies.
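As a concrete check, the following Python sketch evaluates the Born–Landé equation for NaCl. The Madelung constant, nearest-neighbour distance, and Born exponent are standard literature values assumed here for illustration; the Born–Mayer form differs only in the repulsive correction factor.

```python
# Born-Landé estimate of the NaCl lattice energy.
N_A = 6.02214e23            # Avogadro constant, 1/mol
e = 1.6022e-19              # elementary charge, C
four_pi_eps0 = 1.11265e-10  # 4*pi*eps_0, C^2/(J m)
M = 1.7476                  # Madelung constant for the rocksalt structure (assumed)
z_plus, z_minus = 1, 1      # charge numbers of Na+ and Cl-
r0 = 2.83e-10               # nearest-neighbour distance, m
n = 8                       # Born exponent typical of NaCl (assumed)

U = -(N_A * M * z_plus * z_minus * e**2) / (four_pi_eps0 * r0) * (1 - 1 / n)
print(f"Born-Landé lattice energy: {U / 1000:.0f} kJ/mol")
# Prints about -751 kJ/mol, reasonably close to the experimental
# -786 kJ/mol quoted above; the charges enter as the product z+*z-,
# which is why BaO (+2,-2) is so much more strongly bound than NaCl.
```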
=== Kapustinskii equation === Developed in 1956 by Anatolii Kapustinskii, this is a generalized empirical equation useful for a wide range of ionic compounds, including those with complex ions. It builds upon the previous equations and provides a simplified way to estimate the lattice energy of ionic compounds based on the charges and radii of the ions. It is an approximation that simplifies calculation compared to the Born–Landé and Born–Mayer equations, and it is well suited to quick estimates where high precision is not required. Δ U l = − κ Z | z + z − | r 0 ( 1 − ρ r 0 ) {\displaystyle \Delta U_{l}=-{\frac {\kappa Z|z^{+}z^{-}|}{r_{0}}}\left(1-{\frac {\rho }{r_{0}}}\right)} where κ {\displaystyle \kappa } is the Kapustinskii constant (1.202·10^5 kJ·pm/mol), Z {\displaystyle Z} is the number of ions per formula unit, z + {\displaystyle z^{+}} / z − {\displaystyle z^{-}} are the charge numbers of the cations and anions, r 0 {\displaystyle r_{0}} is the distance to the closest ion and ρ {\displaystyle \rho } is a constant that depends on the compressibility of the crystal (30–34.5 pm works well for alkali halides), used to represent the repulsion between ions at short range. === Polarization effects === For certain ionic compounds, the calculation of the lattice energy requires the explicit inclusion of polarization effects. In these cases the polarization energy Epol associated with ions on polar lattice sites may be included in the Born–Haber cycle. As an example, one may consider the case of iron pyrite FeS2. It has been shown that neglect of polarization led to a 15% difference between theory and experiment in the case of FeS2, whereas including it reduced the error to 2%. == Representative lattice energies == The following table presents a list of lattice energies for some common compounds as well as their structure type. == See also == Bond energy Born–Haber cycle Chemical bond Enthalpy of melting Enthalpy change of solution Heat of dilution Ionic conductivity Kapustinskii equation Madelung constant
Wikipedia/Lattice_energy
In physical chemistry, the Arrhenius equation is a formula for the temperature dependence of reaction rates. The equation was proposed by Svante Arrhenius in 1889, based on the work of Dutch chemist Jacobus Henricus van 't Hoff who had noted in 1884 that the Van 't Hoff equation for the temperature dependence of equilibrium constants suggests such a formula for the rates of both forward and reverse reactions. The equation has vast and important applications in determining the rates of chemical reactions and in calculating activation energies. Arrhenius provided a physical justification and interpretation for the formula. Currently, it is best seen as an empirical relationship.: 188  It can be used to model the temperature variation of diffusion coefficients, population of crystal vacancies, creep rates, and many other thermally induced processes and reactions. The Eyring equation, developed in 1935, also expresses the relationship between rate and energy. == Formulation == The Arrhenius equation describes the exponential dependence of the rate constant of a chemical reaction on the absolute temperature as k = A e − E a R T , {\displaystyle k=Ae^{\frac {-E_{\text{a}}}{RT}},} where k is the rate constant (frequency of collisions resulting in a reaction), T is the absolute temperature, A is the pre-exponential factor or Arrhenius factor or frequency factor. Arrhenius originally considered A to be a temperature-independent constant for each chemical reaction. However more recent treatments include some temperature dependence – see § Modified Arrhenius equation below. Ea is the molar activation energy for the reaction, R is the universal gas constant. Alternatively, the equation may be expressed as k = A e − E a k B T , {\displaystyle k=Ae^{\frac {-E_{\text{a}}}{k_{\text{B}}T}},} where Ea is the activation energy for the reaction (in the same unit as kBT), kB is the Boltzmann constant. The only difference is the unit of Ea: the former form uses energy per mole, which is common in chemistry, while the latter form uses energy per molecule directly, which is common in physics. The different units are accounted for in using either the gas constant, R, or the Boltzmann constant, kB, as the multiplier of temperature T. The units of the pre-exponential factor A are identical to those of the rate constant and will vary depending on the order of the reaction. If the reaction is first order it has the unit s^−1, and for that reason it is often called the frequency factor or attempt frequency of the reaction. Most simply, k is the number of collisions that result in a reaction per second, A is the number of collisions (leading to a reaction or not) per second occurring with the proper orientation to react and e − E a R T {\displaystyle e^{\frac {-E_{\text{a}}}{RT}}} is the probability that any given collision will result in a reaction. It can be seen that either increasing the temperature or decreasing the activation energy (for example through the use of catalysts) will result in an increase in rate of reaction. Given the small temperature range of kinetic studies, it is reasonable to approximate the activation energy as being independent of the temperature.
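The formulation above is easy to explore numerically. In the Python sketch below, the pre-exponential factor and activation energy are assumed illustrative values for a first-order reaction, not data from the text.

```python
import math

R = 8.314      # gas constant, J/(mol K)
A = 1.0e13     # pre-exponential factor, 1/s (assumed)
Ea = 50_000.0  # activation energy, J/mol (assumed)

def k(T):
    """Arrhenius rate constant k = A * exp(-Ea / (R*T))."""
    return A * math.exp(-Ea / (R * T))

k1, k2 = k(298.15), k(308.15)
print(f"k(298 K) = {k1:.3e} 1/s, k(308 K) = {k2:.3e} 1/s")
print(f"ratio for a 10 K rise: {k2 / k1:.2f}")  # ~1.9: the rate roughly doubles

# Recovering Ea from two points, as an Arrhenius plot does via its slope:
slope = (math.log(k2) - math.log(k1)) / (1 / 308.15 - 1 / 298.15)
print(f"Ea recovered from slope: {-R * slope / 1000:.1f} kJ/mol")  # 50.0
```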
Under a wide range of practical conditions, the weak temperature dependence of the pre-exponential factor is likewise negligible compared to the temperature dependence of the factor ⁠ e − E a R T {\displaystyle e^{\frac {-E_{\text{a}}}{RT}}} ⁠; except in the case of "barrierless" diffusion-limited reactions, in which case the pre-exponential factor is dominant and is directly observable. With this equation it can be roughly estimated that the rate of reaction increases by a factor of about 2 to 3 for every 10 °C rise in temperature, for common values of activation energy and temperature range. The e − E a R T {\displaystyle e^{\frac {-E_{a}}{RT}}} factor denotes the fraction of molecules with energy greater than or equal to E a {\displaystyle E_{a}} . == Derivation == Van 't Hoff argued that the temperature T {\displaystyle T} of a reaction and the standard equilibrium constant k e 0 {\displaystyle k_{\text{e}}^{0}} exhibit the relation: d ln ⁡ k e 0 d T = Δ U 0 R T 2 {\displaystyle {\frac {d\ln k_{\text{e}}^{0}}{dT}}={\frac {\Delta U^{0}}{RT^{2}}}} (1) where Δ U 0 {\displaystyle \Delta U^{0}} denotes the corresponding standard internal energy change value. Let k f {\displaystyle k_{\text{f}}} and k b {\displaystyle k_{\text{b}}} respectively denote the forward and backward reaction rates of the reaction of interest, then ⁠ k e 0 = k f k b {\displaystyle k_{\text{e}}^{0}={\frac {k_{\text{f}}}{k_{\text{b}}}}} ⁠, an equation from which ln ⁡ k e 0 = ln ⁡ k f − ln ⁡ k b {\displaystyle \ln k_{\text{e}}^{0}=\ln k_{\text{f}}-\ln k_{\text{b}}} naturally follows. Substituting the expression for ln ⁡ k e 0 {\displaystyle \ln k_{\text{e}}^{0}} in eq.(1), we obtain ⁠ d ln ⁡ k f d T − d ln ⁡ k b d T = Δ U 0 R T 2 {\displaystyle {\frac {d\ln k_{\text{f}}}{dT}}-{\frac {d\ln k_{\text{b}}}{dT}}={\frac {\Delta U^{0}}{RT^{2}}}} ⁠. The preceding equation can be broken down into the following two equations: d ln ⁡ k f d T = constant + E f R T 2 {\displaystyle {\frac {d\ln k_{\text{f}}}{dT}}={\text{constant}}+{\frac {E_{\text{f}}}{RT^{2}}}} (2) and d ln ⁡ k b d T = constant + E b R T 2 {\displaystyle {\frac {d\ln k_{\text{b}}}{dT}}={\text{constant}}+{\frac {E_{\text{b}}}{RT^{2}}}} (3) where E f {\displaystyle E_{\text{f}}} and E b {\displaystyle E_{\text{b}}} are the activation energies associated with the forward and backward reactions respectively, with Δ U 0 = E f − E b {\displaystyle \Delta U^{0}=E_{\text{f}}-E_{\text{b}}} . Experimental findings suggest that the constants in eq.(2) and eq.(3) can be treated as being equal to zero, so that d ln ⁡ k f d T = E f R T 2 {\displaystyle {\frac {d\ln k_{\text{f}}}{dT}}={\frac {E_{\text{f}}}{RT^{2}}}} and d ln ⁡ k b d T = E b R T 2 {\displaystyle {\frac {d\ln k_{\text{b}}}{dT}}={\frac {E_{\text{b}}}{RT^{2}}}} Integrating these equations and taking the exponential yields the results ⁠ k f = A f e − E f / R T {\displaystyle k_{\text{f}}=A_{\text{f}}e^{-E_{\text{f}}/RT}} ⁠ and ⁠ k b = A b e − E b / R T {\displaystyle k_{\text{b}}=A_{\text{b}}e^{-E_{\text{b}}/RT}} ⁠, where each pre-exponential factor A f {\displaystyle A_{\text{f}}} or A b {\displaystyle A_{\text{b}}} is mathematically the exponential of the constant of integration for the respective indefinite integral in question. == Arrhenius plot == Taking the natural logarithm of Arrhenius equation yields: ln ⁡ k = ln ⁡ A − E a R 1 T . {\displaystyle \ln k=\ln A-{\frac {E_{\text{a}}}{R}}{\frac {1}{T}}.} Rearranging yields: ln ⁡ k = − E a R ( 1 T ) + ln ⁡ A . {\displaystyle \ln k={\frac {-E_{\text{a}}}{R}}\left({\frac {1}{T}}\right)+\ln A.} This has the same form as an equation for a straight line: y = a x + b , {\displaystyle y=ax+b,} where x is the reciprocal of T. So, when a reaction has a rate constant obeying the Arrhenius equation, a plot of ln k versus T−1 gives a straight line, whose slope and intercept can be used to determine Ea and A respectively. This procedure is common in experimental chemical kinetics. The activation energy is simply obtained by multiplying by (−R) the slope of the straight line drawn from a plot of ln k versus (1/T): E a ≡ − R [ ∂ ln ⁡ k ∂ ( 1 / T ) ] P . 
{\displaystyle E_{\text{a}}\equiv -R\left[{\frac {\partial \ln k}{\partial (1/T)}}\right]_{P}.} == Modified Arrhenius equation == The modified Arrhenius equation makes explicit the temperature dependence of the pre-exponential factor. The modified equation is usually of the form k = A T n e − E a R T . {\displaystyle k=AT^{n}e^{\frac {-E_{\text{a}}}{RT}}.} The original Arrhenius expression above corresponds to n = 0. Fitted rate constants typically lie in the range −1 < n < 1. Theoretical analyses yield various predictions for n. It has been pointed out that "it is not feasible to establish, on the basis of temperature studies of the rate constant, whether the predicted T1/2 dependence of the pre-exponential factor is observed experimentally".: 190  However, if additional evidence is available, from theory and/or from experiment (such as density dependence), there is no obstacle to incisive tests of the Arrhenius law. Another common modification is the stretched exponential form k = A exp ⁡ [ − ( E a R T ) β ] , {\displaystyle k=A\exp \left[-\left({\frac {E_{a}}{RT}}\right)^{\beta }\right],} where β is a dimensionless number of order 1. This is typically regarded as a purely empirical correction or fudge factor to make the model fit the data, but can have theoretical meaning, for example showing the presence of a range of activation energies or in special cases like the Mott variable range hopping. == Theoretical interpretation == === Arrhenius's concept of activation energy === Arrhenius argued that for reactants to transform into products, they must first acquire a minimum amount of energy, called the activation energy Ea. At an absolute temperature T, the fraction of molecules that have a kinetic energy greater than Ea can be calculated from statistical mechanics. The concept of activation energy explains the exponential nature of the relationship, and in one way or another, it is present in all kinetic theories. The calculations for reaction rate constants involve an energy averaging over a Maxwell–Boltzmann distribution with E a {\displaystyle E_{\text{a}}} as lower bound and so are often of the type of incomplete gamma functions, which turn out to be proportional to e − E a R T {\displaystyle e^{\frac {-E_{\text{a}}}{RT}}} . === Collision theory === One approach is the collision theory of chemical reactions, developed by Max Trautz and William Lewis in the years 1916–18. In this theory, molecules are supposed to react if they collide with a relative kinetic energy along their line of centers that exceeds Ea. The number of binary collisions between two unlike molecules per second per unit volume is found to be z A B = N A N B d A B 2 8 π k B T μ A B , {\displaystyle z_{AB}=N_{\text{A}}N_{\text{B}}d_{AB}^{2}{\sqrt {\frac {8\pi k_{\text{B}}T}{\mu _{AB}}}},} where NA and NB are the number densities of A and B, dAB is the average diameter of A and B, T is the temperature which is multiplied by the Boltzmann constant kB to convert to energy, and μAB is the reduced mass of A and B. The rate constant is then calculated as ⁠ k = z A B e − E a R T {\displaystyle k=z_{AB}e^{\frac {-E_{\text{a}}}{RT}}} ⁠, so that the collision theory predicts that the pre-exponential factor is equal to the collision number zAB. However for many reactions this agrees poorly with experiment, so the rate constant is written instead as ⁠ k = ρ z A B e − E a R T {\displaystyle k=\rho z_{AB}e^{\frac {-E_{\text{a}}}{RT}}} ⁠. 
Here ρ {\displaystyle \rho } is an empirical steric factor, often much less than 1.00, which is interpreted as the fraction of sufficiently energetic collisions in which the two molecules have the correct mutual orientation to react. === Transition state theory === The Eyring equation, another Arrhenius-like expression, appears in the "transition state theory" of chemical reactions, formulated by Eugene Wigner, Henry Eyring, Michael Polanyi and M. G. Evans in the 1930s. The Eyring equation can be written: k = k B T h e − Δ G ‡ R T = k B T h e Δ S ‡ R e − Δ H ‡ R T , {\displaystyle k={\frac {k_{\text{B}}T}{h}}e^{-{\frac {\Delta G^{\ddagger }}{RT}}}={\frac {k_{\text{B}}T}{h}}e^{\frac {\Delta S^{\ddagger }}{R}}e^{-{\frac {\Delta H^{\ddagger }}{RT}}},} where Δ G ‡ {\displaystyle \Delta G^{\ddagger }} is the Gibbs energy of activation, Δ S ‡ {\displaystyle \Delta S^{\ddagger }} is the entropy of activation, Δ H ‡ {\displaystyle \Delta H^{\ddagger }} is the enthalpy of activation, k B {\displaystyle k_{\text{B}}} is the Boltzmann constant, and h {\displaystyle h} is the Planck constant. At first sight this looks like an exponential multiplied by a factor that is linear in temperature. However, free energy is itself a temperature-dependent quantity. The free energy of activation Δ G ‡ = Δ H ‡ − T Δ S ‡ {\displaystyle \Delta G^{\ddagger }=\Delta H^{\ddagger }-T\Delta S^{\ddagger }} is the difference of an enthalpy term and an entropy term multiplied by the absolute temperature. The pre-exponential factor depends primarily on the entropy of activation. The overall expression again takes the form of an Arrhenius exponential (of enthalpy rather than energy) multiplied by a slowly varying function of T. The precise form of the temperature dependence depends upon the reaction, and can be calculated using formulas from statistical mechanics involving the partition functions of the reactants and of the activated complex. === Limitations of the idea of Arrhenius activation energy === Both the Arrhenius activation energy and the rate constant k are experimentally determined, and represent macroscopic reaction-specific parameters that are not simply related to threshold energies and the success of individual collisions at the molecular level. Consider a particular collision (an elementary reaction) between molecules A and B. The collision angle, the relative translational energy, the internal (particularly vibrational) energy will all determine the chance that the collision will produce a product molecule AB. Macroscopic measurements of E and k are the result of many individual collisions with differing collision parameters. To probe reaction rates at molecular level, experiments are conducted under near-collisional conditions and this subject is often called molecular reaction dynamics. Another situation where the explanation of the Arrhenius equation parameters falls short is in heterogeneous catalysis, especially for reactions that show Langmuir-Hinshelwood kinetics. Clearly, molecules on surfaces do not "collide" directly, and a simple molecular cross-section does not apply here. Instead, the pre-exponential factor reflects the travel across the surface towards the active site. There are deviations from the Arrhenius law during the glass transition in all classes of glass-forming matter. The Arrhenius law predicts that the motion of the structural units (atoms, molecules, ions, etc.) should slow down at a slower rate through the glass transition than is experimentally observed. 
In other words, the structural units slow down at a faster rate than is predicted by the Arrhenius law. This observation can be rationalized by assuming that the units must overcome an energy barrier by means of thermal activation energy: the thermal energy must be high enough to allow for translational motion of the units, which leads to viscous flow of the material. == See also == Accelerated aging Eyring equation Q10 (temperature coefficient) Van 't Hoff equation Clausius–Clapeyron relation Gibbs–Helmholtz equation Cherry blossom front – predicted using the Arrhenius equation == Bibliography == Pauling, L. C. (1988). General Chemistry. Dover Publications. Laidler, K. J. (1987). Chemical Kinetics (3rd ed.). Harper & Row. Laidler, K. J. (1993). The World of Physical Chemistry. Oxford University Press.
Wikipedia/Arrhenius_equation
Energy conversion efficiency (η) is the ratio between the useful output of an energy conversion machine and the input, in energy terms. The input, as well as the useful output, may be chemical, electric power, mechanical work, light (radiation), or heat. The resulting value, η (eta), ranges between 0 and 1. == Overview == Energy conversion efficiency depends on the usefulness of the output. All or part of the heat produced from burning a fuel may become rejected waste heat if, for example, work is the desired output from a thermodynamic cycle. A device that carries out such a transformation is called an energy converter; a light bulb, for example, falls into this category. η = P o u t P i n {\displaystyle \eta ={\frac {P_{\mathrm {out} }}{P_{\mathrm {in} }}}} Even though the definition includes the notion of usefulness, efficiency is considered a technical or physical term. Goal or mission oriented terms include effectiveness and efficacy. Generally, energy conversion efficiency is a dimensionless number between 0 and 1.0, or 0% to 100%. Efficiencies cannot exceed 100%, since that would amount to a perpetual motion machine, which is impossible. However, other effectiveness measures that can exceed 1.0 are used for refrigerators, heat pumps and other devices that move heat rather than convert it. Such a measure is not called an efficiency, but a coefficient of performance, or COP. It is a ratio of useful heating or cooling provided relative to the work (energy) required. Higher COPs equate to higher efficiency, lower energy (power) consumption and thus lower operating costs. The COP usually exceeds 1, especially in heat pumps, because instead of just converting work to heat (which, if 100% efficient, would be a COP of 1), it pumps additional heat from a heat source to where the heat is required. Most air conditioners have a COP of 2.3 to 3.5. When talking about the efficiency of heat engines and power stations the convention should be stated, i.e., HHV (a.k.a. Gross Heating Value) or LHV (a.k.a. Net Heating Value), and whether gross output (at the generator terminals) or net output (at the power station fence) is being considered. The two are separate, but both must be stated. Failure to do so causes endless confusion. Related, more specific terms include Electrical efficiency, useful power output per electrical power consumed; Mechanical efficiency, where one form of mechanical energy (e.g. potential energy of water) is converted to mechanical energy (work); Thermal efficiency or Fuel efficiency, useful heat and/or work output per input energy such as the fuel consumed; 'Total efficiency', e.g., for cogeneration, useful electric power and heat output per fuel energy consumed, same as the thermal efficiency; Luminous efficiency, the portion of the emitted electromagnetic radiation that is usable for human vision. == Chemical conversion efficiency == The change of Gibbs energy of a defined chemical transformation at a particular temperature is the minimum theoretical quantity of energy required to make that change occur (if the change in Gibbs energy between reactants and products is positive) or the maximum theoretical energy that might be obtained from that change (if the change in Gibbs energy between reactants and products is negative).
The energy efficiency of a process involving chemical change may be expressed relative to these theoretical minima or maxima. The difference between the change of enthalpy and the change of Gibbs energy of a chemical transformation at a particular temperature indicates the heat input required or the heat removal (cooling) required to maintain that temperature. A fuel cell may be considered to be the reverse of electrolysis. For example, an ideal fuel cell operating at a temperature of 25 °C having gaseous hydrogen and gaseous oxygen as inputs and liquid water as the output could produce a theoretical maximum amount of electrical energy of 237.129 kJ (0.06587 kWh) per gram mol (18.0154 gram) of water produced and would require 48.701 kJ (0.01353 kWh) per gram mol of water produced of heat energy to be removed from the cell to maintain that temperature. An ideal electrolysis unit operating at a temperature of 25 °C having liquid water as the input and gaseous hydrogen and gaseous oxygen as products would require a theoretical minimum input of electrical energy of 237.129 kJ (0.06587 kWh) per gram mol (18.0154 gram) of water consumed and would require 48.701 kJ (0.01353 kWh) per gram mol of water consumed of heat energy to be added to the unit to maintain that temperature. It would operate at a cell voltage of 1.23 V. For a water electrolysis unit operating at a constant temperature of 25 °C without the input of any additional heat energy, electrical energy would have to be supplied at a rate equivalent of the enthalpy (heat) of reaction or 285.830 kJ (0.07940 kWh) per gram mol of water consumed. It would operate at a cell voltage of 1.48 V. The electrical energy input of this cell is 1.20 times greater than the theoretical minimum so the energy efficiency is 0.83 compared to the ideal cell. A water electrolysis unit operating with a higher voltage than 1.48 V and at a temperature of 25 °C would have to have heat energy removed in order to maintain a constant temperature and the energy efficiency would be less than 0.83. The large entropy difference between liquid water and gaseous hydrogen plus gaseous oxygen accounts for the significant difference between the Gibbs energy of reaction and the enthalpy (heat) of reaction. == Fuel heating values and efficiency == In Europe the usable energy content of a fuel is typically calculated using the lower heating value (LHV) of that fuel, the definition of which assumes that the water vapor produced during fuel combustion (oxidation) remains gaseous, and is not condensed to liquid water so the latent heat of vaporization of that water is not usable. Using the LHV, a condensing boiler can achieve a "heating efficiency" in excess of 100% (this does not violate the first law of thermodynamics as long as the LHV convention is understood, but does cause confusion). This is because the apparatus recovers part of the heat of vaporization, which is not included in the definition of the lower heating value of a fuel. In the U.S. and elsewhere, the higher heating value (HHV) is used, which includes the latent heat for condensing the water vapor, and thus the thermodynamic maximum of 100% efficiency cannot be exceeded. == Wall-plug efficiency, luminous efficiency, and efficacy == In optical systems such as lighting and lasers, the energy conversion efficiency is often referred to as wall-plug efficiency. The wall-plug efficiency is the measure of output radiative energy, in watts (joules per second), per total input electrical energy in watts.
The output energy is usually measured in terms of absolute irradiance and the wall-plug efficiency is given as a percentage of the total input energy, with the inverse percentage representing the losses. The wall-plug efficiency differs from the luminous efficiency in that wall-plug efficiency describes the direct output/input conversion of energy (the amount of work that can be performed) whereas luminous efficiency takes into account the human eye's varying sensitivity to different wavelengths (how well it can illuminate a space). Instead of using watts, the power of a light source to produce wavelengths proportional to human perception is measured in lumens. The human eye is most sensitive to wavelengths of 555 nanometers (greenish-yellow) but the sensitivity decreases dramatically to either side of this wavelength, following a Gaussian power-curve and dropping to zero sensitivity at the red and violet ends of the spectrum. Because of this the eye does not usually see all of the wavelengths emitted by a particular light-source, nor does it see all of the wavelengths within the visual spectrum equally. Yellow and green, for example, make up more than 50% of what the eye perceives as being white, even though in terms of radiant energy white-light is made from equal portions of all colors (i.e., a 5 mW green laser appears brighter than a 5 mW red laser, yet the red laser stands out better against a white background). Therefore, the radiant intensity of a light source may be much greater than its luminous intensity, meaning that the source emits more energy than the eye can use. Likewise, the lamp's wall-plug efficiency is usually greater than its luminous efficiency. The effectiveness of a light source to convert electrical energy into wavelengths of visible light, in proportion to the sensitivity of the human eye, is referred to as luminous efficacy, which is measured in units of lumens per watt (lm/W) of electrical input energy. Unlike efficacy (effectiveness), which is a unit of measurement, efficiency is a unitless number expressed as a percentage, requiring only that the input and output units be of the same type. The luminous efficiency of a light source is thus the percentage of luminous efficacy per theoretical maximum efficacy at a specific wavelength. The amount of energy carried by a photon of light is determined by its wavelength. In lumens, this energy is offset by the eye's sensitivity to the selected wavelengths. For example, a green laser pointer can have greater than 30 times the apparent brightness of a red pointer of the same power output. At 555 nm in wavelength, 1 watt of radiant energy is equivalent to 683 lumens, thus a monochromatic light source at this wavelength, with a luminous efficacy of 683 lm/W, would have a luminous efficiency of 100%. The theoretical-maximum efficacy lowers for wavelengths at either side of 555 nm. For example, low-pressure sodium lamps produce monochromatic light at 589 nm with a luminous efficacy of 200 lm/W, which is the highest of any lamp. The theoretical-maximum efficacy at that wavelength is 525 lm/W, so the lamp has a luminous efficiency of 38.1%. Because the lamp is monochromatic, the luminous efficiency nearly matches the wall-plug efficiency of < 40%. Calculations for luminous efficiency become more complex for lamps that produce white light or a mixture of spectral lines.
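The low-pressure sodium example above reduces to a one-line calculation; in the Python sketch below, the efficacy figures are the ones quoted in the text.

```python
# Luminous efficiency of a monochromatic lamp = efficacy / theoretical maximum.
lamp_efficacy = 200.0       # lm/W for low-pressure sodium at 589 nm (from text)
max_efficacy_589nm = 525.0  # theoretical maximum at 589 nm, lm/W (from text)

luminous_efficiency = lamp_efficacy / max_efficacy_589nm
print(f"luminous efficiency: {luminous_efficiency:.1%}")  # 38.1%
# For a monochromatic source this also tracks the wall-plug efficiency,
# matching the "< 40%" figure given above.
```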
Fluorescent lamps have higher wall-plug efficiencies than low-pressure sodium lamps, but their luminous efficacy of ~ 100 lm/W is only about half that of sodium, thus the luminous efficiency of fluorescents is lower than that of sodium lamps. A xenon flashtube has a typical wall-plug efficiency of 50–70%, exceeding that of most other forms of lighting. Because the flashtube emits large amounts of infrared and ultraviolet radiation, only a portion of the output energy is used by the eye. The luminous efficacy is therefore typically around 50 lm/W. However, not all applications for lighting involve the human eye nor are restricted to visible wavelengths. For laser pumping, the efficacy is not related to the human eye so it is not called "luminous" efficacy, but rather simply "efficacy" as it relates to the absorption lines of the laser medium. Krypton flashtubes are often chosen for pumping Nd:YAG lasers, even though their wall-plug efficiency is typically only ~ 40%. Krypton's spectral lines better match the absorption lines of the neodymium-doped crystal, thus the efficacy of krypton for this purpose is much higher than that of xenon, making it able to produce up to twice the laser output for the same electrical input. All of these terms refer to the amount of energy and lumens as they exit the light source, disregarding any losses that might occur within the lighting fixture or subsequent output optics. Luminaire efficiency refers to the total lumen output from the fixture per the lamp output. With the exception of a few light sources, such as incandescent light bulbs, most light sources have multiple stages of energy conversion between the "wall plug" (electrical input point, which may include batteries, direct wiring, or other sources) and the final light-output, with each stage producing a loss. Low-pressure sodium lamps initially convert the electrical energy using an electrical ballast, to maintain the proper current and voltage, but some energy is lost in the ballast. Similarly, fluorescent lamps also convert the electricity using a ballast (electronic efficiency). The electricity is then converted into light energy by the electrical arc (electrode efficiency and discharge efficiency). The light is then transferred to a fluorescent coating that only absorbs suitable wavelengths, with some losses of those wavelengths due to reflection off and transmission through the coating (transfer efficiency). The number of photons absorbed by the coating will not match the number then reemitted as fluorescence (quantum efficiency). Finally, due to the phenomenon of the Stokes shift, the re-emitted photons will have a longer wavelength (thus lower energy) than the absorbed photons (fluorescence efficiency). In very similar fashion, lasers also experience many stages of conversion between the wall plug and the output aperture. The terms "wall-plug efficiency" or "energy conversion efficiency" are therefore used to denote the overall efficiency of the energy-conversion device, deducting the losses from each stage, although this may exclude external components needed to operate some devices, such as coolant pumps.
Wikipedia/Energy_conversion_efficiency
In physics, and in particular as measured by radiometry, radiant energy is the energy of electromagnetic and gravitational radiation. As energy, its SI unit is the joule (J). The quantity of radiant energy may be calculated by integrating radiant flux (or power) with respect to time. The symbol Qe is often used throughout literature to denote radiant energy ("e" for "energetic", to avoid confusion with photometric quantities). In branches of physics other than radiometry, electromagnetic energy is referred to using E or W. The term is used particularly when electromagnetic radiation is emitted by a source into the surrounding environment. This radiation may be visible or invisible to the human eye. == Terminology use and history == The term "radiant energy" is most commonly used in the fields of radiometry, solar energy, heating and lighting, but is also sometimes used in other fields (such as telecommunications). In modern applications involving transmission of power from one location to another, "radiant energy" is sometimes used to refer to the electromagnetic waves themselves, rather than their energy (a property of the waves). In the past, the term "electro-radiant energy" has also been used. The term "radiant energy" also applies to gravitational radiation. For example, the first gravitational waves ever observed were produced by a black hole collision that emitted about 5.3×10^47 joules of gravitational-wave energy. == Analysis == Because electromagnetic (EM) radiation can be conceptualized as a stream of photons, radiant energy can be viewed as photon energy – the energy carried by these photons. Alternatively, EM radiation can be viewed as an electromagnetic wave, which carries energy in its oscillating electric and magnetic fields. These two views are completely equivalent and are reconciled to one another in quantum field theory (see wave-particle duality). EM radiation can have various frequencies. The bands of frequency present in a given EM signal may be sharply defined, as is seen in atomic spectra, or may be broad, as in blackbody radiation. In the particle picture, the energy carried by each photon is proportional to its frequency. In the wave picture, the energy of a monochromatic wave is proportional to its intensity. This implies that if two EM waves have the same intensity, but different frequencies, the one with the higher frequency "contains" fewer photons, since each photon is more energetic. When EM waves are absorbed by an object, the energy of the waves is converted to heat (or converted to electricity in case of a photoelectric material). This is a very familiar effect, since sunlight warms surfaces that it irradiates. Often this phenomenon is associated particularly with infrared radiation, but any kind of electromagnetic radiation will warm an object that absorbs it. EM waves can also be reflected or scattered, in which case their energy is redirected or redistributed as well. === Open systems === Radiant energy is one of the mechanisms by which energy can enter or leave an open system. Such a system can be man-made, such as a solar energy collector, or natural, such as the Earth's atmosphere. In geophysics, most atmospheric gases, including the greenhouse gases, allow the Sun's short-wavelength radiant energy to pass through to the Earth's surface, heating the ground and oceans. The absorbed solar energy is partly re-emitted as longer wavelength radiation (chiefly infrared radiation), some of which is absorbed by the atmospheric greenhouse gases.
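The photon-counting point made above can be made concrete. The following Python sketch compares two beams of equal power at assumed wavelengths of 500 nm and 1000 nm.

```python
# Photons per second carried by 1 W beams at two wavelengths.
h = 6.62607e-34  # Planck constant, J s
c = 2.99792e8    # speed of light, m/s
power = 1.0      # beam power, W (assumed)

for wavelength_nm in (500, 1000):
    E_photon = h * c / (wavelength_nm * 1e-9)  # energy per photon, J
    rate = power / E_photon                    # photons per second
    print(f"{wavelength_nm} nm: {E_photon:.3e} J/photon, {rate:.2e} photons/s")
# The 500 nm beam delivers half as many photons per second as the 1000 nm
# beam, because each of its photons carries twice the energy.
```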
Radiant energy is produced in the sun as a result of nuclear fusion. == Applications == Radiant energy is used for radiant heating. It can be generated electrically by infrared lamps, or can be absorbed from sunlight and used to heat water. The heat energy is emitted from a warm element (floor, wall, overhead panel) and warms people and other objects in rooms rather than directly heating the air. Because of this, the air temperature may be lower than in a conventionally heated building, even though the room appears just as comfortable. Various other applications of radiant energy have been devised. These include treatment and inspection, separating and sorting, medium of control, and medium of communication. Many of these applications involve a source of radiant energy and a detector that responds to that radiation and provides a signal representing some characteristic of the radiation. Radiant energy detectors produce responses to incident radiant energy either as an increase or decrease in electric potential or current flow or some other perceivable change, such as exposure of photographic film.
Wikipedia/Light_energy
Exergy, often referred to as "available energy" or "useful work potential", is a fundamental concept in the field of thermodynamics and engineering. It plays a crucial role in understanding and quantifying the quality of energy within a system and its potential to perform useful work. Exergy analysis has widespread applications in various fields, including energy engineering, environmental science, and industrial processes. From a scientific and engineering perspective, second-law-based exergy analysis is valuable because it provides a number of benefits over energy analysis alone. These benefits include the basis for determining energy quality (or exergy content), enhancing the understanding of fundamental physical phenomena, and improving design, performance evaluation and optimization efforts. In thermodynamics, the exergy of a system is the maximum useful work that can be produced as the system is brought into equilibrium with its environment by an ideal process. The specification of an "ideal process" allows the determination of "maximum work" production. From a conceptual perspective, exergy is the "ideal" potential of a system to do work or cause a change as it achieves equilibrium with its environment. Exergy is also known as "availability". Exergy is non-zero when there is dis-equilibrium between the system and its environment, and exergy is zero when equilibrium is established (the state of maximum entropy for the system plus its environment). Determining exergy was one of the original goals of thermodynamics. The term "exergy" was coined in 1956 by Zoran Rant (1904–1972) by using the Greek ex and ergon, meaning "from work",[3] but the concept had been earlier developed by J. Willard Gibbs (the namesake of Gibbs free energy) in 1873.[4] Energy is neither created nor destroyed, but is simply converted from one form to another (see First law of thermodynamics). In contrast to energy, exergy is always destroyed when a process is non-ideal or irreversible (see Second law of thermodynamics). To illustrate, when someone states, "I used a lot of energy running up that hill", the statement contradicts the first law. Although the energy is not consumed, intuitively we perceive that something is. The key point is that energy has quality, or measures of usefulness, and it is this energy quality (or exergy content) that is consumed or destroyed. This occurs because everything, that is, all real processes, produce entropy, and the destruction of exergy, or the rate of "irreversibility", is proportional to this entropy production (Gouy–Stodola theorem), where entropy production may be calculated as the net increase in entropy of the system together with its surroundings. Entropy production is due to things such as friction, heat transfer across a finite temperature difference and mixing. In distinction from "exergy destruction", "exergy loss" is the transfer of exergy across the boundaries of a system, such as with mass or heat loss, where the exergy flow or transfer is potentially recoverable. The energy quality or exergy content of these mass and energy losses are low in many situations or applications, where exergy content is defined as the ratio of exergy to energy on a percentage basis. For example, while the exergy content of electrical work produced by a thermal power plant is 100%, the exergy content of low-grade heat rejected by the power plant, at, say, 41 degrees Celsius, relative to an environment temperature of 25 degrees Celsius, is only 5%. 
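The exergy-content figures quoted above follow from the Carnot factor. The short Python sketch below reproduces the 5% value for the low-grade heat example; the 500 °C case is an added illustrative assumption.

```python
# Exergy content of heat at temperature T, environment at T0: 1 - T0/T.
def exergy_content_of_heat(T_source_C, T_env_C):
    T, T0 = T_source_C + 273.15, T_env_C + 273.15  # convert to kelvin
    return 1.0 - T0 / T

print(f"{exergy_content_of_heat(41, 25):.1%}")   # ~5%: low-grade waste heat
print(f"{exergy_content_of_heat(500, 25):.1%}")  # ~61%: high-grade heat
# Electrical work, by contrast, is pure exergy: its exergy content is 100%.
```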
== Definitions == Exergy is a combination property of a system and its environment because it depends on the state of both and is a consequence of dis-equilibrium between them. Exergy is neither a thermodynamic property of matter nor a thermodynamic potential of a system. Exergy and energy always have the same units, and the joule (symbol: J) is the unit of energy in the International System of Units (SI). The internal energy of a system is always measured from a fixed reference state and is therefore always a state function. Some authors define the exergy of the system to change when the environment changes, in which case it is not a state function. Other writers prefer a slightly alternate definition of the available energy or exergy of a system where the environment is firmly defined, as an unchangeable absolute reference state, and in this alternate definition, exergy becomes a property of the state of the system alone. However, from a theoretical point of view, exergy may be defined without reference to any environment. If the intensive properties of different finitely extended elements of a system differ, there is always the possibility to extract mechanical work from the system. Yet, with such an approach one has to abandon the requirement that the environment is large enough relative to the "system" such that its intensive properties, such as temperature, are unchanged due to its interaction with the system. So that exergy is defined in an absolute sense, it will be assumed in this article that, unless otherwise stated, the environment's intensive properties are unchanged by its interaction with the system. For a heat engine, the exergy can be simply defined in an absolute sense, as the energy input times the Carnot efficiency, assuming the low-temperature heat reservoir is at the temperature of the environment. Since many systems can be modeled as a heat engine, this definition can be useful for many applications. === Terminology === The term exergy is also used, by analogy with its physical definition, in information theory related to reversible computing. Exergy is also synonymous with available energy, exergic energy, essergy (considered archaic), utilizable energy, available useful work, maximum (or minimum) work, maximum (or minimum) work content, reversible work, ideal work, availability or available work. === Implications === The exergy destruction of a cycle is the sum of the exergy destruction of the processes that compose that cycle. The exergy destruction of a cycle can also be determined without tracing the individual processes by considering the entire cycle as a single process and using one of the exergy destruction equations. === Examples === For two thermal reservoirs at temperatures TH and TC < TH, as considered by Carnot, the exergy is the work W that can be done by a reversible engine. Specifically, with QH the heat provided by the hot reservoir, Carnot's analysis gives W/QH = (TH − TC)/TH. Although exergy or maximum work is determined by conceptually utilizing an ideal process, it is the property of a system in a given environment. Exergy analysis is not merely for reversible cycles, but for all cycles (including non-cyclic or non-ideal), and indeed for all thermodynamic processes. As an example, consider the non-cyclic process of expansion of an ideal gas. For free expansion in an isolated system, the energy and temperature do not change, so by energy conservation no work is done. 
On the other hand, for expansion done against a moveable wall that always matches the (varying) pressure of the expanding gas (so the wall develops negligible kinetic energy), with no heat transfer (adiabatic wall), the maximum work would be done. This corresponds to the exergy. Thus, in terms of exergy, Carnot considered the exergy for a cyclic process with two thermal reservoirs (fixed temperatures). Just as the work done depends on the process, so the exergy depends on the process, reducing to Carnot's result for Carnot's case. As early as 1849, W. Thomson (from 1892, Lord Kelvin) was exercised by what he called "lost energy", which appears to be the same as "destroyed energy" and what has been called "anergy". In 1874 he wrote that "lost energy" is the same as the energy dissipated by, e.g., friction, electrical conduction (electric field-driven charge diffusion), heat conduction (temperature-driven thermal diffusion), viscous processes (transverse momentum diffusion) and particle diffusion (ink in water). On the other hand, Kelvin did not indicate how to compute the "lost energy". This awaited the 1931 and 1932 works of Onsager on irreversible processes.
== Mathematical description ==
=== An application of the second law of thermodynamics ===
Exergy uses system boundaries in a way that is unfamiliar to many. We imagine the presence of a Carnot engine between the system and its reference environment even though this engine does not exist in the real world. Its only purpose is to measure the results of a "what-if" scenario to represent the most efficient work interaction possible between the system and its surroundings. If a real-world reference environment is chosen that behaves like an unlimited reservoir that remains unaltered by the system, then Carnot's speculation about the consequences of a system heading towards equilibrium with time is addressed by two equivalent mathematical statements. Let B, the exergy or available work, decrease with time, and Stotal, the entropy of the system and its reference environment enclosed together in a larger isolated system, increase with time. For macroscopic systems (above the thermodynamic limit), these statements are both expressions of the second law of thermodynamics if the following expression is used for exergy: {\displaystyle B=U+P_{R}V-T_{R}S-\sum _{i}\mu _{i,R}N_{i}} (2) where the extensive quantities for the system are: U = Internal energy, V = Volume, and Ni = Moles of component i. The intensive quantities for the surroundings are: PR = Pressure, TR = temperature, μi,R = Chemical potential of component i. Indeed the total entropy of the universe reads {\displaystyle S_{total}=S-U/T_{R}-P_{R}V/T_{R}+\sum _{i}\mu _{i,R}N_{i}/T_{R},} the second term {\displaystyle -U/T_{R}-P_{R}V/T_{R}+\sum _{i}\mu _{i,R}N_{i}/T_{R}} being the entropy of the surroundings to within a constant. Individual terms also often have names attached to them: {\displaystyle P_{R}V} is called "available PV work", {\displaystyle T_{R}S} is called "entropic loss" or "heat loss" and the final term is called "available chemical energy." Other thermodynamic potentials may be used to replace internal energy so long as proper care is taken in recognizing which natural variables correspond to which potential. For the recommended nomenclature of these potentials, see (Alberty, 2001)[2]. Equation (2) is useful for processes where system volume, entropy, and the number of moles of various components change because internal energy is also a function of these variables and no others.
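As a rough illustration of how such an exergy expression is evaluated in practice, the sketch below specializes the closed-system form (without chemical terms) to an ideal gas with constant specific heats, using the standard textbook expression φ = (u − u0) + P0(v − v0) − T0(s − s0); the gas properties and state values are assumptions chosen for the example, not taken from this article.

```python
# A hedged sketch of closed-system (non-flow) exergy for an ideal gas with
# constant specific heats, relative to a dead state (t0, p0):
#   phi = (u - u0) + p0*(v - v0) - t0*(s - s0)
import math

R = 287.0     # J/(kg K), gas constant for air (assumed working fluid)
CV = 718.0    # J/(kg K), constant-volume specific heat (assumed constant)
CP = CV + R   # J/(kg K)

def closed_system_exergy(t, p, t0=298.15, p0=101325.0):
    """Specific exergy (J/kg) of an ideal gas at (t, p) in kelvin and pascal."""
    du = CV * (t - t0)                                  # u - u0
    dv = R * t / p - R * t0 / p0                        # v - v0
    ds = CP * math.log(t / t0) - R * math.log(p / p0)   # s - s0
    return du + p0 * dv - t0 * ds

# Compressed hot air stores work potential; air at dead-state conditions has none.
print(closed_system_exergy(600.0, 500e3))      # ~9.3e4 J/kg > 0
print(closed_system_exergy(298.15, 101325.0))  # 0 by construction
```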
An alternative definition of internal energy does not separate available chemical potential from U. This expression is useful (when substituted into equation (1)) for processes where system volume and entropy change, but no chemical reaction occurs: In this case, a given set of chemicals at a given entropy and volume will have a single numerical value for this thermodynamic potential. A multi-phase system may complicate or simplify the problem because the Gibbs phase rule predicts that intensive quantities will no longer be completely independent from each other.
=== A historical and cultural tangent ===
In 1848, William Thomson, 1st Baron Kelvin, asked (and immediately answered) the question Is there any principle on which an absolute thermometric scale can be founded? It appears to me that Carnot's theory of the motive power of heat enables us to give an affirmative answer.[3] With the benefit of the hindsight contained in equation (5), we are able to understand the historical impact of Kelvin's idea on physics. Kelvin suggested that the best temperature scale would describe a constant ability for a unit of temperature in the surroundings to alter the available work from Carnot's engine. From equation (3): Rudolf Clausius recognized the presence of a proportionality constant in Kelvin's analysis and gave it the name entropy in 1865, from the Greek for "transformation", because it quantifies the amount of energy lost during the conversion from heat to work. The available work from a Carnot engine is at its maximum when the surroundings are at a temperature of absolute zero. Physicists then, as now, often look at a property with the word "available" or "utilizable" in its name with a certain unease. The idea of what is available raises the question "available to what?" and a concern about whether such a property is anthropocentric. Laws derived using such a property may not describe the universe but instead describe what people wish to see. The field of statistical mechanics (beginning with the work of Ludwig Boltzmann in developing the Boltzmann equation) relieved many physicists of this concern. From this discipline, we now know that macroscopic properties may all be determined from properties on a microscopic scale where entropy is more "real" than temperature itself (see Thermodynamic temperature). Microscopic kinetic fluctuations among particles cause entropic loss, and this energy is unavailable for work because these fluctuations occur randomly in all directions. The anthropocentric act is taken, in the eyes of some physicists and engineers today, when someone draws a hypothetical boundary and says, in effect: "This is my system; what occurs beyond it is surroundings." In this context, exergy is sometimes described as an anthropocentric property, both by some who use it and by some who don't. However, exergy is based on the dis-equilibrium between a system and its environment, so it is very real and necessary to define the system distinctly from its environment. It can be agreed that entropy is generally viewed as a more fundamental property of matter than exergy.
=== A potential for every thermodynamic situation ===
In addition to {\displaystyle U} and {\displaystyle U[{\boldsymbol {\mu }}]}, the other thermodynamic potentials are frequently used to determine exergy.
For a given set of chemicals at a given entropy and pressure, enthalpy H is used in the expression: {\displaystyle B=H-T_{R}S} For a given set of chemicals at a given temperature and volume, Helmholtz free energy A is used in the expression: For a given set of chemicals at a given temperature and pressure, Gibbs free energy G is used in the expression: where G is evaluated at the isothermal system temperature (T), and B is defined with respect to the isothermal temperature of the system's environment (TR). The exergy B is the energy H reduced by the product of the entropy and the environment temperature TR, which is the slope or partial derivative of the internal energy with respect to entropy in the environment. That is, higher entropy reduces the exergy or free energy available relative to the energy level H. Work can be produced from this energy, such as in an isothermal process, but any entropy generation during the process will cause the destruction of exergy (irreversibility) and the reduction of these thermodynamic potentials. Further, exergy losses can occur if mass and energy are transferred out of the system at non-ambient or elevated temperature, pressure or chemical potential. Exergy losses are potentially recoverable, though, because the exergy has not been destroyed, such as what occurs in waste heat recovery systems (although the energy quality or exergy content is typically low). As a special case, an isothermal process operating at ambient temperature will have no thermally related exergy losses.
=== Exergy Analysis involving Radiative Heat Transfer ===
All matter emits radiation continuously as a result of its non-zero (absolute) temperature. This emitted energy flow is proportional to the material's temperature raised to the fourth power. As a result, any radiation conversion device that seeks to absorb and convert radiation (while reflecting a fraction of the incoming source radiation) inherently emits its own radiation. Also, given that reflected and emitted radiation can occupy the same direction or solid angle, the entropy flows, and as a result the exergy flows, are generally not independent.
The entropy and exergy balance equations for a control volume (CV), re-stated to correctly apply to situations involving radiative transfer, are expressed as {\displaystyle {\frac {dS_{CV}}{dt}}=\int _{CVboundary}({\frac {q_{cc}}{T_{b}}}+J_{NetRad})dA+\sum _{i}^{m}({{\dot {m}}_{i}}s_{i})-\sum _{o}^{n}({{\dot {m}}_{o}}s_{o})+{\dot {S}}_{gen}} where {\displaystyle {S}_{gen}} or Π denotes entropy production within the control volume, and {\displaystyle {\frac {{dX}_{CV}}{dt}}=\int _{CVboundary}[{q_{cc}}(1-{\frac {T_{o}}{T_{b}}})+M_{NetRad}]dA-({{\dot {W}}_{CV}}-P_{o}({\frac {dV_{CV}}{dt}}))+\sum _{i}^{r}({{\dot {m}}_{i}}(h_{i}-{T_{o}}{s_{i}}))-{\dot {I}}_{CV}} This rate equation for the exergy within an open system X (Ξ or B) takes into account the exergy transfer rates across the system boundary by heat transfer (q for conduction and convection, and M for radiative fluxes), by mechanical or electrical work transfer (W), and by mass transfer (m), as well as taking into account the exergy destruction (I) that occurs within the system due to irreversibilities or non-ideal processes. Note that chemical exergy, kinetic energy, and gravitational potential energy have been excluded for simplicity. The exergy irradiance or flux M, and the exergy radiance N (where M = πN for isotropic radiation), depend on the spectral and directional distribution of the radiation (for example, see the next section on 'Exergy Flux of Radiation with an Arbitrary Spectrum'). Sunlight can be crudely approximated as blackbody radiation or, more accurately, as graybody radiation. Note that, although a graybody spectrum looks similar to a blackbody spectrum, the entropy and exergy are very different. Petela determined that the exergy of isotropic blackbody radiation is given by the expression {\displaystyle M_{BR}={\frac {cX}{4V}}=\sigma T^{4}(1-{\frac {4}{3}}x+{\frac {1}{3}}x^{4})} where the exergy within the enclosed system is X (Ξ or B), c is the speed of light, V is the volume occupied by the enclosed radiation system or void, T is the material emission temperature, To is the environmental temperature, and x is the dimensionless temperature ratio To/T. However, for decades this result was contested in terms of its relevance to the conversion of radiation fluxes, and in particular, solar radiation. For example, Bejan stated that "Petela's efficiency is no more than a convenient, albeit artificial way, of non-dimensionalizing the calculated work output" and that Petela's efficiency "is not a 'conversion efficiency.'" However, it has been shown that Petela's result represents the exergy of blackbody radiation. This was done by resolving a number of issues, including that of inherent irreversibility, defining the environment in terms of radiation, the effect of inherent emission by the conversion device and the effect of concentrating source radiation.
==== Exergy Flux of Radiation with an Arbitrary Spectrum (including Sunlight) ====
In general, terrestrial solar radiation has an arbitrary non-blackbody spectrum. Ground-level spectrums can vary greatly due to reflection, scattering and absorption in the atmosphere.
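Petela's factor is easy to evaluate numerically. The short sketch below (the ~300 K environment temperature is an assumption for illustration, not from the article) reproduces the roughly 93% exergy content quoted later for extraterrestrial sunlight.

```python
# A sketch checking Petela's blackbody-radiation exergy factor quoted above:
#   M / (sigma * T^4) = 1 - (4/3)*x + (1/3)*x^4, with x = To / T.

def petela_factor(t_emit_k: float, t_env_k: float) -> float:
    x = t_env_k / t_emit_k
    return 1.0 - (4.0 / 3.0) * x + (1.0 / 3.0) * x**4

# Solar emission near 5762 K against an assumed ~300 K environment gives a
# factor of about 0.93, consistent with the ~93% exergy content quoted below
# for extraterrestrial (AM0) sunlight.
print(f"{petela_factor(5762.0, 300.0):.3f}")  # -> 0.931
```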
The emission spectrums of thermal radiation in engineering systems can vary widely as well. In determining the exergy of radiation with an arbitrary spectrum, it must be considered whether reversible or ideal conversion (zero entropy production) is possible. It has been shown that reversible conversion of blackbody radiation fluxes across an infinitesimal temperature difference is theoretically possible. This reversible conversion can be theoretically achieved only because equilibrium can exist between blackbody radiation and matter. Non-blackbody radiation, by contrast, cannot exist in equilibrium with itself, nor with its own emitting material, so it appears likely that the interaction of non-blackbody radiation with matter is always an inherently irreversible process. For example, an enclosed non-blackbody radiation system (such as a void inside a solid mass) is unstable and will spontaneously equilibrate to blackbody radiation unless the enclosure is perfectly reflecting (i.e., unless there is no thermal interaction of the radiation with its enclosure, which is not possible in actual, or real, non-ideal systems). Consequently, a cavity initially devoid of thermal radiation inside a non-blackbody material will spontaneously and rapidly (due to the high velocity of the radiation), through a series of absorption and emission interactions, become filled with blackbody radiation rather than non-blackbody radiation. The approaches by Petela and Karlsson both implicitly assume that reversible conversion of non-blackbody radiation is theoretically possible, without addressing or considering the issue. Exergy is not a property of the system alone; it is a property of both the system and its environment. Thus, it is of key importance that non-blackbody radiation cannot exist in equilibrium with matter, indicating that the interaction of non-blackbody radiation with matter is an inherently irreversible process. The flux (irradiance) of radiation with an arbitrary spectrum, based on the inherent irreversibility of non-blackbody radiation conversion, is given by the expression {\displaystyle M=H-T_{o}\left({\frac {4}{3}}\sigma ^{0.25}H^{0.75}\right)+{\frac {\sigma }{3}}T_{o}^{4}} The exergy flux M is expressed as a function of only the energy flux or irradiance H and the environment temperature To. For graybody radiation, the exergy flux is given by the expression {\displaystyle M_{GR}=\sigma T^{4}(\epsilon -{\frac {4}{3}}x\epsilon ^{0.75}+{\frac {1}{3}}x^{4})} As one would expect, the exergy flux of non-blackbody radiation reduces to the result for blackbody radiation when the emissivity is equal to one. Note that the exergy flux of graybody radiation can be a small fraction of the energy flux. For example, the ratio of exergy flux to energy flux (M/H) for graybody radiation with emissivity ε = 0.50 is equal to 40.0%, for T = 500 °C and To = 27 °C (x = 0.388). That is, a maximum of only 40% of the graybody energy flux can be converted to work in this case (and that energy flux is itself only 50% of the energy flux of a blackbody with the same emission temperature).
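A similar sketch for the graybody expression reproduces the worked numbers above; the function is illustrative only and simply evaluates the formula just given.

```python
# A sketch of the exergy-to-energy ratio M_GR / H for graybody radiation,
# where H = eps * sigma * T^4 and
#   M_GR = sigma * T^4 * (eps - (4/3)*x*eps**0.75 + (1/3)*x**4), x = To / T.

def graybody_exergy_fraction(eps: float, t_emit_k: float, t_env_k: float) -> float:
    x = t_env_k / t_emit_k
    m_over_sigma_t4 = eps - (4.0 / 3.0) * x * eps**0.75 + (1.0 / 3.0) * x**4
    return m_over_sigma_t4 / eps  # divide by H / (sigma * T^4) = eps

# Reproduces the worked example: T = 500 C, To = 27 C (x ~ 0.388).
print(f"{graybody_exergy_fraction(0.50, 773.15, 300.15):.1%}")  # -> 40.0%
print(f"{graybody_exergy_fraction(0.10, 773.15, 300.15):.1%}")  # -> 15.5%
```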
Graybody radiation has a spectrum that looks similar to the blackbody spectrum, but its entropy and exergy flux cannot be accurately approximated as those of blackbody radiation with the same emission temperature. However, they can be reasonably approximated by the entropy flux of blackbody radiation with the same energy flux (a lower emission temperature). Blackbody radiation has the highest entropy-to-energy ratio of all radiation with the same energy flux, but the lowest entropy-to-energy ratio, and the highest exergy content, of all radiation with the same emission temperature. For example, the exergy content of graybody radiation is lower than that of blackbody radiation with the same emission temperature and decreases as emissivity decreases. For the example above with x = 0.388, the exergy flux of the blackbody radiation source is 52.5% of the energy flux, compared to 40.0% for graybody radiation with ε = 0.50, or 15.5% for graybody radiation with ε = 0.10.
==== The Exergy Flux of Sunlight ====
In addition to the production of power directly from sunlight, solar radiation provides most of the exergy for processes on Earth, including processes that sustain living systems directly, as well as all fuels and energy sources that are used for transportation and electric power production (directly or indirectly). The primary exceptions are nuclear fission power plants and geothermal energy (the latter due to natural radioactive decay). Solar energy is, for the most part, thermal radiation from the Sun with an emission temperature near 5762 kelvin, but it also includes small amounts of higher-energy radiation from the fusion reaction or from higher thermal emission temperatures within the Sun. In this sense, the source of most energy on Earth is nuclear in origin. The figure below depicts typical solar radiation spectrums under clear sky conditions for AM0 (extraterrestrial solar radiation), AM1 (terrestrial solar radiation with a solar zenith angle of 0 degrees) and AM4 (terrestrial solar radiation with a solar zenith angle of 75.5 degrees). The solar spectrum at sea level (terrestrial solar spectrum) depends on a number of factors, including the position of the Sun in the sky, atmospheric turbidity, the level of local atmospheric pollution, and the amount and type of cloud cover. These spectrums are for relatively clear air (α = 1.3, β = 0.04) assuming a U.S. standard atmosphere with 20 mm of precipitable water vapor and 3.4 mm of ozone. The figure shows the spectral energy irradiance (W/m²·μm), which does not provide information regarding the directional distribution of the solar radiation. The exergy content of the solar radiation, assuming that it is subtended by the solid angle of the ball of the Sun (no circumsolar), is 93.1%, 92.3% and 90.8%, respectively, for the AM0, AM1 and AM4 spectrums. The exergy content of terrestrial solar radiation is also reduced because of the diffuse component caused by the complex interaction of solar radiation, originally in a very small solid-angle beam, with material in the Earth's atmosphere. The characteristics and magnitude of diffuse terrestrial solar radiation depend on a number of factors, as mentioned, including the position of the Sun in the sky, atmospheric turbidity, the level of local atmospheric pollution, and the amount and type of cloud cover.
Solar radiation under clear sky conditions exhibits a maximum intensity towards the Sun (circumsolar radiation), but also exhibits an increase in intensity towards the horizon (horizon brightening). In contrast, for opaque overcast skies the solar radiation can be completely diffuse, with a maximum intensity in the direction of the zenith, monotonically decreasing towards the horizon. The magnitude of the diffuse component generally varies with frequency, being highest in the ultraviolet region. The dependence of the exergy content on directional distribution can be illustrated by considering, for example, the AM1 and AM4 terrestrial spectrums depicted in the figure, with the following simplified cases of directional distribution:
• For AM1: 80% of the solar radiation is contained in the solid angle subtended by the Sun, 10% is contained and isotropic in a solid angle of 0.008 sr (this field of view includes circumsolar radiation), while the remaining 10% of the solar radiation is diffuse and isotropic in the solid angle 2π sr.
• For AM4: 65% of the solar radiation is contained in the solid angle subtended by the Sun, 20% of the solar radiation is contained and isotropic in a solid angle of 0.008 sr, while the remaining 15% of the solar radiation is diffuse and isotropic in the solid angle 2π sr.
Note that when the Sun is low in the sky the diffuse component can be the dominant part of the incident solar radiation. For these cases of directional distribution, the exergy content of the terrestrial solar radiation for the AM1 and AM4 spectrums depicted is 80.8% and 74.0%, respectively. From these sample calculations it is evident that the exergy content of terrestrial solar radiation is strongly dependent on the directional distribution of the radiation. This result is interesting because one might expect the performance of a conversion device to depend on the incoming rate of photons and their spectral distribution, but not on the directional distribution of the incoming photons. However, for a given incoming flux of photons with a certain spectral distribution, the entropy (level of disorder) is higher the more diffuse the directional distribution. From the second law of thermodynamics, the incoming entropy of the solar radiation cannot be destroyed, and it consequently reduces the maximum work output that can be obtained by a conversion device.
=== Chemical exergy ===
Similar to thermomechanical exergy, chemical exergy depends on the temperature and pressure of a system as well as on its composition. The key difference in evaluating chemical exergy versus thermomechanical exergy is that thermomechanical exergy does not take into account the difference between the chemical composition of the system and that of the environment. If the temperature, pressure or composition of a system differs from the environment's state, then the overall system will have exergy. The definition of chemical exergy resembles the standard definition of thermomechanical exergy, but with a few differences. Chemical exergy is defined as the maximum work that can be obtained when the considered system is brought into reaction with reference substances present in the environment. Defining the exergy reference environment is one of the most vital parts of analyzing chemical exergy. In general, the environment is defined as the composition of air at 25 °C and 1 atm of pressure. At these conditions, air consists of N2 = 75.67%, O2 = 20.35%, H2O(g) = 3.12%, CO2 = 0.03% and other gases = 0.83%.
These molar fractions will become of use when applying Equation 8 below. CaHbOc is the substance entering a system for which one wants to find the maximum theoretical work. Using the following equations, one can calculate the chemical exergy of the substance in a given system. Below, Equation 9 uses the Gibbs function of the applicable element or compound to calculate the chemical exergy. Equation 10 is similar but uses standard molar chemical exergy, which scientists have determined based on several criteria, including the ambient temperature and pressure at which a system is being analyzed and the concentration of the most common components. These values can be found in thermodynamic books or in online tables.
==== Important equations ====
where: {\displaystyle {\bar {g}}_{x}} is the Gibbs function of the specific substance in the system at {\displaystyle \left(T_{0},p_{0}\right)} ({\displaystyle {\bar {g}}_{F}} refers to the substance that is entering the system), {\displaystyle {\bar {R}}} is the universal gas constant (8.314462 J/(mol·K)), {\displaystyle T_{0}} is the temperature at which the system is being evaluated, in absolute temperature, and {\displaystyle y_{x}^{e}} is the molar fraction of the given substance in the environment, i.e. air; and where {\displaystyle {\bar {e}}_{x}^{ch}} is the standard molar chemical exergy taken from a table for the specific conditions at which the system is being evaluated. Equation 10 is more commonly used due to the simplicity of only having to look up the standard chemical exergy for given substances. Using a standard table works well for most cases: even if the environmental conditions vary slightly, the difference is most likely negligible.
==== Total exergy ====
After finding the chemical exergy in a given system, one can find the total exergy by adding it to the thermomechanical exergy. Depending on the situation, the amount of chemical exergy added can be very small. If the system being evaluated involves combustion, the amount of chemical exergy is very large and necessary to find the total exergy of the system.
=== Irreversibility ===
Irreversibility accounts for the amount of exergy destroyed in a closed system, or in other words, the wasted work potential. This is also called dissipated energy. For highly efficient systems, the value of I is low, and vice versa. The equation to calculate the irreversibility of a closed system, as it relates to the exergy of that system, is as follows: {\displaystyle I=T_{0}S_{\text{gen}}} where {\displaystyle S_{\text{gen}}}, also denoted by Π, is the entropy generated by processes within the system. If {\displaystyle I>0} then there are irreversibilities present in the system. If {\displaystyle I=0} then there are no irreversibilities present in the system. The value of I, the irreversibility, cannot be negative, as this would imply entropy destruction, a direct violation of the second law of thermodynamics. Exergy analysis also relates the actual work of a work-producing device to the maximum work that could be obtained in the reversible or ideal process: That is, the irreversibility is the ideal maximum work output minus the actual work production. Whereas, for a work-consuming device such as refrigeration or a heat pump, the irreversibility is the actual work input minus the ideal minimum work input. The first term on the right-hand side is related to the difference in exergy at the inlet and outlet of the system: where B is also denoted by Ξ or X.
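As a hedged worked example of the chemical exergy calculation described above (the Gibbs-function route of Equation 9), the sketch below evaluates methane, i.e. CaHbOc with a = 1, b = 4, c = 0. The reaction-balance form and the approximate standard Gibbs functions of formation follow common textbook tables rather than this article, so the numbers should be read as indicative only.

```python
# A hedged sketch of fuel chemical exergy via the Gibbs-function balance for
# CaHbOc + (a + b/4 - c/2) O2 -> a CO2 + (b/2) H2O(g), plus the logarithmic
# term built from the environmental molar fractions quoted above.
import math

R_BAR = 8.314  # J/(mol K), universal gas constant
T0 = 298.15    # K, reference temperature (25 C)

# Approximate standard Gibbs functions of formation (J/mol) at (T0, p0),
# from common textbook tables (assumed values, not from this article):
G = {"CH4": -50.5e3, "O2": 0.0, "CO2": -394.4e3, "H2O(g)": -228.6e3}

# Environmental molar fractions from the reference atmosphere quoted above:
Y = {"O2": 0.2035, "CO2": 0.0003, "H2O(g)": 0.0312}

a, b, c = 1, 4, 0             # methane, CH4
nu_o2 = a + b / 4 - c / 2     # moles of O2 consumed per mole of fuel

delta_g = G["CH4"] + nu_o2 * G["O2"] - a * G["CO2"] - (b / 2) * G["H2O(g)"]
log_term = R_BAR * T0 * math.log(
    Y["O2"] ** nu_o2 / (Y["CO2"] ** a * Y["H2O(g)"] ** (b / 2))
)

# -> roughly 830 kJ/mol, near tabulated standard chemical exergies for methane.
print(f"{(delta_g + log_term) / 1e3:.0f} kJ/mol")
```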
For an isolated system there are no heat or work interactions or transfers of exergy between the system and its surroundings. The exergy of an isolated system can therefore only decrease, by a magnitude equal to the irreversibility of that system or process.
== Applications ==
Applying equation (1) to a subsystem yields: This expression applies equally well for theoretical ideals in a wide variety of applications: electrolysis (increase in G), galvanic cells and fuel cells (decrease in G), explosives (decrease in A), heating and refrigeration (exchange of H), motors (decrease in U) and generators (increase in U). Utilization of the exergy concept often requires careful consideration of the choice of reference environment because, as Carnot knew, unlimited reservoirs do not exist in the real world. A system may be maintained at a constant temperature to simulate an unlimited reservoir in the lab or in a factory, but those systems cannot then be isolated from a larger surrounding environment. However, with a proper choice of system boundaries, a reasonable constant reservoir can be imagined. A process sometimes must be compared to "the most realistic impossibility," and this invariably involves a certain amount of guesswork.
=== Engineering applications ===
One goal of energy and exergy methods in engineering is to compute what comes into and out of several possible designs before a design is built. Energy input and output will always balance according to the first law of thermodynamics, or the energy conservation principle. Exergy output will not equal the exergy input for real processes, since a part of the exergy input is always destroyed according to the second law of thermodynamics. After the input and output are calculated, an engineer will often want to select the most efficient process. An energy efficiency, or first-law efficiency, will determine the most efficient process based on wasting as little energy as possible relative to energy inputs. An exergy efficiency, or second-law efficiency, will determine the most efficient process based on wasting and destroying as little available work as possible from a given input of available work, per unit of whatever the desired output is. Exergy has been applied in a number of design applications in order to optimize systems or identify components or subsystems with the greatest potential for improvement. For instance, an exergy analysis of environmental control systems on the International Space Station revealed the oxygen generation assembly to be the subsystem which destroyed the most exergy. Exergy is particularly useful for broad engineering analyses with many systems of varied nature, since it can account for mechanical, electrical, nuclear, chemical, or thermal systems. For this reason, exergy analysis has also been used to optimize the performance of rocket vehicles. Exergy analysis affords additional insight, relative to energy analysis alone, because it incorporates the second law and considers both the system and its relationship with its environment. For example, exergy analysis has been used to compare possible power generation and storage systems on the Moon, since exergy analysis is conducted in reference to the unique environmental operating conditions of a specific application, such as the surface of the Moon. Application of exergy to unit operations in chemical plants was partially responsible for the huge growth of the chemical industry during the 20th century.
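To illustrate the difference between the two efficiencies, consider electric resistance space heating, a standard textbook example (my choice, not the article's): its first-law efficiency is near 100%, since essentially all electrical energy becomes heat in the room, while its second-law efficiency is only the Carnot fraction of the delivered heat, taking the outdoor environment as the reference.

```python
# A sketch contrasting first-law and second-law efficiency for electric
# resistance heating. Electrical work is pure exergy; heat delivered at room
# temperature t_room carries only the fraction (1 - t_out / t_room) of exergy.

def second_law_efficiency_heater(t_room_k: float, t_out_k: float) -> float:
    """Exergy of delivered heat per unit of electrical exergy input."""
    return 1.0 - t_out_k / t_room_k

# Room at 21 C (294 K), outdoors at 0 C (273 K): the exergy view is harsh.
print(f"{second_law_efficiency_heater(294.0, 273.0):.1%}")  # -> ~7%
```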
As a simple example of exergy, air at atmospheric conditions of temperature, pressure, and composition contains energy but no exergy when it is chosen as the thermodynamic reference state known as ambient. Individual processes on Earth, such as combustion in a power plant, often eventually result in products that are incorporated into the atmosphere, so defining this reference state for exergy is useful even though the atmosphere itself is not at equilibrium and is full of long- and short-term variations. If standard ambient conditions are used for calculations during chemical plant operation when the actual weather is very cold or hot, then certain parts of a chemical plant might seem to have an exergy efficiency of greater than 100%. Without taking into account the non-standard atmospheric temperature variation, these calculations can give the impression of a perpetual motion machine. Using actual conditions will give actual values, but standard ambient conditions are useful for initial design calculations.
=== Applications in natural resource utilization ===
In recent decades, utilization of exergy has spread outside of physics and engineering to the fields of industrial ecology, ecological economics, systems ecology, and energetics. Defining where one field ends and the next begins is a matter of semantics, but applications of exergy can be placed into rigid categories. After the milestone work of Jan Szargut, who emphasized the relation between exergy and availability, it is worth recalling "Exergy, Ecology and Democracy" by Göran Wall, a short essay which evidences the strict relation between exergy destruction and environmental and social disruption. From this activity has derived a fundamental research activity in ecological economics and environmental accounting, which performs exergy-cost analyses in order to evaluate the impact of human activity on the current and future natural environment. As with ambient air, this often requires the unrealistic substitution of properties from a natural environment in place of the reference state environment of Carnot. For example, ecologists and others have developed reference conditions for the ocean and for the Earth's crust. Exergy values for human activity using this information can be useful for comparing policy alternatives based on the efficiency of utilizing natural resources to perform work. Typical questions that may be answered are: Does the human production of one unit of an economic good by method A utilize more of a resource's exergy than by method B? Does the human production of economic good A utilize more of a resource's exergy than the production of good B? Does the human production of economic good A utilize a resource's exergy more efficiently than the production of good B? There has been some progress in standardizing and applying these methods. Measuring exergy requires the evaluation of a system's reference state environment. With respect to the applications of exergy to natural resource utilization, the process of quantifying a system requires the assignment of value (both utilized and potential) to resources that are not always easily dissected into typical cost-benefit terms. However, to fully realize the potential of a system to do work, it is becoming increasingly imperative to understand the exergetic potential of natural resources, and how human interference alters this potential.
Referencing the inherent qualities of a system in place of a reference state environment is the most direct way that ecologists determine the exergy of a natural resource. Specifically, it is easiest to examine the thermodynamic properties of a system and the reference substances that are acceptable within the reference environment. This determination allows for the assumption of qualities in a natural state: deviation from these levels may indicate a change in the environment caused by outside sources. There are three kinds of reference substances that are acceptable, due to their proliferation on the planet: gases within the atmosphere, solids within the Earth's crust, and molecules or ions in seawater. By understanding these basic models, it is possible to determine the exergy of multiple interacting Earth systems, such as the effects of solar radiation on plant life. These basic categories are utilized as the main components of a reference environment when examining how exergy can be defined through natural resources. Other qualities within a reference state environment include temperature, pressure, and any number of combinations of substances within a defined area. Again, the exergy of a system is determined by the potential of that system to do work, so it is necessary to determine the baseline qualities of a system before it is possible to understand the potential of that system. The thermodynamic value of a resource can be found by multiplying the exergy of the resource by the cost of obtaining the resource and processing it. Today, it is becoming increasingly popular to analyze the environmental impacts of natural resource utilization, especially for energy usage. To understand the ramifications of these practices, exergy is utilized as a tool for determining the impact potential of emissions, fuels, and other sources of energy. Combustion of fossil fuels, for example, is examined with respect to assessing the environmental impacts of burning coal, oil, and natural gas. The current methods for analyzing the emissions from these three products can be compared to the process of determining the exergy of the systems affected; specifically, it is useful to examine these with regard to the reference state environment of gases within the atmosphere. In this way, it is easier to determine how human action is affecting the natural environment.
=== Applications in sustainability ===
In systems ecology, researchers sometimes consider the exergy of the current formation of natural resources from a small number of exergy inputs (usually solar radiation, tidal forces, and geothermal heat). This application not only requires assumptions about reference states, but it also requires assumptions about the real environments of the past that might have been close to those reference states. Can we decide which is the most "realistic impossibility" over such a long period of time when we are only speculating about the reality? For instance, comparing oil exergy to coal exergy using a common reference state would require geothermal exergy inputs to describe the transition from biological material to fossil fuels during millions of years in the Earth's crust, and solar radiation exergy inputs to describe the material's history before then, when it was part of the biosphere. This would need to be carried out mathematically backwards through time, to a presumed era when the oil and coal could be assumed to be receiving the same exergy inputs from these sources.
A speculation about a past environment is different from assigning a reference state with respect to known environments today. Reasonable guesses about real ancient environments may be made, but they are untestable guesses, and so some regard this application as pseudoscience or pseudo-engineering. The field describes this accumulated exergy in a natural resource over time as embodied energy, with units of the "embodied joule" or "emjoule". The important application of this research is to address sustainability issues in a quantitative fashion through a sustainability measurement: Does the human production of an economic good deplete the exergy of Earth's natural resources more quickly than those resources are able to receive exergy? If so, how does this compare to the depletion caused by producing the same good (or a different one) using a different set of natural resources?
=== Exergy and environmental policy ===
Today, environmental policies do not consider exergy as an instrument for a more equitable and effective environmental policy. Recently, exergy analysis has revealed an important fault in current governmental greenhouse gas emission balances, which often do not consider emissions related to international transport, so that the impacts of imports and exports are not accounted for. Some preliminary case studies of the impacts of import/export transportation and of technology have provided evidence for the opportunity of introducing an effective exergy-based taxation, which could reduce the fiscal impact on citizens. In addition, exergy can be a precious instrument for an effective estimation of the path toward the UN Sustainable Development Goals (SDGs).
=== Assigning one thermodynamically obtained value to an economic good ===
A technique proposed by systems ecologists is to consolidate the three exergy inputs described in the last section into the single exergy input of solar radiation, and to express the total input of exergy into an economic good as a solar embodied joule, or sej. (See Emergy.) Exergy inputs from solar, tidal, and geothermal forces all at one time had their origins at the beginning of the Solar System under conditions which could be chosen as an initial reference state, and other speculative reference states could in theory be traced back to that time. With this tool we would be able to answer: What fraction of the total human depletion of the Earth's exergy is caused by the production of a particular economic good? What fraction of the total human and non-human depletion of the Earth's exergy is caused by the production of a particular economic good? No additional thermodynamic laws are required for this idea, and the principles of energetics may confuse many issues for those outside the field. The combination of untestable hypotheses, unfamiliar jargon that contradicts accepted jargon, intense advocacy among its supporters, and some degree of isolation from other disciplines have contributed to this protoscience being regarded by many as a pseudoscience. However, its basic tenets are only a further utilization of the exergy concept.
=== Implications in the development of complex physical systems ===
A common hypothesis in systems ecology is that the design engineer's observation that a greater capital investment is needed to create a process with increased exergy efficiency is actually the economic result of a fundamental law of nature. By this view, exergy is the analogue of economic currency in the natural world.
The analogy to capital investment is the accumulation of exergy into a system over long periods of time, resulting in embodied energy. The analogy of capital investment resulting in a factory with high exergy efficiency is an increase in natural organizational structures with high exergy efficiency. (See Maximum power.) Researchers in these fields describe biological evolution in terms of increases in organism complexity due to the requirement for increased exergy efficiency because of competition for limited sources of exergy. Some biologists have a similar hypothesis. A biological system (or a chemical plant) with a number of intermediate compartments and intermediate reactions is more efficient because the process is divided up into many small substeps, and this is closer to the reversible ideal of an infinite number of infinitesimal substeps. Of course, an excessively large number of intermediate compartments comes at a capital cost that may be too high. Testing this idea in living organisms or ecosystems is impossible for all practical purposes because of the large time scales and small exergy inputs involved for changes to take place. However, if this idea is correct, it would not be a new fundamental law of nature. It would simply be living systems and ecosystems maximizing their exergy efficiency by utilizing laws of thermodynamics developed in the 19th century.
=== Philosophical and cosmological implications ===
Some proponents of utilizing exergy concepts describe them as a biocentric or ecocentric alternative for terms like quality and value. The "deep ecology" movement views economic usage of these terms as an anthropocentric philosophy which should be discarded. A possible universal thermodynamic concept of value or utility appeals to those with an interest in monism. For some, the result of this line of thinking about tracking exergy into the deep past is a restatement of the cosmological argument that the universe was once at equilibrium and an input of exergy from some First Cause created a universe full of available work. Current science is unable to describe the first 10^−43 seconds of the universe (see Timeline of the Big Bang). An external reference state cannot be defined for such an event, and (regardless of its merits) such an argument may be better expressed in terms of entropy.
== Quality of energy types ==
The ratio of exergy to energy in a substance can be considered a measure of energy quality. Forms of energy such as macroscopic kinetic energy, electrical energy, and chemical Gibbs free energy are 100% recoverable as work, and therefore have exergy equal to their energy. However, forms of energy such as radiation and thermal energy cannot be converted completely to work, and have exergy content less than their energy content. The exact proportion of exergy in a substance depends on the amount of entropy relative to the surrounding environment, as determined by the second law of thermodynamics. Exergy is useful when measuring the efficiency of an energy conversion process. The exergetic, or second-law, efficiency is the ratio of the exergy output to the exergy input. This formulation takes into account the quality of the energy, often offering a more accurate and useful analysis than efficiency estimates that use only the first law of thermodynamics. Work can also be extracted from bodies colder than the surroundings: when energy flows into the cold body, work is performed by energy obtained from the large reservoir, the surroundings.
A quantitative treatment of the notion of energy quality rests on the definition of energy. According to the standard definition, energy is a measure of the ability to do work. Work can involve the movement of a mass by a force that results from a transformation of energy. If there is an energy transformation, the second principle of energy flow transformations says that this process must involve the dissipation of some energy as heat. Measuring the amount of heat released is one way of quantifying the energy, or the ability to do work and apply a force over a distance.
=== Exergy of heat available at a temperature ===
The maximal possible conversion of heat to work, or the exergy content of heat, depends on the temperature at which heat is available and the temperature level at which the rejected heat can be disposed of, that is, the temperature of the surroundings. The upper limit for conversion is known as the Carnot efficiency and was discovered by Nicolas Léonard Sadi Carnot in 1824 (see also Carnot heat engine). The Carnot efficiency is {\displaystyle \eta ={\frac {T_{H}-T_{C}}{T_{H}}}=1-{\frac {T_{C}}{T_{H}}}} (15) where TH is the higher temperature and TC is the lower temperature, both as absolute temperatures. From Equation 15 it is clear that in order to maximize efficiency one should maximize TH and minimize TC. The exergy exchanged is then: {\displaystyle B=Q\left(1-{\frac {T_{o}}{T_{source}}}\right)} where Tsource is the temperature of the heat source, and To is the temperature of the surroundings.
=== Connection with economic value ===
Exergy in a sense can be understood as a measure of the value of energy. Since high-exergy energy carriers can be used for more versatile purposes, due to their ability to do more work, they can be postulated to hold more economic value. This can be seen in the prices of energy carriers: high-exergy energy carriers such as electricity tend to be more valuable than low-exergy ones such as various fuels or heat. This has led to the substitution of more valuable high-exergy energy carriers with low-exergy energy carriers, when possible. An example is heating systems, where a higher investment in the heating system allows the use of low-exergy energy sources. Thus, high-exergy content is being substituted with capital investment.
=== Exergy-based Life Cycle Assessment (LCA) ===
The exergy of a system is the maximum useful work possible during a process that brings the system into equilibrium with a heat reservoir. Wall clearly states the relation between exergy analysis and resource accounting. This intuition, confirmed by Dewulf and Sciubba, led to exergo-economic accounting and to methods specifically dedicated to LCA, such as exergetic material input per unit of service (EMIPS). The concept of material input per unit of service (MIPS) is quantified in terms of the second law of thermodynamics, allowing the calculation of both resource input and service output in exergy terms. This exergetic material input per unit of service (EMIPS) has been elaborated for transport technology. The service takes into account not only the total mass to be transported and the total distance, but also the mass per single transport and the delivery time. The applicability of the EMIPS methodology relates specifically to the transport system and allows an effective coupling with life cycle assessment. The exergy analysis according to EMIPS allowed the definition of a precise strategy for reducing the environmental impacts of transport toward more sustainable transport.
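A minimal sketch of the exergy-of-heat relation just given, including the cold-body case mentioned above; the sign convention and sample numbers are assumptions for illustration.

```python
# A sketch of the exergy exchanged with heat, per the expression above:
#   B = Q * (1 - To / T_source), with absolute temperatures.

def exergy_of_heat(q_joules: float, t_source_k: float, t_env_k: float) -> float:
    return q_joules * (1.0 - t_env_k / t_source_k)

print(exergy_of_heat(1000.0, 600.0, 300.0))  # hot source: 500 J of work potential

# For a body colder than the surroundings (T < To) the factor is negative:
# heat flowing *into* the cold body carries exergy *out* of it, which is why
# work can also be extracted using a cold body as the engine's low side.
print(exergy_of_heat(1000.0, 200.0, 300.0))  # -> -500.0
```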
Such a strategy requires the reduction of the weight of vehicles, sustainable styles of driving, reducing the friction of tires, encouraging electric and hybrid vehicles, improving the walking and cycling environment in cities, and enhancing the role of public transport, especially electric rail.
== History ==
=== Carnot ===
In 1824, Sadi Carnot studied the improvements developed for steam engines by James Watt and others. Carnot utilized a purely theoretical perspective for these engines and developed new ideas. He wrote: The question has often been raised whether the motive power of heat is unbounded, whether the possible improvements in steam engines have an assignable limit—a limit which the nature of things will not allow to be passed by any means whatever... In order to consider in the most general way the principle of the production of motion by heat, it must be considered independently of any mechanism or any particular agent. It is necessary to establish principles applicable not only to steam-engines but to all imaginable heat-engines... The production of motion in steam-engines is always accompanied by a circumstance on which we should fix our attention. This circumstance is the re-establishing of equilibrium... Imagine two bodies A and B, kept each at a constant temperature, that of A being higher than that of B. These two bodies, to which we can give or from which we can remove the heat without causing their temperatures to vary, exercise the functions of two unlimited reservoirs...[4] Carnot next described what is now called the Carnot engine, and proved by a thought experiment that any heat engine performing better than this engine would be a perpetual motion machine. Even in the 1820s, there was a long history of science forbidding such devices. According to Carnot, "Such a creation is entirely contrary to ideas now accepted, to the laws of mechanics and of sound physics. It is inadmissible."[4] This description of an upper bound to the work that may be done by an engine was the earliest modern formulation of the second law of thermodynamics. Because it involves no mathematics, it still often serves as the entry point for a modern understanding of both the second law and entropy. Carnot's focus on heat engines, equilibrium, and heat reservoirs is also the best entry point for understanding the closely related concept of exergy. Carnot believed in the incorrect caloric theory of heat that was popular during his time, but his thought experiment nevertheless described a fundamental limit of nature. As kinetic theory replaced caloric theory through the early and mid-19th century (see Timeline of thermodynamics), several scientists added mathematical precision to the first and second laws of thermodynamics and developed the concept of entropy. Carnot's focus on processes at the human scale (above the thermodynamic limit) led to the most universally applicable concepts in physics. Entropy and the second law are applied today in fields ranging from quantum mechanics to physical cosmology.
=== Gibbs ===
In the 1870s, Josiah Willard Gibbs unified a large quantity of 19th-century thermochemistry into one compact theory. Gibbs's theory incorporated the new concept of a chemical potential to cause change when distant from a chemical equilibrium into the older work begun by Carnot in describing thermal and mechanical equilibrium and their potentials for change.
Gibbs's unifying theory resulted in the thermodynamic potential state functions describing differences from thermodynamic equilibrium. In 1873, Gibbs derived the mathematics of "available energy of the body and medium" into the form it has today.[3] (See the equations above.) The physics describing exergy has changed little since that time.
=== Helmholtz ===
In the 1880s, the German scientist Hermann von Helmholtz derived the equation for the maximum work which can be reversibly obtained from a closed system.
=== Rant ===
In 1956, the Yugoslav scholar Zoran Rant proposed the concept of exergy, extending Gibbs's and Helmholtz's work. Since then, continuous development in exergy analysis has seen many applications in thermodynamics, and exergy has been accepted as the maximum theoretical useful work which can be obtained from a system with respect to its environment.
== See also ==
Emergy
Entropy production
Stellar engine
Thermodynamic free energy
World energy supply and consumption
== Notes ==
== References ==
== Further reading ==
Bastianoni, S.; Facchini, A.; Susani, L.; Tiezzi, E. (2007). "Emergy as a function of exergy". Energy. 32 (7): 1158–1162. doi:10.1016/j.energy.2006.08.009.
Stephen Jay Kline (1999). The Low-Down on Entropy and Interpretive Thermodynamics. La Cañada, CA: DCW Industries. ISBN 1928729010.
== External links ==
Energy, Incorporating Exergy, An International Journal
An Annotated Bibliography of Exergy/Availability
Exergy – a useful concept by Göran Wall
Exergetics textbook for self-study by Göran Wall
Exergy by Isidoro Martinez
Exergy calculator by The Exergoecology Portal
Global Exergy Resource Chart
Guidebook to IEA ECBCS Annex 37, Low Exergy Systems for Heating and Cooling of Buildings
Introduction to the Concept of Exergy
Wikipedia/Available_energy
A timeline of events in the history of thermodynamics.
== Before 1800 ==
1593 – Galileo Galilei invents one of the first thermoscopes, also known as the Galileo thermometer
1650 – Otto von Guericke builds the first vacuum pump
1660 – Robert Boyle experimentally discovers Boyle's law, relating the pressure and volume of a gas (published 1662)
1665 – Robert Hooke publishes his book Micrographia, which contains the statement: "Heat being nothing else but a very brisk and vehement agitation of the parts of a body."
1667 – J. J. Becher puts forward a theory of combustion involving combustible earth in his book Physica subterranea (see Phlogiston theory)
1676–1689 – Gottfried Leibniz develops the concept of vis viva, a limited version of the conservation of energy
1679 – Denis Papin designs a steam digester, which inspires the development of the piston-and-cylinder steam engine
1694–1734 – Georg Ernst Stahl names Becher's combustible earth as phlogiston and develops the theory
1698 – Thomas Savery patents an early steam engine
1702 – Guillaume Amontons introduces the concept of absolute zero, based on observations of gases
1738 – Daniel Bernoulli publishes Hydrodynamica, initiating the kinetic theory
1749 – Émilie du Châtelet, in her French translation and commentary on Newton's Philosophiae Naturalis Principia Mathematica, derives the conservation of energy from the first principles of Newtonian mechanics
1761 – Joseph Black discovers that ice absorbs heat without changing its temperature when melting
1772 – Black's student Daniel Rutherford discovers nitrogen, which he calls phlogisticated air, and together they explain the results in terms of the phlogiston theory
1776 – John Smeaton publishes a paper on experiments related to power, work, momentum, and kinetic energy, supporting the conservation of energy
1777 – Carl Wilhelm Scheele distinguishes heat transfer by thermal radiation from that by convection and conduction
1783 – Antoine Lavoisier discovers oxygen and develops an explanation for combustion; in his paper "Réflexions sur le phlogistique", he deprecates the phlogiston theory and proposes a caloric theory
1784 – Jan Ingenhousz describes Brownian motion of charcoal particles on water
1791 – Pierre Prévost shows that all bodies radiate heat, no matter how hot or cold they are
1798 – Count Rumford (Benjamin Thompson) publishes his paper "An Inquiry Concerning the Source of the Heat Which Is Excited by Friction", detailing measurements of the frictional heat generated in boring cannons and developing the idea that heat is a form of kinetic energy; his measurements are inconsistent with caloric theory, but are also sufficiently imprecise as to leave room for doubt
== 1800–1847 ==
1802 – Joseph Louis Gay-Lussac publishes Charles's law, discovered (but unpublished) by Jacques Charles around 1787; this shows the dependency between temperature and volume.
Gay-Lussac also formulates the law relating temperature with pressure (the pressure law, or Gay-Lussac's law)
1804 – Sir John Leslie observes that a matte black surface radiates heat more effectively than a polished surface, suggesting the importance of black-body radiation
1805 – William Hyde Wollaston defends the conservation of energy in On the Force of Percussion
1808 – John Dalton defends caloric theory in A New System of Chemistry and describes how it combines with matter, especially gases; he proposes that the heat capacity of gases varies inversely with atomic weight
1810 – Sir John Leslie freezes water to ice artificially
1813 – Peter Ewart supports the idea of the conservation of energy in his paper On the measure of moving force; the paper strongly influences Dalton and his pupil, James Joule
1819 – Pierre Louis Dulong and Alexis Thérèse Petit give the Dulong–Petit law for the specific heat capacity of a crystal
1820 – John Herapath develops some ideas in the kinetic theory of gases but mistakenly associates temperature with molecular momentum rather than kinetic energy; his work receives little attention other than from Joule
1822 – Joseph Fourier formally introduces the use of dimensions for physical quantities in his Théorie Analytique de la Chaleur
1822 – Marc Seguin writes to John Herschel supporting the conservation of energy and kinetic theory
1824 – Sadi Carnot analyzes the efficiency of steam engines using caloric theory; he develops the notion of a reversible process and, in postulating that no such thing exists in nature, lays the foundation for the second law of thermodynamics, initiating the science of thermodynamics
1827 – Robert Brown discovers the Brownian motion of pollen and dye particles in water
1831 – Macedonio Melloni demonstrates that black-body radiation can be reflected, refracted, and polarised in the same way as light
1834 – Émile Clapeyron popularises Carnot's work through a graphical and analytic formulation; he also combines Boyle's law, Charles's law, and Gay-Lussac's law to produce a combined gas law, PV/T = k
1841 – Julius Robert von Mayer, an amateur scientist, writes a paper on the conservation of energy, but his lack of academic training leads to its rejection
1842 – Mayer makes a connection between work, heat, and the human metabolism based on his observations of blood made while a ship's surgeon; he calculates the mechanical equivalent of heat
1842 – William Robert Grove demonstrates the thermal dissociation of molecules into their constituent atoms, by showing that steam can be disassociated into oxygen and hydrogen, and the process reversed
1843 – John James Waterston fully expounds the kinetic theory of gases, but according to D. Levermore, "there is no evidence that any physical scientist read the book; perhaps it was overlooked because of its misleading title, Thoughts on the Mental Functions"
1843 – James Joule experimentally finds the mechanical equivalent of heat
1845 – Henri Victor Regnault adds Avogadro's law to the combined gas law to produce the ideal gas law, PV = nRT
PV = nRT 1846 – Grove publishes an account of the general theory of the conservation of energy in On The Correlation of Physical Forces 1847 – Hermann von Helmholtz publishes a definitive statement of the conservation of energy, the first law of thermodynamics == 1848–1899 == 1848 – William Thomson extends the concept of absolute zero from gases to all substances 1849 – William John Macquorn Rankine calculates the correct relationship between saturated vapour pressure and temperature using his hypothesis of molecular vortices 1850 – Rankine uses his vortex theory to establish accurate relationships between the temperature, pressure, and density of gases, and expressions for the latent heat of evaporation of a liquid; he accurately predicts the surprising fact that the apparent specific heat of saturated steam will be negative 1850 – Rudolf Clausius coined the term "entropy" (das Wärmegewicht, symbolized S) to denote heat lost or turned into waste. ("Wärmegewicht" translates literally as "heat-weight"; the corresponding English term stems from the Greek τρέπω, "I turn".) 1850 – Clausius gives the first clear joint statement of the first and second law of thermodynamics, abandoning the caloric theory, but preserving Carnot's principle 1851 – Thomson gives an alternative statement of the second law 1852 – Joule and Thomson demonstrate that a rapidly expanding gas cools, later named the Joule–Thomson effect or Joule–Kelvin effect 1854 – Helmholtz puts forward the idea of the heat death of the universe 1854 – Clausius establishes the importance of dQ/T (Clausius's theorem), but does not yet name the quantity 1854 – Rankine introduces his thermodynamic function, later identified as entropy 1856 – August Krönig publishes an account of the kinetic theory of gases, probably after reading Waterston's work 1857 – Clausius gives a modern and compelling account of the kinetic theory of gases in his On the nature of motion called heat 1859 – James Clerk Maxwell discovers the distribution law of molecular velocities 1859 – Gustav Kirchhoff shows that energy emission from a black body is a function of only temperature and frequency 1862 – "Disgregation", a precursor of entropy, was defined in 1862 by Clausius as the magnitude of the degree of separation of molecules of a body 1865 – Clausius introduces the modern macroscopic concept of entropy 1865 – Josef Loschmidt applies Maxwell's theory to estimate the number-density of molecules in gases, given observed gas viscosities. 1867 – Maxwell asks whether Maxwell's demon could reverse irreversible processes 1870 – Clausius proves the scalar virial theorem 1872 – Ludwig Boltzmann states the Boltzmann equation for the temporal development of distribution functions in phase space, and publishes his H-theorem 1873 - Johannes Diderik van der Waals formulates his equation of state 1874 – Thomson formally states the second law of thermodynamics 1876 – Josiah Willard Gibbs publishes the first of two papers (the second appears in 1878) which discuss phase equilibria, statistical ensembles, the free energy as the driving force behind chemical reactions, and chemical thermodynamics in general. 1876 – Loschmidt criticises Boltzmann's H theorem as being incompatible with microscopic reversibility (Loschmidt's paradox). 
1877 – Boltzmann states the relationship between entropy and probability 1879 – Jožef Stefan observes that the total radiant flux from a blackbody is proportional to the fourth power of its temperature and states the Stefan–Boltzmann law 1884 – Boltzmann derives the Stefan–Boltzmann blackbody radiant flux law from thermodynamic considerations 1888 – Henri-Louis Le Chatelier states his principle that the response of a chemical system perturbed from equilibrium will be to counteract the perturbation 1889 – Walther Nernst relates the voltage of electrochemical cells to their chemical thermodynamics via the Nernst equation 1889 – Svante Arrhenius introduces the idea of activation energy for chemical reactions, giving the Arrhenius equation 1893 – Wilhelm Wien discovers the displacement law for a blackbody's maximum specific intensity == 1900–1944 == 1900 – Max Planck suggests that light may be emitted in discrete frequencies, giving his law of black-body radiation 1905 – Albert Einstein, in the first of his miracle year papers, argues that the reality of quanta would explain the photoelectric effect 1905 – Einstein mathematically analyzes Brownian motion as a result of random molecular motion in his paper On the movement of small particles suspended in a stationary liquid demanded by the molecular-kinetic theory of heat 1906 – Nernst presents a formulation of the third law of thermodynamics 1907 – Einstein uses quantum theory to estimate the heat capacity of an Einstein solid 1909 – Constantin Carathéodory develops an axiomatic system of thermodynamics 1910 – Einstein and Marian Smoluchowski find the Einstein–Smoluchowski formula for the attenuation coefficient due to density fluctuations in a gas 1911 – Paul Ehrenfest and Tatjana Ehrenfest–Afanassjewa publish their classical review on the statistical mechanics of Boltzmann, Begriffliche Grundlagen der statistischen Auffassung in der Mechanik 1912 – Peter Debye gives an improved heat capacity estimate by allowing low-frequency phonons 1916 – Sydney Chapman and David Enskog systematically develop the kinetic theory of gases 1916 – Einstein considers the thermodynamics of atomic spectral lines and predicts stimulated emission 1919 – James Jeans discovers that the dynamical constants of motion determine the distribution function for a system of particles 1920 – Meghnad Saha states his ionization equation 1923 – Debye and Erich Hückel publish a statistical treatment of the dissociation of electrolytes 1924 – Satyendra Nath Bose introduces Bose–Einstein statistics, in a paper translated by Einstein 1926 – Enrico Fermi and Paul Dirac introduce Fermi–Dirac statistics 1927 – John von Neumann introduces the density matrix representation, establishing quantum statistical mechanics 1928 – John B. Johnson discovers Johnson noise in a resistor 1928 – Harry Nyquist derives the fluctuation-dissipation theorem, a relationship to explain Johnson noise in a resistor 1931 – Lars Onsager publishes his groundbreaking paper deriving the Onsager reciprocal relations 1935 – Ralph H. 
Fowler invents the title 'the zeroth law of thermodynamics' to summarise postulates made by earlier physicists that thermal equilibrium between systems is a transitive relation 1938 – Anatoly Vlasov proposes the Vlasov equation for a correct dynamical description of ensembles of particles with collective long range interaction 1939 – Nikolay Krylov and Nikolay Bogolyubov give the first consistent microscopic derivation of the Fokker–Planck equation in the single scheme of classical and quantum mechanics 1942 – Joseph L. Doob states his theorem on Gauss–Markov processes 1944 – Lars Onsager gives an analytic solution to the 2-dimensional Ising model, including its phase transition == 1945–present == 1945–1946 – Nikolay Bogoliubov develops a general method for a microscopic derivation of kinetic equations for classical statistical systems using BBGKY hierarchy 1947 – Nikolay Bogoliubov and Kirill Gurov extend this method for a microscopic derivation of kinetic equations for quantum statistical systems 1948 – Claude Elwood Shannon establishes information theory 1957 – Aleksandr Solomonovich Kompaneets derives his Compton scattering Fokker–Planck equation 1957 – Ryogo Kubo derives the first of the Green-Kubo relations for linear transport coefficients 1957 – Edwin T. Jaynes publishes two papers detailing the MaxEnt interpretation of thermodynamics from information theory 1960–1965 – Dmitry Zubarev develops the method of non-equilibrium statistical operator, which becomes a classical tool in the statistical theory of non-equilibrium processes 1972 – Jacob Bekenstein suggests that black holes have an entropy proportional to their surface area 1974 – Stephen Hawking predicts that black holes will radiate particles with a black-body spectrum which can cause black hole evaporation 1977 – Ilya Prigogine wins the Nobel prize for his work on dissipative structures in thermodynamic systems far from equilibrium. The importation and dissipation of energy could reverse the 2nd law of thermodynamics == See also == Timeline of heat engine technology History of physics History of thermodynamics Thermodynamics Timeline of information theory List of textbooks in thermodynamics and statistical mechanics == References ==
Wikipedia/Timeline_of_thermodynamics,_statistical_mechanics,_and_random_processes
Electrical energy is the energy transferred as electric charges move between points with different electric potential, that is, as they move across a potential difference. As electric potential is lost or gained, work is done changing the energy of some system. The amount of work in joules is given by the product of the charge that has moved, in coulombs, and the potential difference that has been crossed, in volts. Electrical energy is usually sold by the kilowatt hour (1 kW·h = 3.6 MJ), which is the product of the power in kilowatts and the running time in hours. Electric utilities measure energy using an electricity meter, which keeps a running total of the electrical energy delivered to a customer. Electric heating is an example of converting electrical energy into thermal energy. The simplest and most common type of electric heater uses electrical resistance to convert the energy: electric charges move as a current through the heater element, which has a potential difference between its ends, and energy is transferred from the charges to the element, increasing the element's temperature and thermal energy as the charges lose potential energy. There are other ways to use electrical energy. == Electricity generation == Electricity generation is the process of generating electrical energy from other forms of energy. The fundamental principle of electricity generation was discovered during the 1820s and early 1830s by the British scientist Michael Faraday. His basic method is still used today: electric current is generated by the movement of a loop of wire, or disc of copper between the poles of a magnet. For electrical utilities, it is the first step in the delivery of electricity to consumers. The other processes, electricity transmission, distribution, and electrical energy storage and recovery using pumped-storage methods, are normally carried out by the electric power industry. Electricity is most often generated at a power station by electromechanical generators, primarily driven by heat engines fueled by chemical combustion or nuclear fission, but also by other means such as the kinetic energy of flowing water and wind. There are many other technologies that can be and are used to generate electricity, such as solar photovoltaics and geothermal power. == References ==
Wikipedia/Electric_energy
In physics, potential energy is the energy of an object or system due to the body's position relative to other objects, or the configuration of its particles. The energy is equal to the work done against any restoring forces, such as gravity or those in a spring. The term potential energy was introduced by the 19th-century Scottish engineer and physicist William Rankine, although it has links to the ancient Greek philosopher Aristotle's concept of potentiality. Common types of potential energy include gravitational potential energy, the elastic potential energy of a deformed spring, and the electric potential energy of an electric charge and an electric field. The unit for energy in the International System of Units (SI) is the joule (symbol J). Potential energy is associated with forces that act on a body in a way that the total work done by these forces on the body depends only on the initial and final positions of the body in space. These forces, whose total work is path independent, are called conservative forces. If the force acting on a body varies over space, then one has a force field; such a field is described by vectors at every point in space, which is, in turn, called a vector field. A conservative vector field can be simply expressed as the gradient of a certain scalar function, called a scalar potential. The potential energy is related to, and can be obtained from, this potential function. == Overview == There are various types of potential energy, each associated with a particular type of force. For example, the work of an elastic force is called elastic potential energy; work of the gravitational force is called gravitational potential energy; work of the Coulomb force is called electric potential energy; work of the nuclear force acting on the baryon charge is called nuclear potential energy; work of intermolecular forces is called intermolecular potential energy. Chemical potential energy, such as the energy stored in fossil fuels, is the work of the Coulomb force during rearrangement of configurations of electrons and nuclei in atoms and molecules. Thermal energy usually has two components: the kinetic energy of random motions of particles and the potential energy of their configuration. Forces derivable from a potential are also called conservative forces. The work done by a conservative force is W = − Δ U , {\displaystyle W=-\Delta U,} where Δ U {\displaystyle \Delta U} is the change in the potential energy associated with the force. The negative sign provides the convention that work done against a force field increases potential energy, while work done by the force field decreases potential energy. Common notations for potential energy are PE, U, V, and Ep. Potential energy is the energy by virtue of an object's position relative to other objects. Potential energy is often associated with restoring forces such as a spring or the force of gravity. The action of stretching a spring or lifting a mass is performed by an external force that works against the force field of the potential. This work is stored in the force field, which is said to be stored as potential energy. If the external force is removed the force field acts on the body to perform the work as it moves the body back to the initial position, reducing the stretch of the spring or causing a body to fall. Consider a ball whose mass is m dropped from height h. The acceleration g of free fall is approximately constant, so the weight force of the ball mg is constant. 
The product of force and displacement gives the work done, which is equal to the gravitational potential energy, thus U g = m g h . {\displaystyle U_{\text{g}}=mgh.} The more formal definition is that potential energy is the energy difference between the energy of an object in a given position and its energy at a reference position. == History == From around 1840 scientists sought to define and understand energy and work. The term "potential energy" was coined by William Rankine, a Scottish engineer and physicist, in 1853 as part of a specific effort to develop terminology. He chose the term as part of the pair "actual" vs "potential", going back to work by Aristotle. In his 1867 discussion of the same topic, Rankine describes potential energy as 'energy of configuration' in contrast to actual energy as 'energy of activity'. Also in 1867, William Thomson introduced "kinetic energy" as the opposite of "potential energy", asserting that all actual energy took the form of 1/2 mv2. Once this hypothesis became widely accepted, the term "actual energy" gradually faded. == Work and potential energy == Potential energy is closely linked with forces. If the work done by a force on a body that moves from A to B does not depend on the path between these points (if the work is done by a conservative force), then the work of this force measured from A assigns a scalar value to every other point in space and defines a scalar potential field. In this case, the force can be defined as the negative of the vector gradient of the potential field. If the work for an applied force is independent of the path, then the work done by the force is evaluated from the start to the end of the trajectory of the point of application. This means that there is a function U(x), called a "potential", that can be evaluated at the two points xA and xB to obtain the work over any trajectory between these two points. It is tradition to define this function with a negative sign so that positive work is a reduction in the potential, that is W = ∫ C F ⋅ d x = U ( x A ) − U ( x B ) {\displaystyle W=\int _{C}\mathbf {F} \cdot d\mathbf {x} =U(\mathbf {x} _{\text{A}})-U(\mathbf {x} _{\text{B}})} where C is the trajectory taken from A to B. Because the work done is independent of the path taken, this expression is true for any trajectory, C, from A to B. The function U(x) is called the potential energy associated with the applied force. Examples of forces that have potential energies are gravity and spring forces. === Derivable from a potential === In this section the relationship between work and potential energy is presented in more detail. The line integral that defines work along curve C takes a special form if the force F is related to a scalar field U′(x) so that F = ∇ U ′ = ( ∂ U ′ ∂ x , ∂ U ′ ∂ y , ∂ U ′ ∂ z ) . {\displaystyle \mathbf {F} ={\nabla U'}=\left({\frac {\partial U'}{\partial x}},{\frac {\partial U'}{\partial y}},{\frac {\partial U'}{\partial z}}\right).} This means that the units of U′ must be those of energy. In this case, work along the curve is given by W = ∫ C F ⋅ d x = ∫ C ∇ U ′ ⋅ d x , {\displaystyle W=\int _{C}\mathbf {F} \cdot d\mathbf {x} =\int _{C}\nabla U'\cdot d\mathbf {x} ,} which can be evaluated using the gradient theorem to obtain W = U ′ ( x B ) − U ′ ( x A ) . 
{\displaystyle W=U'(\mathbf {x} _{\text{B}})-U'(\mathbf {x} _{\text{A}}).} This shows that when forces are derivable from a scalar field, the work of those forces along a curve C is computed by evaluating the scalar field at the start point A and the end point B of the curve. This means the work integral does not depend on the path between A and B and is said to be independent of the path. Potential energy U = −U′(x) is traditionally defined as the negative of this scalar field so that work by the force field decreases potential energy, that is W = U ( x A ) − U ( x B ) . {\displaystyle W=U(\mathbf {x} _{\text{A}})-U(\mathbf {x} _{\text{B}}).} In this case, the application of the del operator to the work function yields, ∇ W = − ∇ U = − ( ∂ U ∂ x , ∂ U ∂ y , ∂ U ∂ z ) = F , {\displaystyle {\nabla W}=-{\nabla U}=-\left({\frac {\partial U}{\partial x}},{\frac {\partial U}{\partial y}},{\frac {\partial U}{\partial z}}\right)=\mathbf {F} ,} and the force F is said to be "derivable from a potential". This also necessarily implies that F must be a conservative vector field. The potential U defines a force F at every point x in space, so the set of forces is called a force field. === Computing potential energy === Given a force field F(x), evaluation of the work integral using the gradient theorem can be used to find the scalar function associated with potential energy. This is done by introducing a parameterized curve γ(t) = r(t) from γ(a) = A to γ(b) = B, and computing, ∫ γ ∇ Φ ( r ) ⋅ d r = ∫ a b ∇ Φ ( r ( t ) ) ⋅ r ′ ( t ) d t , = ∫ a b d d t Φ ( r ( t ) ) d t = Φ ( r ( b ) ) − Φ ( r ( a ) ) = Φ ( x B ) − Φ ( x A ) . {\displaystyle {\begin{aligned}\int _{\gamma }\nabla \Phi (\mathbf {r} )\cdot d\mathbf {r} &=\int _{a}^{b}\nabla \Phi (\mathbf {r} (t))\cdot \mathbf {r} '(t)dt,\\&=\int _{a}^{b}{\frac {d}{dt}}\Phi (\mathbf {r} (t))dt=\Phi (\mathbf {r} (b))-\Phi (\mathbf {r} (a))=\Phi \left(\mathbf {x} _{B}\right)-\Phi \left(\mathbf {x} _{A}\right).\end{aligned}}} For the force field F, let v = dr/dt, then the gradient theorem yields, ∫ γ F ⋅ d r = ∫ a b F ⋅ v d t , = − ∫ a b d d t U ( r ( t ) ) d t = U ( x A ) − U ( x B ) . {\displaystyle {\begin{aligned}\int _{\gamma }\mathbf {F} \cdot d\mathbf {r} &=\int _{a}^{b}\mathbf {F} \cdot \mathbf {v} \,dt,\\&=-\int _{a}^{b}{\frac {d}{dt}}U(\mathbf {r} (t))\,dt=U(\mathbf {x} _{A})-U(\mathbf {x} _{B}).\end{aligned}}} The power applied to a body by a force field is obtained from the gradient of the work, or potential, in the direction of the velocity v of the point of application, that is P ( t ) = − ∇ U ⋅ v = F ⋅ v . {\displaystyle P(t)=-{\nabla U}\cdot \mathbf {v} =\mathbf {F} \cdot \mathbf {v} .} Examples of work that can be computed from potential functions are gravity and spring forces. == Potential energy for near-Earth gravity == For small height changes, gravitational potential energy can be computed using U g = m g h , {\displaystyle U_{\text{g}}=mgh,} where m is the mass in kilograms, g is the local gravitational field (9.8 metres per second squared on Earth), h is the height above a reference level in metres, and U is the energy in joules. In classical physics, gravity exerts a constant downward force F = (0, 0, Fz) on the center of mass of a body moving near the surface of the Earth. The work of gravity on a body moving along a trajectory r(t) = (x(t), y(t), z(t)), such as the track of a roller coaster is calculated using its velocity, v = (vx, vy, vz), to obtain W = ∫ t 1 t 2 F ⋅ v d t = ∫ t 1 t 2 F z v z d t = F z Δ z . 
{\displaystyle W=\int _{t_{1}}^{t_{2}}{\boldsymbol {F}}\cdot {\boldsymbol {v}}\,dt=\int _{t_{1}}^{t_{2}}F_{\text{z}}v_{\text{z}}\,dt=F_{\text{z}}\Delta z.} where the integral of the vertical component of velocity is the vertical distance. The work of gravity depends only on the vertical movement of the curve r(t). == Potential energy for a linear spring == A horizontal spring exerts a force F = (−kx, 0, 0) that is proportional to its deformation in the axial or x-direction. The work of this spring on a body moving along the space curve s(t) = (x(t), y(t), z(t)) is calculated using its velocity, v = (vx, vy, vz), to obtain W = ∫ 0 t F ⋅ v d t = − ∫ 0 t k x v x d t = − ∫ 0 t k x d x d t d t = − ∫ x ( t 0 ) x ( t ) k x d x = − 1 2 k x 2 {\displaystyle W=\int _{0}^{t}\mathbf {F} \cdot \mathbf {v} \,dt=-\int _{0}^{t}kxv_{\text{x}}\,dt=-\int _{0}^{t}kx{\frac {dx}{dt}}dt=-\int _{x(t_{0})}^{x(t)}kx\,dx=-{\frac {1}{2}}kx^{2}} For convenience, consider that contact with the spring occurs at t = 0; then the integral of the product of the distance x and the x-velocity, xvx, is x2/2, so the work done by the spring on the body as it is stretched is −(1/2)kx2. The function U ( x ) = 1 2 k x 2 , {\displaystyle U(x)={\frac {1}{2}}kx^{2},} the negative of this work, is called the potential energy of a linear spring. Elastic potential energy is the potential energy of an elastic object (for example a bow or a catapult) that is deformed under tension or compression (or stressed in formal terminology). It arises as a consequence of a force that tries to restore the object to its original shape, which is most often the electromagnetic force between the atoms and molecules that constitute the object. If the stretch is released, the energy is transformed into kinetic energy. == Potential energy for gravitational forces between two bodies == The gravitational potential function, also known as gravitational potential energy, is: U = − G M m r , {\displaystyle U=-{\frac {GMm}{r}},} where the negative sign follows the convention that work is gained from a loss of potential energy. === Derivation === The gravitational force between two bodies of mass M and m separated by a distance r is given by Newton's law of universal gravitation F = − G M m r 2 r ^ , {\displaystyle \mathbf {F} =-{\frac {GMm}{r^{2}}}\mathbf {\hat {r}} ,} where r ^ {\displaystyle \mathbf {\hat {r}} } is a vector of length 1 pointing from M to m and G is the gravitational constant. Let the mass m move at the velocity v; then the work of gravity on this mass as it moves from position r(t1) to r(t2) is given by W = − ∫ r ( t 1 ) r ( t 2 ) G M m r 3 r ⋅ d r = − ∫ t 1 t 2 G M m r 3 r ⋅ v d t . {\displaystyle W=-\int _{\mathbf {r} (t_{1})}^{\mathbf {r} (t_{2})}{\frac {GMm}{r^{3}}}\mathbf {r} \cdot d\mathbf {r} =-\int _{t_{1}}^{t_{2}}{\frac {GMm}{r^{3}}}\mathbf {r} \cdot \mathbf {v} \,dt.} The position and velocity of the mass m are given by r = r e r , v = r ˙ e r + r θ ˙ e t , {\displaystyle \mathbf {r} =r\mathbf {e} _{r},\qquad \mathbf {v} ={\dot {r}}\mathbf {e} _{\text{r}}+r{\dot {\theta }}\mathbf {e} _{\text{t}},} where er and et are the radial and tangential unit vectors directed relative to the vector from M to m. Use this to simplify the formula for work of gravity to, W = − ∫ t 1 t 2 G m M r 3 ( r e r ) ⋅ ( r ˙ e r + r θ ˙ e t ) d t = − ∫ t 1 t 2 G m M r 3 r r ˙ d t = G M m r ( t 2 ) − G M m r ( t 1 ) . 
{\displaystyle W=-\int _{t_{1}}^{t_{2}}{\frac {GmM}{r^{3}}}(r\mathbf {e} _{\text{r}})\cdot ({\dot {r}}\mathbf {e} _{\text{r}}+r{\dot {\theta }}\mathbf {e} _{\text{t}})\,dt=-\int _{t_{1}}^{t_{2}}{\frac {GmM}{r^{3}}}r{\dot {r}}dt={\frac {GMm}{r(t_{2})}}-{\frac {GMm}{r(t_{1})}}.} This calculation uses the fact that d d t r − 1 = − r − 2 r ˙ = − r ˙ r 2 . {\displaystyle {\frac {d}{dt}}r^{-1}=-r^{-2}{\dot {r}}=-{\frac {\dot {r}}{r^{2}}}.} == Potential energy for electrostatic forces between two bodies == The electrostatic force exerted by a charge Q on another charge q separated by a distance r is given by Coulomb's law F = 1 4 π ε 0 Q q r 2 r ^ , {\displaystyle \mathbf {F} ={\frac {1}{4\pi \varepsilon _{0}}}{\frac {Qq}{r^{2}}}\mathbf {\hat {r}} ,} where r ^ {\displaystyle \mathbf {\hat {r}} } is a vector of length 1 pointing from Q to q and ε0 is the vacuum permittivity. The work W required to move q from A to any point B in the electrostatic force field is given by the potential function U ( r ) = 1 4 π ε 0 Q q r . {\displaystyle U(r)={\frac {1}{4\pi \varepsilon _{0}}}{\frac {Qq}{r}}.} == Reference level == The potential energy is a function of the state a system is in, and is defined relative to that for a particular state. This reference state is not always a real state; it may also be a limit, such as with the distances between all bodies tending to infinity, provided that the energy involved in tending to that limit is finite, such as in the case of inverse-square law forces. Any arbitrary reference state could be used; therefore it can be chosen based on convenience. Typically the potential energy of a system depends on the relative positions of its components only, so the reference state can also be expressed in terms of relative positions. == Gravitational potential energy == Gravitational energy is the potential energy associated with gravitational force, as work is required to elevate objects against Earth's gravity. The potential energy due to elevated positions is called gravitational potential energy, and is evidenced by water in an elevated reservoir or kept behind a dam. If an object falls from one point to another point inside a gravitational field, the force of gravity will do positive work on the object, and the gravitational potential energy will decrease by the same amount. Consider a book placed on top of a table. As the book is raised from the floor to the table, some external force works against the gravitational force. If the book falls back to the floor, the "falling" energy the book receives is provided by the gravitational force. Thus, if the book falls off the table, this potential energy goes to accelerate the mass of the book and is converted into kinetic energy. When the book hits the floor this kinetic energy is converted into heat, deformation, and sound by the impact. The factors that affect an object's gravitational potential energy are its height relative to some reference point, its mass, and the strength of the gravitational field it is in. Thus, a book lying on a table has less gravitational potential energy than the same book on top of a taller cupboard and less gravitational potential energy than a heavier book lying on the same table. An object at a certain height above the Moon's surface has less gravitational potential energy than at the same height above the Earth's surface because the Moon's gravity is weaker. 
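These comparisons can be made concrete with the near-surface formula U = mgh used in this article. A minimal sketch in Python; the function name, the book masses, and the lunar surface gravity figure of roughly 1.62 m/s2 are illustrative assumptions, not values from the text:

# Gravitational potential energy near a body's surface, U = m*g*h.
# Illustrative sketch; gravitational_pe and the chosen values are
# assumptions for this example, not part of the article.

def gravitational_pe(mass_kg: float, g: float, height_m: float) -> float:
    """Potential energy in joules relative to the reference level h = 0."""
    return mass_kg * g * height_m

G_EARTH = 9.8   # m/s^2, the near-surface value used in the article
G_MOON = 1.62   # m/s^2, approximate lunar surface gravity (assumed)

book = 1.5  # kg, a hypothetical book

# Same book, same 1 m height: weaker gravity means less potential energy.
print(gravitational_pe(book, G_EARTH, 1.0))  # 14.7 J on Earth
print(gravitational_pe(book, G_MOON, 1.0))   # ~2.43 J on the Moon

# A heavier book on the same table stores more potential energy.
print(gravitational_pe(3.0, G_EARTH, 1.0))   # 29.4 J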
"Height" in the common sense of the term cannot be used for gravitational potential energy calculations when gravity is not assumed to be a constant. The following sections provide more detail. === Local approximation === The strength of a gravitational field varies with location. However, when the change of distance is small in relation to the distances from the center of the source of the gravitational field, this variation in field strength is negligible and we can assume that the force of gravity on a particular object is constant. Near the surface of the Earth, for example, we assume that the acceleration due to gravity is a constant g = 9.8 m/s2 (standard gravity). In this case, a simple expression for gravitational potential energy can be derived using the W = Fd equation for work, and the equation W F = − Δ U F . {\displaystyle W_{\text{F}}=-\Delta U_{\text{F}}.} The amount of gravitational potential energy held by an elevated object is equal to the work done against gravity in lifting it. The work done equals the force required to move it upward multiplied with the vertical distance it is moved (remember W = Fd). The upward force required while moving at a constant velocity is equal to the weight, mg, of an object, so the work done in lifting it through a height h is the product mgh. Thus, when accounting only for mass, gravity, and altitude, the equation is: U = m g h {\displaystyle U=mgh} where U is the potential energy of the object relative to its being on the Earth's surface, m is the mass of the object, g is the acceleration due to gravity, and h is the altitude of the object. Hence, the potential difference is Δ U = m g Δ h . {\displaystyle \Delta U=mg\Delta h.} === General formula === However, over large variations in distance, the approximation that g is constant is no longer valid, and we have to use calculus and the general mathematical definition of work to determine gravitational potential energy. For the computation of the potential energy, we can integrate the gravitational force, whose magnitude is given by Newton's law of gravitation, with respect to the distance r between the two bodies. Using that definition, the gravitational potential energy of a system of masses m1 and M2 at a distance r using the Newtonian constant of gravitation G is U = − G m 1 M 2 r + K , {\displaystyle U=-G{\frac {m_{1}M_{2}}{r}}+K,} where K is an arbitrary constant dependent on the choice of datum from which potential is measured. Choosing the convention that K = 0 (i.e. in relation to a point at infinity) makes calculations simpler, albeit at the cost of making U negative; for why this is physically reasonable, see below. Given this formula for U, the total potential energy of a system of n bodies is found by summing, for all n ( n − 1 ) 2 {\textstyle {\frac {n(n-1)}{2}}} pairs of two bodies, the potential energy of the system of those two bodies. Considering the system of bodies as the combined set of small particles the bodies consist of, and applying the previous on the particle level we get the negative gravitational binding energy. This potential energy is more strongly negative than the total potential energy of the system of bodies as such since it also includes the negative gravitational binding energy of each body. The potential energy of the system of bodies as such is the negative of the energy needed to separate the bodies from each other to infinity, while the gravitational binding energy is the energy needed to separate all particles from each other to infinity. 
For a point mass m in the field of several bodies, the pairwise contributions add; with two sources M1 and M2 at distances r1 and r2, U = − m ( G M 1 r 1 + G M 2 r 2 ) {\displaystyle U=-m\left(G{\frac {M_{1}}{r_{1}}}+G{\frac {M_{2}}{r_{2}}}\right)} therefore, U = − m ∑ G M r . {\displaystyle U=-m\sum G{\frac {M}{r}}.} === Negative gravitational energy === As with all potential energies, only differences in gravitational potential energy matter for most physical purposes, and the choice of zero point is arbitrary. Given that there is no reasonable criterion for preferring one particular finite r over another, there seem to be only two reasonable choices for the distance at which U becomes zero: r = 0 {\displaystyle r=0} and r = ∞ {\displaystyle r=\infty } . The choice of U = 0 {\displaystyle U=0} at infinity may seem peculiar, and the consequence that gravitational energy is always negative may seem counterintuitive, but this choice allows gravitational potential energy values to be finite, albeit negative. The singularity at r = 0 {\displaystyle r=0} in the formula for gravitational potential energy means that the only other apparently reasonable alternative choice of convention, with U = 0 {\displaystyle U=0} for r = 0 {\displaystyle r=0} , would result in potential energy being positive, but infinitely large for all nonzero values of r, and would make calculations involving sums or differences of potential energies impossible within the real number system. Since physicists abhor infinities in their calculations, and r is always non-zero in practice, the choice of U = 0 {\displaystyle U=0} at infinity is by far the preferable choice, even if the idea of negative energy in a gravity well appears to be peculiar at first. The negative value for gravitational energy also has deeper implications that make it seem more reasonable in cosmological calculations where the total energy of the universe can meaningfully be considered; see inflation theory for more on this. === Uses === Gravitational potential energy has a number of practical uses, notably the generation of pumped-storage hydroelectricity. For example, in Dinorwig, Wales, there are two lakes, one at a higher elevation than the other. At times when surplus electricity is not required (and so is comparatively cheap), water is pumped up to the higher lake, thus converting the electrical energy (running the pump) to gravitational potential energy. At times of peak demand for electricity, the water flows back down through electrical generator turbines, converting the potential energy into kinetic energy and then back into electricity. The process is not completely efficient and some of the original energy from the surplus electricity is in fact lost to friction. Gravitational potential energy is also used to power clocks in which falling weights operate the mechanism. It is also used by counterweights for lifting up an elevator, crane, or sash window. Roller coasters are an entertaining way to utilize potential energy: chains are used to move a car up an incline, building up gravitational potential energy, which is then converted into kinetic energy as the car falls. Another practical use is utilizing gravitational potential energy to descend (perhaps coast) downhill in transportation such as the descent of an automobile, truck, railroad train, bicycle, airplane, or fluid in a pipeline. In some cases the kinetic energy obtained from the potential energy of descent may be used to start ascending the next grade such as what happens when a road is undulating and has frequent dips. 
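Returning to the pumped-storage scheme described above, a rough numeric sketch; the head, volume, and round-trip efficiency figures are invented for illustration and are not Dinorwig's actual specifications:

# Energy stored by pumping water uphill: E = m*g*h, with some of it
# lost to friction and other inefficiencies on the round trip.
# All numbers below are illustrative assumptions.

g = 9.8                       # m/s^2
height = 500.0                # m of head between the two lakes (assumed)
volume = 1.0e6                # m^3 of water pumped (assumed)
density = 1000.0              # kg/m^3 for water
round_trip_efficiency = 0.75  # fraction recovered as electricity (assumed)

mass = volume * density                     # kg of water moved
stored = mass * g * height                  # J of gravitational PE
recovered = stored * round_trip_efficiency  # J delivered back to the grid

print(stored / 3.6e9)     # stored energy in MW*h (1 MW*h = 3.6e9 J), ~1361
print(recovered / 3.6e9)  # recoverable energy in MW*h, ~1021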
The commercialization of stored energy (in the form of rail cars raised to higher elevations) that is then converted to electrical energy when needed by an electrical grid is being undertaken in the United States in a system called Advanced Rail Energy Storage (ARES). == Chemical potential energy == Chemical potential energy is a form of potential energy related to the structural arrangement of atoms or molecules. This arrangement may be the result of chemical bonds within a molecule or otherwise. Chemical energy of a chemical substance can be transformed to other forms of energy by a chemical reaction. As an example, when a fuel is burned the chemical energy is converted to heat; the same is the case with the digestion of food metabolized in a biological organism. Green plants transform solar energy to chemical energy through the process known as photosynthesis, and electrical energy can be converted to chemical energy through electrochemical reactions. The similar term chemical potential is used to indicate the potential of a substance to undergo a change of configuration, be it in the form of a chemical reaction, spatial transport, particle exchange with a reservoir, etc. == Electric potential energy == An object can have potential energy by virtue of its electric charge and the several forces related to its presence. There are two main types of this kind of potential energy: electrostatic potential energy and electrodynamic potential energy (also sometimes called magnetic potential energy). === Electrostatic potential energy === Electrostatic potential energy between two bodies in space is obtained from the force exerted by a charge Q on another charge q, which is given by F e = 1 4 π ε 0 Q q r 2 r ^ , {\displaystyle \mathbf {F} _{e}={\frac {1}{4\pi \varepsilon _{0}}}{\frac {Qq}{r^{2}}}\mathbf {\hat {r}} ,} where r ^ {\displaystyle \mathbf {\hat {r}} } is a vector of length 1 pointing from Q to q and ε0 is the vacuum permittivity; this is the same form of Coulomb's law given earlier. If the electric charge of an object can be assumed to be at rest, then it has potential energy due to its position relative to other charged objects. The electrostatic potential energy is the energy of an electrically charged particle (at rest) in an electric field. It is defined as the work that must be done to move it from an infinite distance away to its present location, adjusted for non-electrical forces on the object. This energy will generally be non-zero if there is another electrically charged object nearby. The work W required to move q from A to any point B in the electrostatic force field is given by Δ U A B ( r ) = − ∫ A B F e ⋅ d r {\displaystyle \Delta U_{AB}({\mathbf {r} })=-\int _{A}^{B}\mathbf {F_{e}} \cdot d\mathbf {r} } typically given in joules (J). A related quantity called electric potential (commonly denoted with a V for voltage) is equal to the electric potential energy per unit charge. === Magnetic potential energy === A magnetic moment μ {\displaystyle {\boldsymbol {\mu }}} in an externally produced magnetic B-field B has potential energy U = − μ ⋅ B . {\displaystyle U=-{\boldsymbol {\mu }}\cdot \mathbf {B} .} The energy of a magnetization M in a field is U = − 1 2 ∫ M ⋅ B d V , {\displaystyle U=-{\frac {1}{2}}\int \mathbf {M} \cdot \mathbf {B} \,dV,} where the integral can be over all space or, equivalently, where M is nonzero. Magnetic potential energy is the form of energy related not only to the distance between magnetic materials, but also to the orientation, or alignment, of those materials within the field, as the sketch below illustrates. 
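A minimal sketch of the dipole formula U = −μ·B quoted above; the vector values are illustrative assumptions, chosen to resemble a compass-like dipole in an Earth-strength field:

# Magnetic potential energy of a dipole: U = -mu . B
# Illustrative values only; dot and dipole_energy are assumed names.

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def dipole_energy(mu, B):
    """U = -mu.B, lowest when the moment is aligned with the field."""
    return -dot(mu, B)

B = (0.0, 2.0e-5, 0.0)           # tesla, roughly Earth-strength (assumed)
mu_aligned = (0.0, 0.05, 0.0)    # A*m^2, moment pointing along B
mu_opposed = (0.0, -0.05, 0.0)   # A*m^2, moment pointing against B

print(dipole_energy(mu_aligned, B))  # -1e-6 J: minimum energy
print(dipole_energy(mu_opposed, B))  # +1e-6 J: maximum energy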
For example, the needle of a compass has the lowest magnetic potential energy when it is aligned with the north and south poles of the Earth's magnetic field. If the needle is moved by an outside force, torque is exerted on the magnetic dipole of the needle by the Earth's magnetic field, causing it to move back into alignment. The magnetic potential energy of the needle is highest when its field is in the opposite direction to the Earth's magnetic field. Two magnets will have potential energy in relation to each other and the distance between them, but this also depends on their orientation. If the opposite poles are held apart, the potential energy will be higher the further they are apart and lower the closer they are. Conversely, like poles will have the highest potential energy when forced together, and the lowest when they spring apart. == Nuclear potential energy == Nuclear potential energy is the potential energy of the particles inside an atomic nucleus. The nuclear particles are bound together by the strong nuclear force. Their rest mass provides the potential energy for certain kinds of radioactive decay, such as beta decay. Nuclear particles like protons and neutrons are not destroyed in fission and fusion processes, but collections of them can have less mass than if they were individually free, in which case this mass difference can be liberated as heat and radiation in nuclear reactions. The process of hydrogen fusion occurring in the Sun is an example of this form of energy release – every second, about 600 million tonnes of hydrogen nuclei are fused into helium nuclei, with a loss of about 4 million tonnes of mass. This energy, now in the form of kinetic energy and gamma rays, keeps the solar core hot even as electromagnetic radiation carries electromagnetic energy into space. == Forces and potential energy == Potential energy is closely linked with forces. If the work done by a force on a body that moves from A to B does not depend on the path between these points, then the work of this force measured from A assigns a scalar value to every other point in space and defines a scalar potential field. In this case, the force can be defined as the negative of the vector gradient of the potential field. For example, gravity is a conservative force. The associated potential is the gravitational potential, often denoted by ϕ {\displaystyle \phi } or V {\displaystyle V} , corresponding to the energy per unit mass as a function of position. The gravitational potential energy of two particles of mass M and m separated by a distance r is U = − G M m r . {\displaystyle U=-{\frac {GMm}{r}}.} The gravitational potential (specific energy) of the two bodies is ϕ = − ( G M r + G m r ) = − G ( M + m ) r = − G M m μ r = U μ {\displaystyle \phi =-\left({\frac {GM}{r}}+{\frac {Gm}{r}}\right)=-{\frac {G(M+m)}{r}}=-{\frac {GMm}{\mu r}}={\frac {U}{\mu }}} where μ {\displaystyle \mu } is the reduced mass. The work done against gravity by moving an infinitesimal mass from point A with U = a {\displaystyle U=a} to point B with U = b {\displaystyle U=b} is ( b − a ) {\displaystyle (b-a)} and the work done going back the other way is ( a − b ) {\displaystyle (a-b)} so that the total work done in moving from A to B and returning to A is U A → B → A = ( b − a ) + ( a − b ) = 0. {\displaystyle U_{A\to B\to A}=(b-a)+(a-b)=0.} If the potential is redefined at A to be a + c {\displaystyle a+c} and the potential at B to be b + c {\displaystyle b+c} , where c {\displaystyle c} is a constant (i.e. 
c {\displaystyle c} can be any number, positive or negative, but it must be the same at A as it is at B) then the work done going from A to B is U A → B = ( b + c ) − ( a + c ) = b − a {\displaystyle U_{A\to B}=(b+c)-(a+c)=b-a} as before. In practical terms, this means that one can set the zero of U {\displaystyle U} and ϕ {\displaystyle \phi } anywhere one likes. One may set it to be zero at the surface of the Earth, or may find it more convenient to set zero at infinity (as in the expressions given earlier in this section). A conservative force can be expressed in the language of differential geometry as a closed form. As Euclidean space is contractible, its de Rham cohomology vanishes, so every closed form is also an exact form, and can be expressed as the gradient of a scalar field. This gives a mathematical justification of the fact that all conservative forces are gradients of a potential field. == Notes == == References ==
Wikipedia/Nuclear_potential_energy
Basal metabolic rate (BMR) is the rate of energy expenditure per unit time by endothermic animals at rest. It is reported in energy units per unit time, ranging from watts (joules per second) to ml O2/min or joules per hour per kilogram of body mass, J/(h·kg). Proper measurement requires a strict set of criteria to be met. These criteria include being in a physically and psychologically undisturbed state and being in a thermally neutral environment while in the post-absorptive state (i.e., not actively digesting food). In bradymetabolic animals, such as fish and reptiles, the equivalent term standard metabolic rate (SMR) applies. It follows the same criteria as BMR, but requires the documentation of the temperature at which the metabolic rate was measured. This makes BMR a variant of standard metabolic rate measurement that excludes the temperature data, a practice that has led to problems in defining "standard" rates of metabolism for many mammals. Metabolism comprises the processes that the body needs to function. Basal metabolic rate is the amount of energy per unit of time that a person needs to keep the body functioning at rest. Some of those processes are breathing, blood circulation, controlling body temperature, cell growth, brain and nerve function, and contraction of muscles. Basal metabolic rate affects the rate at which a person burns calories and ultimately whether that individual maintains, gains, or loses weight. The basal metabolic rate accounts for about 70% of the daily calorie expenditure by individuals. It is influenced by several factors. In humans, BMR typically declines by 1–2% per decade after age 20, mostly due to loss of fat-free mass, although the variability between individuals is high. == Description == The body's generation of heat is known as thermogenesis and it can be measured to determine the amount of energy expended. BMR generally decreases with age, and with the decrease in lean body mass (as may happen with aging). Increasing muscle mass has the effect of increasing BMR. Aerobic fitness level, a product of cardiovascular exercise, while previously thought to have an effect on BMR, was shown in the 1990s not to correlate with BMR when adjusted for fat-free body mass. But anaerobic exercise does increase resting energy consumption (see "aerobic vs. anaerobic exercise"). Illness, previously consumed food and beverages, environmental temperature, and stress levels can affect one's overall energy expenditure as well as one's BMR. BMR is measured under very restrictive circumstances when a person is awake. An accurate BMR measurement requires that the person's sympathetic nervous system not be stimulated, a condition which requires complete rest. A more common measurement, which uses less strict criteria, is resting metabolic rate (RMR). BMR may be measured by gas analysis through either direct or indirect calorimetry, though a rough estimation can be acquired through an equation using age, sex, height, and weight. Studies of energy metabolism using both methods provide convincing evidence for the validity of the respiratory quotient (RQ), which measures the inherent composition and utilization of carbohydrates, fats and proteins as they are converted to energy substrate units that can be used by the body as energy. == Phenotypic flexibility == BMR is a flexible trait (it can be reversibly adjusted within individuals), with, for example, lower temperatures generally resulting in higher basal metabolic rates for both birds and rodents. 
There are two models to explain how BMR changes in response to temperature: the variable maximum model (VMM) and the variable fraction model (VFM). The VMM states that the summit metabolism (or the maximum metabolic rate in response to the cold) increases during the winter, and that the sustained metabolism (or the metabolic rate that can be indefinitely sustained) remains a constant fraction of the former. The VFM says that the summit metabolism does not change, but that the sustained metabolism is a larger fraction of it. The VMM is supported in mammals, and, when using whole-body rates, passerine birds. The VFM is supported in studies of passerine birds using mass-specific metabolic rates (or metabolic rates per unit of mass). This latter measurement has been criticized by Eric Liknes, Sarah Scott, and David Swanson, who say that mass-specific metabolic rates are inconsistent seasonally. In addition to adjusting to temperature, BMR also may adjust before annual migration cycles. The red knot (ssp. islandica) increases its BMR by about 40% before migrating northward. This is because of the energetic demand of long-distance flights. The increase is likely primarily due to increased mass in organs related to flight. The end destination of migrants affects their BMR: yellow-rumped warblers migrating northward were found to have a 31% higher BMR than those migrating southward. In humans, BMR is directly proportional to a person's lean body mass. In other words, the more lean body mass a person has, the higher their BMR; but BMR is also affected by acute illnesses and increases with conditions like burns, fractures, infections, fevers, etc. In menstruating females, BMR varies to some extent with the phases of their menstrual cycle. Due to the increase in progesterone, BMR rises at the start of the luteal phase and stays at its highest until this phase ends. Research findings differ as to how much of an increase usually occurs. Early, small-sample studies found various figures, such as a 6% higher postovulatory sleep metabolism, a 7% to 15% higher 24-hour expenditure following ovulation, and a luteal phase BMR increase of up to 12%. A study by the American Society of Clinical Nutrition found that an experimental group of female volunteers had an 11.5% average increase in 24-hour energy expenditure in the two weeks following ovulation, with a range of 8% to 16%. This group was measured via simultaneous direct and indirect calorimetry, with standardized daily meals and a sedentary schedule, in order to prevent the increase from being confounded by changes in food intake or activity level. A 2011 study conducted by the Mandya Institute of Medical Sciences found no significant difference in BMR between a woman's follicular phase and her menstrual phase, although the calories burned per hour were significantly higher, by up to 18%, during the luteal phase. Increased state anxiety (stress level) also temporarily increased BMR. == Physiology == The early work of the scientists J. Arthur Harris and Francis G. Benedict showed that approximate values for BMR could be derived using body surface area (computed from height and weight), age, and sex, along with the oxygen and carbon dioxide measures taken from calorimetry. Studies also showed that by eliminating the sex differences that occur with the accumulation of adipose tissue by expressing metabolic rate per unit of "fat-free" or lean body mass, the values between sexes for basal metabolism are essentially the same. 
Exercise physiology textbooks have tables to show the conversion of height and body surface area as they relate to weight and basal metabolic values. The primary organ responsible for regulating metabolism is the hypothalamus. The hypothalamus is located on the diencephalon and forms the floor and part of the lateral walls of the third ventricle of the cerebrum. The chief functions of the hypothalamus are:
control and integration of activities of the autonomic nervous system (ANS) – the ANS regulates contraction of smooth muscle and cardiac muscle, along with secretions of many endocrine organs such as the thyroid gland (associated with many metabolic disorders); through the ANS, the hypothalamus is the main regulator of visceral activities, such as heart rate, movement of food through the gastrointestinal tract, and contraction of the urinary bladder
production and regulation of feelings of rage and aggression
regulation of body temperature
regulation of food intake, through two centers: the feeding center or hunger center is responsible for the sensations that cause us to seek food; when sufficient food or substrates have been received and leptin is high, the satiety center is stimulated and sends impulses that inhibit the feeding center; when insufficient food is present in the stomach and ghrelin levels are high, receptors in the hypothalamus initiate the sense of hunger
The thirst center operates similarly when certain cells in the hypothalamus are stimulated by the rising osmotic pressure of the extracellular fluid. If thirst is satisfied, osmotic pressure decreases. All of these functions taken together form a survival mechanism that causes us to sustain the body processes that BMR measures. === BMR estimation formulas === Several equations to predict the number of calories required by humans have been published from the early 20th century to the 21st century. In each of the formulas below: P is total heat production at complete rest, m is mass (kg), h is height (cm), a is age (years).
The original Harris–Benedict equation
Historically, the most notable formula was the Harris–Benedict equation, which was published in 1919: for men, P = ( 13.7516 m 1 kg + 5.0033 h 1 cm − 6.7550 a 1 year + 66.4730 ) kcal day , {\displaystyle P=\left({\frac {13.7516m}{1~{\text{kg}}}}+{\frac {5.0033h}{1~{\text{cm}}}}-{\frac {6.7550a}{1~{\text{year}}}}+66.4730\right){\frac {\text{kcal}}{\text{day}}},} for women, P = ( 9.5634 m 1 kg + 1.8496 h 1 cm − 4.6756 a 1 year + 655.0955 ) kcal day . {\displaystyle P=\left({\frac {9.5634m}{1~{\text{kg}}}}+{\frac {1.8496h}{1~{\text{cm}}}}-{\frac {4.6756a}{1~{\text{year}}}}+655.0955\right){\frac {\text{kcal}}{\text{day}}}.} The difference in BMR for men and women is mainly due to differences in body mass. For example, a 55-year-old woman weighing 130 pounds (59 kg) and 66 inches (168 cm) tall would have a BMR of 1,272 kilocalories (5,320 kJ) per day.
The revised Harris–Benedict equation
In 1984, the original Harris–Benedict equations were revised using new data. In comparisons with actual expenditure, the revised equations were found to be more accurate: for men, P = ( 13.397 m 1 kg + 4.799 h 1 cm − 5.677 a 1 year + 88.362 ) kcal day , {\displaystyle P=\left({\frac {13.397m}{1~{\text{kg}}}}+{\frac {4.799h}{1~{\text{cm}}}}-{\frac {5.677a}{1~{\text{year}}}}+88.362\right){\frac {\text{kcal}}{\text{day}}},} for women, P = ( 9.247 m 1 kg + 3.098 h 1 cm − 4.330 a 1 year + 447.593 ) kcal day . 
{\displaystyle P=\left({\frac {9.247m}{1~{\text{kg}}}}+{\frac {3.098h}{1~{\text{cm}}}}-{\frac {4.330a}{1~{\text{year}}}}+447.593\right){\frac {\text{kcal}}{\text{day}}}.} It was the best prediction equation until 1990, when Mifflin et al. introduced the equation:
The Mifflin St Jeor equation
P = ( 10.0 m 1 kg + 6.25 h 1 cm − 5.0 a 1 year + s ) kcal day , {\displaystyle P=\left({\frac {10.0m}{1~{\text{kg}}}}+{\frac {6.25h}{1~{\text{cm}}}}-{\frac {5.0a}{1~{\text{year}}}}+s\right){\frac {\text{kcal}}{\text{day}}},} where s is +5 for males and −161 for females. According to this formula, the woman in the example above has a BMR of 1,204 kilocalories (5,040 kJ) per day. During the last 100 years, lifestyles have changed, and Frankenfield et al. showed the Mifflin St Jeor equation to be about 5% more accurate. These formulas are based on body mass, which does not take into account the difference in metabolic activity between lean body mass and body fat. Other formulas exist which take into account lean body mass, two of which are the Katch–McArdle formula and the Cunningham formula.
The Katch–McArdle formula (resting daily energy expenditure)
The Katch–McArdle formula is used to predict resting daily energy expenditure (RDEE). The Cunningham formula is commonly cited to predict RMR instead of BMR; however, the formulas provided by Katch–McArdle and Cunningham are the same. P = 370 + 21.6 ⋅ ℓ , {\displaystyle P=370+21.6\cdot \ell ,} where ℓ is the lean body mass (LBM in kg): ℓ = m ( 1 − f 100 ) , {\displaystyle \ell =m\left(1-{\frac {f}{100}}\right),} where f is the body fat percentage. According to this formula, if the woman in the example has a body fat percentage of 30%, her resting daily energy expenditure (the authors use the terms basal and resting metabolism interchangeably) would be 1262 kcal per day.
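The estimation formulas above translate directly into code. A minimal sketch; the function names are illustrative, the coefficients are the ones quoted above, and the worked example is the 55-year-old woman, 59 kg and 168 cm:

# BMR estimation formulas quoted above, all returning kcal/day.
# Function names are assumed for this sketch; coefficients are from the text.

def mifflin_st_jeor(mass_kg, height_cm, age_yr, male):
    s = 5 if male else -161
    return 10.0 * mass_kg + 6.25 * height_cm - 5.0 * age_yr + s

def harris_benedict_revised(mass_kg, height_cm, age_yr, male):
    if male:
        return 13.397 * mass_kg + 4.799 * height_cm - 5.677 * age_yr + 88.362
    return 9.247 * mass_kg + 3.098 * height_cm - 4.330 * age_yr + 447.593

def katch_mcardle(mass_kg, body_fat_percent):
    lean_mass = mass_kg * (1 - body_fat_percent / 100)  # LBM in kg
    return 370 + 21.6 * lean_mass

# The woman from the example: 55 years, 59 kg, 168 cm, 30% body fat.
print(mifflin_st_jeor(59, 168, 55, male=False))         # ~1204 kcal/day
print(harris_benedict_revised(59, 168, 55, male=False))  # ~1275 kcal/day
print(katch_mcardle(59, 30))                             # ~1262 kcal/day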
For the BMR, most of the energy is consumed in maintaining fluid levels in tissues through osmoregulation, and only about one-tenth is consumed for mechanical work, such as digestion, heartbeat, and breathing. What enables the Krebs cycle to perform metabolic changes to fats, carbohydrates, and proteins is energy, which can be defined as the ability or capacity to do work. The breakdown of large molecules into smaller molecules, associated with the release of energy, is catabolism. The building-up process is termed anabolism. The breakdown of proteins into amino acids is an example of catabolism, while the formation of proteins from amino acids is an anabolic process. Exergonic reactions are energy-releasing reactions and are generally catabolic. Endergonic reactions require energy and include anabolic reactions and the contraction of muscle. Metabolism is the total of all catabolic, exergonic, anabolic, and endergonic reactions. Adenosine triphosphate (ATP) is the intermediate molecule that couples the exergonic release of energy to the endergonic anabolic reactions used in muscle contraction. It is what causes muscles to work, which requires breakdown during activity, and to rebuild during the rest period, the strengthening phase associated with muscular contraction. ATP is composed of adenine, a nitrogen-containing base, ribose, a five-carbon sugar (collectively called adenosine), and three phosphate groups. ATP is a high-energy molecule because it stores large amounts of energy in the chemical bonds of the two terminal phosphate groups. The breaking of these chemical bonds by hydrolysis provides the energy needed for muscular contraction. === Glucose === Because the ratio of hydrogen to oxygen atoms in all carbohydrates is always the same as that in water, that is, 2 to 1, all of the oxygen consumed by the cells is used to oxidize the carbon in the carbohydrate molecule to form carbon dioxide. Consequently, during the complete oxidation of a glucose molecule, six molecules of carbon dioxide and six molecules of water are produced and six molecules of oxygen are consumed. The overall equation for this reaction is C 6 H 12 O 6 + 6 O 2 ⟶ 6 CO 2 + 6 H 2 O {\displaystyle {\ce {C6H12O6 + 6 O2 -> 6 CO2 + 6 H2O}}} (30–32 ATP molecules produced depending on the type of mitochondrial shuttle, 5–5.33 ATP molecules per molecule of oxygen). Because the gas exchange in this reaction is equal, the respiratory quotient (R.Q.) for carbohydrate is unity or 1.0: R.Q. = 6 CO 2 6 O 2 = 1.0. {\displaystyle {\text{R.Q.}}={\frac {{\ce {6 CO2}}}{{\ce {6 O2}}}}=1.0.} === Fats === The chemical composition of fats differs from that of carbohydrates in that fats contain considerably fewer oxygen atoms in proportion to atoms of carbon and hydrogen. When listed on nutritional information tables, fats are generally divided into six categories: total fats, saturated fatty acid, polyunsaturated fatty acid, monounsaturated fatty acid, dietary cholesterol, and trans fatty acid. From a basal metabolic or resting metabolic perspective, more energy is needed to burn a saturated fatty acid than an unsaturated fatty acid. The fatty acid molecule is broken down and categorized based on the number of carbon atoms in its molecular structure. The chemical equation for the metabolism of a saturated fatty acid with twelve to sixteen carbon atoms illustrates the difference between the metabolism of carbohydrates and fatty acids. Palmitic acid is a commonly studied example of the saturated fatty acid molecule.
The overall equation for the substrate utilization of palmitic acid is C 16 H 32 O 2 + 23 O 2 ⟶ 16 CO 2 + 16 H 2 O {\displaystyle {\ce {C16H32O2 + 23 O2 -> 16 CO2 + 16 H2O}}} (106 ATP molecules produced, 4.61 ATP molecules per molecule of oxygen). Thus the R.Q. for palmitic acid is 0.696: R.Q. = 16 CO 2 23 O 2 = 0.696. {\displaystyle {\text{R.Q.}}={\frac {{\ce {16 CO2}}}{{\ce {23 O2}}}}=0.696.} === Proteins === Proteins are composed of carbon, hydrogen, oxygen, and nitrogen arranged in a variety of ways to form a large combination of amino acids. Unlike fat, the body has no storage deposits of protein. All of it is contained in the body as important parts of tissues, blood, hormones, and enzymes. The structural components of the body that contain these amino acids are continually undergoing a process of breakdown and replacement. The respiratory quotient for protein metabolism can be demonstrated by the chemical equation for oxidation of albumin: C 72 H 112 N 18 O 22 S + 77 O 2 ⟶ 63 CO 2 + 38 H 2 O + SO 3 + 9 CO ( NH 2 ) 2 {\displaystyle {\ce {C72H112N18O22S + 77 O2 -> 63 CO2 + 38 H2O + SO3 + 9 CO(NH2)2}}} The R.Q. for albumin is 0.818: R.Q. = 63 CO 2 77 O 2 = 0.818. {\displaystyle {\text{R.Q.}}={\frac {{\ce {63 CO2}}}{{\ce {77 O2}}}}=0.818.} This is important for understanding protein metabolism because the body can blend the three macronutrients: based on mitochondrial density, a preferred ratio can be established that determines how much of each fuel is utilized for the work accomplished by the muscles. Protein catabolism (breakdown) has been estimated to supply 10% to 15% of the total energy requirement during a two-hour aerobic training session. This process could severely degrade the protein structures needed to maintain survival, such as the contractile properties of proteins in the heart, cellular mitochondria, myoglobin storage, and metabolic enzymes within muscles. The oxidative system (aerobic) is the primary source of ATP supplied to the body at rest and during low-intensity activities and uses primarily carbohydrates and fats as substrates. Protein is not normally metabolized significantly, except during long-term starvation and long bouts of exercise (greater than 90 minutes). At rest approximately 70% of the ATP produced is derived from fats and 30% from carbohydrates. Following the onset of activity, as the intensity of the exercise increases, there is a shift in substrate preference from fats to carbohydrates. During high-intensity aerobic exercise, almost 100% of the energy is derived from carbohydrates, if an adequate supply is available. === Aerobic vs. anaerobic exercise === Studies published in 1992 and 1997 indicate that the level of aerobic fitness of an individual does not have any correlation with the level of resting metabolism. Both studies find that aerobic fitness levels do not improve the predictive power of fat-free mass for resting metabolic rate. However, research published in the Journal of Applied Physiology in 2012 compared resistance training and aerobic training on body mass and fat mass in overweight adults (STRRIDE AT/RT). When time commitment is weighed against health benefit, aerobic training is the optimal mode of exercise for reducing fat mass and body mass, while resistance training is a useful secondary measure when aging and lean mass are a concern. Resistance training causes injuries at a much higher rate than aerobic training.
Compared to resistance training, aerobic training was found to result in a significantly more pronounced reduction of body weight by enhancing the cardiovascular system, which is the principal factor in the metabolic utilization of fat substrates. Resistance training, if time is available, is also helpful for post-exercise metabolism, but it is an adjunctive factor because the body needs to heal sufficiently between resistance-training episodes, whereas the body can accept aerobic training every day. RMR and BMR are measurements of daily consumption of calories. The majority of studies that are published on this topic look at aerobic exercise because of its efficacy for health and weight management. Anaerobic exercise, such as weight lifting, builds additional muscle mass. Muscle contributes to the fat-free mass of an individual and therefore effective results from anaerobic exercise will increase BMR. However, the actual effect on BMR is controversial and difficult to quantify. Various studies suggest that the resting metabolic rate of trained muscle is around 55 kJ/kg per day; it then follows that even a substantial increase in muscle mass, say 5 kg, would make only a minor impact on BMR. == Longevity == In 1926, Raymond Pearl proposed that longevity varies inversely with basal metabolic rate (the "rate of living hypothesis"). Support for this hypothesis comes from the fact that mammals with larger body size have longer maximum life spans (large animals do have higher total metabolic rates, but the metabolic rate at the cellular level is much lower, and the breathing rate and heartbeat are slower in larger animals) and the fact that the longevity of fruit flies varies inversely with ambient temperature. Additionally, the life span of houseflies can be extended by preventing physical activity. This theory has been bolstered by several new studies linking lower basal metabolic rate to increased life expectancy across the animal kingdom, including humans. Calorie restriction and reduced thyroid hormone levels, both of which decrease the metabolic rate, have been associated with higher longevity in animals. However, the ratio of total daily energy expenditure to resting metabolic rate can vary between 1.6 and 8.0 between species of mammals. Animals also vary in the degree of coupling between oxidative phosphorylation and ATP production, the amount of saturated fat in mitochondrial membranes, the amount of DNA repair, and many other factors that affect maximum life span. One problem with understanding the associations of lifespan and metabolism is that changes in metabolism are often confounded by other factors that may affect lifespan. For example, under calorie restriction, whole-body metabolic rate goes down with increasing levels of restriction, but body temperature follows the same pattern. By manipulating ambient temperature and exposure to wind, it was shown in mice and hamsters that body temperature is a more important modulator of lifespan than metabolic rate. == Medical considerations == A person's metabolism varies with their physical condition and activity. Weight training can have a longer impact on metabolism than aerobic training, but there are no known mathematical formulas that can exactly predict the length and duration of a raised metabolism from trophic changes with anabolic neuromuscular training. A decrease in food intake will typically lower the metabolic rate as the body tries to conserve energy.
Researcher Gary Foster estimates that a very low calorie diet of fewer than 800 calories a day would reduce the metabolic rate by more than 10 percent. The metabolic rate can be affected by some drugs: antithyroid agents (drugs used to treat hyperthyroidism) such as propylthiouracil and methimazole bring the metabolic rate down to normal, restoring euthyroidism. Some research has focused on developing antiobesity drugs to raise the metabolic rate, such as drugs to stimulate thermogenesis in skeletal muscle. The metabolic rate may be elevated in stress, illness, and diabetes. Menopause may also affect metabolism. == See also == == References == == Further reading == Tsai AG, Wadden TA (2005). "Systematic review: An evaluation of major commercial weight loss programs in the United States". Annals of Internal Medicine. 142 (1): 56–66. doi:10.7326/0003-4819-142-1-200501040-00012. PMID 15630109. S2CID 2589699. Gustafson D, Rothenberg E, Blennow K, Steen B, Skoog I (2003). "An 18-Year Follow-up of Overweight and Risk of Alzheimer Disease". Archives of Internal Medicine. 163 (13): 1524–8. doi:10.1001/archinte.163.13.1524. PMID 12860573. "Clinical guidelines on the identification, evaluation, and treatment of overweight and obesity in adults: Executive summary. Expert Panel on the Identification, Evaluation, and Treatment of Overweight in Adults". The American Journal of Clinical Nutrition. 68 (4): 899–917. 1998. doi:10.1093/ajcn/68.4.899. PMID 9771869. Segal AC (1987). "Linear Diet Model". College Mathematics Journal. 18 (1): 44–5. doi:10.2307/2686315. JSTOR 2686315. Pike RL, Brown ML (1975). Nutrition: An Integrated Approach (2nd ed.). New York: Wiley. OCLC 474842663. Sahlin K, Tonkonogi M, Soderlund K (1998). "Energy supply and muscle fatigue in humans". Acta Physiologica Scandinavica. 162 (3): 261–6. doi:10.1046/j.1365-201X.1998.0298f.x. PMID 9578371. Saltin B, Gollnick PD (1983). "Skeletal muscle adaptability: Significance for metabolism and performance". In Peachey LD, Adrian RH, Geiger SR (eds.). Handbook of Physiology. Baltimore: Williams & Wilkins. pp. 540–55. OCLC 314567389. Republished as: Saltin B, Gollnick PD (2011). "Skeletal Muscle Adaptability: Significance for Metabolism and Performance". Comprehensive Physiology. doi:10.1002/cphy.cp100119. ISBN 978-0-470-65071-4. Thorstensson A (1976). "Muscle strength, fibre types and enzyme activities in man". Acta Physiologica Scandinavica. Supplementum. 443: 1–45. PMID 189574. Thorstensson A, Sjödin B, Tesch P, Karlsson J (1977). "Actomyosin ATPase, Myokinase, CPK and LDH in Human Fast and Slow Twitch Muscle Fibres". Acta Physiologica Scandinavica. 99 (2): 225–9. doi:10.1111/j.1748-1716.1977.tb10373.x. PMID 190869. Vanhelder WP, Radomski MW, Goode RC, Casey K (1985). "Hormonal and metabolic response to three types of exercise of equal duration and external work output". European Journal of Applied Physiology and Occupational Physiology. 54 (4): 337–42. doi:10.1007/BF02337175. PMID 3905393. S2CID 39715173. Wells JG, Balke B, Van Fossan DD (1957). "Lactic acid accumulation during work; a suggested standardization of work classification". Journal of Applied Physiology. 10 (1): 51–5. doi:10.1152/jappl.1957.10.1.51. PMID 13405829. McArdle WD, Katch FI, Katch VL (1986). Exercise Physiology: Energy, Nutrition, and Human Performance. Philadelphia: Lea & Febiger. ISBN 978-0-8121-0991-7. OCLC 646595478. Harris JA, Benedict FG (1918). "A Biometric Study of Human Basal Metabolism".
Proceedings of the National Academy of Sciences of the United States of America. 4 (12): 370–3. Bibcode:1918PNAS....4..370H. doi:10.1073/pnas.4.12.370. PMC 1091498. PMID 16576330.
Wikipedia/Basal_metabolic_rate
World energy resources are the estimated maximum capacity for energy production given all available resources on Earth. They can be divided by type into fossil fuel, nuclear fuel and renewable resources. == Fossil fuel == Remaining reserves of fossil fuel have been estimated for each fuel type; the published figures are the proven energy reserves, while real reserves may be four or more times larger. These numbers are very uncertain. Estimating the remaining fossil fuels on the planet depends on a detailed understanding of Earth's crust. With modern drilling technology, wells can be drilled in up to 3 km of water to verify the exact composition of the geology; but half of the ocean is deeper than 3 km, leaving about a third of the planet beyond the reach of detailed analysis. There is uncertainty in the total amount of reserves, but also in how much of these can be recovered gainfully, for technological, economic and political reasons, such as the accessibility of fossil deposits, the levels of sulfur and other pollutants in the oil and the coal, transportation costs, and societal instability in producing regions. In general, the deposits that are easiest to reach are extracted first. === Coal === Coal is the most abundant and most heavily burned fossil fuel. This was the fuel that launched the industrial revolution, and its use has continued to grow; China, which already has many of the world's most polluted cities, was in 2007 building about two coal-fired power plants every week. Coal's large reserves would make it a popular candidate to meet the energy demand of the global community, were it not for global warming concerns and other pollutants. === Natural gas === Natural gas is a widely available fossil fuel with an estimated 850,000 km3 of recoverable reserves, and at least that much more using enhanced methods to release shale gas. Improvements in technology and wide exploration led to a major increase in recoverable natural gas reserves as shale fracking methods were developed. At present usage rates, natural gas could supply most of the world's energy needs for between 100 and 250 years, depending on the increase in consumption over time. === Oil === It is estimated that there may be 57 zettajoules (ZJ) of oil reserves on Earth, although estimates vary from a low of 8 ZJ, consisting of currently proven and recoverable reserves, to a maximum of 110 ZJ, consisting of available but not necessarily recoverable reserves and including optimistic estimates for unconventional sources such as oil sands and oil shale. The current consensus among the 18 recognized estimates of supply profiles is that the peak of extraction will occur in 2020 at the rate of 93 million barrels per day (mbd). Current oil consumption is at the rate of 0.18 ZJ per year (31.1 billion barrels) or 85 mbd. There is growing concern that peak oil production may be reached in the near future, resulting in severe oil price increases. A 2005 French Economics, Industry and Finance Ministry report suggested a worst-case scenario that could occur as early as 2013. There are also theories that the peak of global oil production may occur in as little as 2–3 years. The ASPO predicts the peak year to be 2010. Some other theories present the view that it has already taken place, in 2005. World crude oil production (including lease condensates) according to US EIA data decreased from a peak of 73.720 mbd in 2005 to 73.437 in 2006, 72.981 in 2007, and 73.697 in 2008.
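The quoted oil figures can be cross-checked with a short calculation. The sketch below assumes a nominal energy content of roughly 6.1 GJ per barrel (an assumed conversion factor, not a figure from this article) and computes static reserve lifetimes at constant consumption, ignoring any growth or decline in demand:

```python
GJ_PER_BARREL = 6.1e9        # assumed: about 6.1 GJ of energy per barrel, in joules
ZJ = 1e21                    # joules per zettajoule

barrels_per_year = 31.1e9    # quoted annual consumption
print(round(barrels_per_year / 365 / 1e6))   # -> 85 mbd, matching the text

zj_per_year = barrels_per_year * GJ_PER_BARREL / ZJ
print(round(zj_per_year, 2))                 # -> 0.19 ZJ/yr, near the quoted 0.18

# Static lifetimes of the low, central, and high reserve estimates:
for reserves_zj in (8, 57, 110):
    print(reserves_zj, "ZJ lasts about", round(reserves_zj / 0.18), "years")
```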
According to peak oil theory, increasing production will lead to a more rapid collapse of production in the future, while decreasing production will lead to a slower decrease, as the bell-shaped curve will be spread out over more years. With a stated goal of increasing oil prices to $75/barrel, after they had fallen from a high of $147 to a low of $40, OPEC announced a production cut of 2.2 mbd beginning 1 January 2009. === Sustainability === Political considerations over the security of supplies, environmental concerns related to global warming and sustainability are expected to move the world's energy consumption away from fossil fuels. The concept of peak oil shows that about half of the available petroleum resources have been produced, and predicts a decrease of production. A government moving away from fossil fuels would most likely create economic pressure through carbon emissions and green taxation. Some countries are taking action as a result of the Kyoto Protocol, and further steps in this direction are proposed. For example, the European Commission has proposed that the energy policy of the European Union should set a binding target of increasing the level of renewable energy in the EU's overall mix from less than 7% in 2007 to 20% by 2020. The antithesis of sustainability is a disregard for limits, commonly referred to as the Easter Island Effect: the failure to develop sustainably, resulting in the depletion of natural resources. Some estimate that, assuming current consumption rates, current oil reserves could be completely depleted by 2050. == Nuclear energy == === Nuclear fission === The International Atomic Energy Agency estimates the remaining uranium resources to be equal to 2500 zettajoules (ZJ). This assumes the use of breeder reactors, which are able to create more fissile material than they consume. The IPCC estimated currently proved economically recoverable uranium deposits for once-through fuel cycle reactors to be only 2 ZJ. The ultimately recoverable uranium is estimated to be 17 ZJ for once-through reactors and 1000 ZJ with reprocessing and fast breeder reactors. Resources and technology do not constrain the capacity of nuclear power to contribute to meeting the energy demand for the 21st century. However, political and environmental concerns about nuclear safety and radioactive waste started to limit the growth of this energy supply at the end of the last century, particularly due to a number of nuclear accidents. Concerns about nuclear proliferation (especially with plutonium produced by breeder reactors) mean that the development of nuclear power by countries such as Iran and Syria is being actively discouraged by the international community. Although at the beginning of the 21st century uranium is the primary nuclear fuel worldwide, others such as thorium and hydrogen have been under investigation since the middle of the 20th century. Thorium reserves significantly exceed those of uranium, and of course hydrogen is abundant. Thorium is also considered by many to be easier to obtain than uranium: while uranium mines are enclosed underground and thus very dangerous for the miners, thorium is taken from open pits, and it is estimated to be roughly three times as abundant as uranium in the Earth's crust. Since the 1960s, numerous facilities throughout the world have burned thorium. === Nuclear fusion === Alternatives for energy production through fusion of hydrogen have been under investigation since the 1950s.
No materials can withstand the temperatures required to ignite the fuel, so it must be confined by methods that use no materials. Magnetic and inertial confinement are the main alternatives (Cadarache, inertial confinement fusion), both of which are active research topics in the early years of the 21st century. Nuclear fusion is the process powering the Sun and other stars. It generates large quantities of heat by fusing the nuclei of hydrogen or helium isotopes, which may be derived from seawater. The heat can theoretically be harnessed to generate electricity. The temperatures and pressures needed to sustain fusion make it a very difficult process to control. Fusion is theoretically able to supply vast quantities of energy, with relatively little pollution. Although both the United States and the European Union, along with other countries, are supporting fusion research (such as investing in the ITER facility), according to one report, inadequate research has stalled progress in fusion for the past 20 years. == Renewable resources == Renewable resources are available each year, unlike non-renewable resources, which are eventually depleted. A simple comparison is a coal mine and a forest. While the forest could be depleted, if it is managed it represents a continuous supply of energy, whereas the coal mine, once exhausted, is gone. Most of Earth's available energy resources are renewable resources. Renewable resources account for more than 93 percent of total U.S. energy reserves. Annual renewable resources were multiplied by thirty years for comparison with non-renewable resources. In other words, if all non-renewable resources were uniformly exhausted in 30 years, they would only account for 7 percent of available resources each year, if all available renewable resources were developed. === Biomass === Production of biomass and biofuels are growing industries as interest in sustainable fuel sources grows. Utilizing waste products avoids a food-versus-fuel trade-off, and burning methane gas reduces greenhouse gas emissions: even though burning methane releases carbon dioxide, carbon dioxide is 23 times less potent a greenhouse gas than methane. Biofuels represent a sustainable partial replacement for fossil fuels, but their net impact on greenhouse gas emissions depends on the agricultural practices used to grow the plants used as feedstock to create the fuels. While it is widely believed that biofuels can be carbon neutral, there is evidence that biofuels produced by current farming methods are substantial net carbon emitters. Geothermal and biomass are the only two renewable energy sources that require careful management to avoid local depletion. === Geothermal === Estimates of exploitable worldwide geothermal energy resources vary considerably, depending on assumed investments in technology and exploration and guesses about geological formations. According to a 1998 study, this might amount to between 65 and 138 GW of electrical generation capacity 'using enhanced technology'. Other estimates range from 35 to 2000 GW of electrical generation capacity, with a further potential for 140 EJ/year of direct use. A 2006 MIT report that took into account the use of Enhanced Geothermal Systems (EGS) concluded that it would be affordable to generate 100 GWe (gigawatts of electricity) or more by 2050, just in the United States, for a maximum investment of 1 billion US dollars in research and development over 15 years.
The MIT report calculated the world's total EGS resources to be over 13 YJ, of which over 0.2 YJ would be extractable, with the potential to increase this to over 2 YJ with technology improvements – sufficient to provide all the world's energy needs for several thousand years. The total heat content of the Earth is 13,000,000 YJ. === Hydropower === In 2005, hydroelectric power supplied 16.4% of world electricity, down from 21.0% in 1973, but only 2.2% of the world's energy. === Solar energy === Renewable energy sources are even larger than the traditional fossil fuels and in theory can easily supply the world's energy needs. About 89 PW of solar power falls on the planet's surface. While it is not possible to capture all, or even most, of this energy, capturing less than 0.02% would be enough to meet current energy needs. Barriers to further solar generation include the high price of making solar cells and reliance on weather patterns to generate electricity. Also, current solar generation does not produce electricity at night, which is a particular problem in high northern and southern latitude countries, where energy demand is highest in winter while the availability of solar energy is lowest. This could be overcome by buying power from countries closer to the equator during winter months, and may also be addressed with technological developments such as inexpensive energy storage. Globally, solar generation is the fastest growing source of energy, seeing an annual average growth of 35% over the past few years. China, Europe, India, Japan, and the United States are the major growing investors in solar energy. Solar power's share of worldwide electricity usage at the end of 2014 was 1%. === Wave and tidal power === At the end of 2005, 0.3 GW of electricity was produced by tidal power. Fluctuating tides arise from the tidal forces created by the Moon (68%) and the Sun (32%), combined with Earth's rotation relative to the Moon and Sun. These tidal fluctuations result in dissipation at an average rate of about 3.7 TW. Another physical limitation is the energy available in the tidal fluctuations of the oceans, which is about 0.6 EJ (exajoule). Note that this is only a tiny fraction of the total rotational energy of Earth. Without forcing, this energy would be dissipated (at a dissipation rate of 3.7 TW) in about four semi-diurnal tide periods, so dissipation plays a significant role in the tidal dynamics of the oceans. This limits the available tidal energy to around 0.8 TW (20% of the dissipation rate) in order not to disturb the tidal dynamics too much. Waves are derived from wind, which is in turn derived from solar energy, and at each conversion there is a drop of about two orders of magnitude in available energy. The total power of waves that wash against Earth's shores adds up to 3 TW. === Wind power === The available wind energy estimates range from 300 TW to 870 TW. Using the lower estimate, just 5% of the available wind energy would supply the current worldwide energy needs. Most of this wind energy is available over the open ocean. The oceans cover 71% of the planet, and wind tends to blow more strongly over open water because there are fewer obstructions. == References ==
Wikipedia/World_energy_resources
Energy Transfer LP is an American company engaged in the pipeline transportation, storage, and terminaling of natural gas, crude oil, natural gas liquids (NGLs), refined products, and liquefied natural gas (LNG). It is organized under Delaware state laws and headquartered in Dallas, Texas. It was founded in 1996 by Ray Davis and Kelcy Warren, who remains Executive Chairman. As of 2023, the company owns or operates more than 125,000 miles (201,000 km) of pipelines throughout the U.S., making it one of the largest midstream companies in the country. It is also one of the largest exporters of NGLs in the world. == Business structure == Energy Transfer owns controlling interests in Sunoco LP. It also owns 100% of Sunoco Logistics Partners Operations L.P., a 46% non-economic general partner interest in USA Compression Partners L.P., and 100% of Lake Charles LNG, which consists of an LNG import terminal and regasification facility near Lake Charles, Louisiana. Energy Transfer's natural gas business includes nearly 90,000 miles (140,000 km) of natural gas transportation pipelines that receive natural gas from other mainline transportation pipelines, storage facilities and gathering systems and deliver the natural gas to industrial end-users, storage facilities, utilities and other pipelines. Energy Transfer owns 36.4% of the Dakota Access Pipeline and the Energy Transfer Crude Oil Pipeline; 60% of the Bayou Bridge Pipeline; 50% of the Florida Gas Transmission pipeline; 100% of the Trunkline Pipeline, the Transwestern Pipeline, the Panhandle Eastern, and the Sea Robin Pipeline; the Revolution Pipeline; the Mariner East pipelines; and 32.6% of the Rover pipeline. As of 2022, it controlled 11,600 miles of pipelines and two storage facilities in the state of Texas. == History == The company was founded by Kelcy Warren and Ray Davis in 1996. In 2011, Energy Transfer and Regency Energy Partners formed a joint venture to purchase midstream assets from Louis Dreyfus Highbridge Energy for $2 billion, now known as Castleton Commodities International. In October 2012, Sunoco, Inc., became a wholly owned subsidiary of the company. It acquired the general partner interests, 100% of the incentive distribution rights, and a 32.4% limited partnership interest in Sunoco Logistics Partners L.P., which operates a geographically diverse portfolio of crude oil and refined products pipelines, terminaling facilities, and crude oil acquisition and marketing assets. In the same year, it acquired Southern Union Company, which added more than 20,000 miles of gathering and transportation pipeline to its portfolio. In August 2014, the company acquired Susser Holdings Corporation, which operated Stripes Convenience Stores, a chain of 580 stores located in Texas, New Mexico, and Oklahoma, which were re-branded under the Sunoco and A-Plus names. In January 2015, the company acquired Regency Energy Partners for $11 billion. During the same year, the company also agreed to purchase Williams Cos. for around $32.6 billion. The acquisition expanded Energy Transfer Partners' U.S. network of natural-gas pipelines. In October 2018, Energy Transfer Equity completed its acquisition of Energy Transfer Partners, simplifying the partnership into one operating entity known as Energy Transfer LP. In September 2019, the company acquired SemGroup for $5 billion. In January 2020, former Energy Secretary Rick Perry rejoined the company's board.
In August 2023, it was announced that Energy Transfer had signed a definitive agreement to acquire its Houston-headquartered rival, Crestwood Equity Partners, for approximately $7.1 billion. === Dakota Access Pipeline === Dakota Access, LLC, which is 36.4% owned by the company, built the Bakken pipeline, also known as the Dakota Access Pipeline. In April 2016, the United States Environmental Protection Agency, the United States Department of the Interior, and the Advisory Council on Historic Preservation requested a full Environmental Impact Statement of the pipeline. In July 2016, the Standing Rock Sioux Tribe filed an injunction against the U.S. Army Corps of Engineers to stop building the pipeline. A group of young activists from Standing Rock ran from North Dakota to Washington, D.C., to present a petition in protest of the construction of the pipeline and launched an international campaign called ReZpect Our Water. In October 2016, Dakota Access Pipeline protests erupted at a construction site near the Cannonball River in North Dakota, resulting in the arrest of hundreds and the use of force by a private security company, North Dakota State and county police, and the North Dakota National Guard. In August 2017, Energy Transfer sued the environmental groups Greenpeace USA, BankTrack and Earth First! under the Patriot Act. Energy Transfer accused these activists of attempting to profit via eco-terrorism. BankTrack responded that the case was a strategic lawsuit against public participation without merit, and that it is legal to inform the public and banks about projects with "actual negative social, environmental and human rights impacts." In 2019, a federal court in North Dakota dismissed the racketeering and defamation lawsuit filed by Energy Transfer Partners LP, the builder of the 1,000-mile Dakota Access Pipeline, against Greenpeace USA, Earth First! and BankTrack for their pipeline protests. The lawsuit alleged Greenpeace USA misled the public with false claims about the Standing Rock Sioux tribe's sacred sites and the likelihood the pipeline would contaminate the Missouri River in North Dakota. In contrast, a 2018 Greenpeace report said Energy Transfer pipelines and those owned by the company's subsidiaries "spilled over 500 times in the last decade." A second civil suit, in 2025, sought $300 million in damages from Greenpeace, alleging a leadership role by the organization in the Dakota Access Pipeline protests, which Greenpeace denies. A Greenpeace legal advisor called the lawsuit "an attack on the broader movement and all of our First Amendment rights to free speech and peaceful protest." == References == == External links == Official website
Wikipedia/Energy_Transfer_Partners
In physics, Hamiltonian mechanics is a reformulation of Lagrangian mechanics that emerged in 1833. Introduced by Sir William Rowan Hamilton, Hamiltonian mechanics replaces (generalized) velocities q ˙ i {\displaystyle {\dot {q}}^{i}} used in Lagrangian mechanics with (generalized) momenta. Both theories provide interpretations of classical mechanics and describe the same physical phenomena. Hamiltonian mechanics has a close relationship with geometry (notably, symplectic geometry and Poisson structures) and serves as a link between classical and quantum mechanics. == Overview == === Phase space coordinates (p, q) and Hamiltonian H === Let ( M , L ) {\displaystyle (M,{\mathcal {L}})} be a mechanical system with configuration space M {\displaystyle M} and smooth Lagrangian L . {\displaystyle {\mathcal {L}}.} Select a standard coordinate system ( q , q ˙ ) {\displaystyle ({\boldsymbol {q}},{\boldsymbol {\dot {q}}})} on M . {\displaystyle M.} The quantities p i ( q , q ˙ , t ) = def ∂ L / ∂ q ˙ i {\displaystyle \textstyle p_{i}({\boldsymbol {q}},{\boldsymbol {\dot {q}}},t)~{\stackrel {\text{def}}{=}}~{\partial {\mathcal {L}}}/{\partial {\dot {q}}^{i}}} are called momenta. (Also generalized momenta, conjugate momenta, and canonical momenta). For a time instant t , {\displaystyle t,} the Legendre transformation of L {\displaystyle {\mathcal {L}}} is defined as the map ( q , q ˙ ) → ( p , q ) {\displaystyle ({\boldsymbol {q}},{\boldsymbol {\dot {q}}})\to \left({\boldsymbol {p}},{\boldsymbol {q}}\right)} which is assumed to have a smooth inverse ( p , q ) → ( q , q ˙ ) . {\displaystyle ({\boldsymbol {p}},{\boldsymbol {q}})\to ({\boldsymbol {q}},{\boldsymbol {\dot {q}}}).} For a system with n {\displaystyle n} degrees of freedom, the Lagrangian mechanics defines the energy function E L ( q , q ˙ , t ) = def ∑ i = 1 n q ˙ i ∂ L ∂ q ˙ i − L . {\displaystyle E_{\mathcal {L}}({\boldsymbol {q}},{\boldsymbol {\dot {q}}},t)\,{\stackrel {\text{def}}{=}}\,\sum _{i=1}^{n}{\dot {q}}^{i}{\frac {\partial {\mathcal {L}}}{\partial {\dot {q}}^{i}}}-{\mathcal {L}}.} The Legendre transform of L {\displaystyle {\mathcal {L}}} turns E L {\displaystyle E_{\mathcal {L}}} into a function H ( p , q , t ) {\displaystyle {\mathcal {H}}({\boldsymbol {p}},{\boldsymbol {q}},t)} known as the Hamiltonian. The Hamiltonian satisfies H ( ∂ L ∂ q ˙ , q , t ) = E L ( q , q ˙ , t ) {\displaystyle {\mathcal {H}}\left({\frac {\partial {\mathcal {L}}}{\partial {\boldsymbol {\dot {q}}}}},{\boldsymbol {q}},t\right)=E_{\mathcal {L}}({\boldsymbol {q}},{\boldsymbol {\dot {q}}},t)} which implies that H ( p , q , t ) = ∑ i = 1 n p i q ˙ i − L ( q , q ˙ , t ) , {\displaystyle {\mathcal {H}}({\boldsymbol {p}},{\boldsymbol {q}},t)=\sum _{i=1}^{n}p_{i}{\dot {q}}^{i}-{\mathcal {L}}({\boldsymbol {q}},{\boldsymbol {\dot {q}}},t),} where the velocities q ˙ = ( q ˙ 1 , … , q ˙ n ) {\displaystyle {\boldsymbol {\dot {q}}}=({\dot {q}}^{1},\ldots ,{\dot {q}}^{n})} are found from the ( n {\displaystyle n} -dimensional) equation p = ∂ L / ∂ q ˙ {\displaystyle \textstyle {\boldsymbol {p}}={\partial {\mathcal {L}}}/{\partial {\boldsymbol {\dot {q}}}}} which, by assumption, is uniquely solvable for ⁠ q ˙ {\displaystyle {\boldsymbol {\dot {q}}}} ⁠. The ( 2 n {\displaystyle 2n} -dimensional) pair ( p , q ) {\displaystyle ({\boldsymbol {p}},{\boldsymbol {q}})} is called phase space coordinates. (Also canonical coordinates). 
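As an illustration of the definitions above, the Legendre transformation can be carried out symbolically for the one-dimensional Lagrangian L = ½mq̇² − V(q). The following is a minimal sketch using SymPy (the variable names are mine); it forms the momentum p = ∂L/∂q̇, inverts that relation, and substitutes into the energy function to obtain the Hamiltonian:

```python
import sympy as sp

m, q, qdot, p = sp.symbols('m q qdot p', real=True)
V = sp.Function('V')                                # arbitrary potential V(q)

L = sp.Rational(1, 2) * m * qdot**2 - V(q)          # Lagrangian L(q, qdot)
p_def = sp.diff(L, qdot)                            # momentum p = dL/d(qdot) = m*qdot
qdot_of_p = sp.solve(sp.Eq(p, p_def), qdot)[0]      # smooth inverse: qdot = p/m

# Energy function E_L = qdot * dL/d(qdot) - L, rewritten in terms of (p, q):
H = (qdot * p_def - L).subs(qdot, qdot_of_p)
print(sp.simplify(H))                               # -> p**2/(2*m) + V(q)
```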
=== From Euler–Lagrange equation to Hamilton's equations === In phase space coordinates ⁠ ( p , q ) {\displaystyle ({\boldsymbol {p}},{\boldsymbol {q}})} ⁠, the ( n {\displaystyle n} -dimensional) Euler–Lagrange equation ∂ L ∂ q − d d t ∂ L ∂ q ˙ = 0 {\displaystyle {\frac {\partial {\mathcal {L}}}{\partial {\boldsymbol {q}}}}-{\frac {d}{dt}}{\frac {\partial {\mathcal {L}}}{\partial {\dot {\boldsymbol {q}}}}}=0} becomes Hamilton's equations in 2 n {\displaystyle 2n} dimensions: d p d t = − ∂ H ∂ q , d q d t = + ∂ H ∂ p . {\displaystyle {\frac {\mathrm {d} {\boldsymbol {p}}}{\mathrm {d} t}}=-{\frac {\partial {\mathcal {H}}}{\partial {\boldsymbol {q}}}}\,,\quad {\frac {\mathrm {d} {\boldsymbol {q}}}{\mathrm {d} t}}=+{\frac {\partial {\mathcal {H}}}{\partial {\boldsymbol {p}}}}\,.} === From stationary action principle to Hamilton's equations === Let P ( a , b , x a , x b ) {\displaystyle {\mathcal {P}}(a,b,{\boldsymbol {x}}_{a},{\boldsymbol {x}}_{b})} be the set of smooth paths q : [ a , b ] → M {\displaystyle {\boldsymbol {q}}:[a,b]\to M} for which q ( a ) = x a {\displaystyle {\boldsymbol {q}}(a)={\boldsymbol {x}}_{a}} and q ( b ) = x b . {\displaystyle {\boldsymbol {q}}(b)={\boldsymbol {x}}_{b}.} The action functional S : P ( a , b , x a , x b ) → R {\displaystyle {\mathcal {S}}:{\mathcal {P}}(a,b,{\boldsymbol {x}}_{a},{\boldsymbol {x}}_{b})\to \mathbb {R} } is defined via S [ q ] = ∫ a b L ( t , q ( t ) , q ˙ ( t ) ) d t = ∫ a b ( ∑ i = 1 n p i q ˙ i − H ( p , q , t ) ) d t , {\displaystyle {\mathcal {S}}[{\boldsymbol {q}}]=\int _{a}^{b}{\mathcal {L}}(t,{\boldsymbol {q}}(t),{\dot {\boldsymbol {q}}}(t))\,dt=\int _{a}^{b}\left(\sum _{i=1}^{n}p_{i}{\dot {q}}^{i}-{\mathcal {H}}({\boldsymbol {p}},{\boldsymbol {q}},t)\right)\,dt,} where ⁠ q = q ( t ) {\displaystyle {\boldsymbol {q}}={\boldsymbol {q}}(t)} ⁠, and p = ∂ L / ∂ q ˙ {\displaystyle {\boldsymbol {p}}=\partial {\mathcal {L}}/\partial {\boldsymbol {\dot {q}}}} (see above). A path q ∈ P ( a , b , x a , x b ) {\displaystyle {\boldsymbol {q}}\in {\mathcal {P}}(a,b,{\boldsymbol {x}}_{a},{\boldsymbol {x}}_{b})} is a stationary point of S {\displaystyle {\mathcal {S}}} (and hence a solution of the equations of motion) if and only if the path ( p ( t ) , q ( t ) ) {\displaystyle ({\boldsymbol {p}}(t),{\boldsymbol {q}}(t))} in phase space coordinates obeys the Hamilton equations. === Basic physical interpretation === A simple interpretation of Hamiltonian mechanics comes from its application to a one-dimensional system consisting of one nonrelativistic particle of mass m. The value H ( p , q ) {\displaystyle H(p,q)} of the Hamiltonian is the total energy of the system, in this case the sum of kinetic and potential energy, traditionally denoted T and V, respectively. Here p is the momentum mv and q is the space coordinate. Then H = T + V , T = p 2 2 m , V = V ( q ) {\displaystyle {\mathcal {H}}=T+V,\qquad T={\frac {p^{2}}{2m}},\qquad V=V(q)} T is a function of p alone, while V is a function of q alone (i.e., T and V are scleronomic). In this example, the time derivative of q is the velocity, and so the first Hamilton equation means that the particle's velocity equals the derivative of its kinetic energy with respect to its momentum. The time derivative of the momentum p equals the Newtonian force, and so the second Hamilton equation means that the force equals the negative gradient of potential energy. == Example == A spherical pendulum consists of a mass m moving without friction on the surface of a sphere. The only forces acting on the mass are the reaction from the sphere and gravity. Spherical coordinates are used to describe the position of the mass in terms of (r, θ, φ), where r is fixed, r = ℓ. The Lagrangian for this system is L = 1 2 m ℓ 2 ( θ ˙ 2 + sin 2 ⁡ θ φ ˙ 2 ) + m g ℓ cos ⁡ θ .
{\displaystyle L={\frac {1}{2}}m\ell ^{2}\left({\dot {\theta }}^{2}+\sin ^{2}\theta \ {\dot {\varphi }}^{2}\right)+mg\ell \cos \theta .} Thus the Hamiltonian is H = P θ θ ˙ + P φ φ ˙ − L {\displaystyle H=P_{\theta }{\dot {\theta }}+P_{\varphi }{\dot {\varphi }}-L} where P θ = ∂ L ∂ θ ˙ = m ℓ 2 θ ˙ {\displaystyle P_{\theta }={\frac {\partial L}{\partial {\dot {\theta }}}}=m\ell ^{2}{\dot {\theta }}} and P φ = ∂ L ∂ φ ˙ = m ℓ 2 sin 2 θ φ ˙ . {\displaystyle P_{\varphi }={\frac {\partial L}{\partial {\dot {\varphi }}}}=m\ell ^{2}\sin ^{2}\!\theta \,{\dot {\varphi }}.} In terms of coordinates and momenta, the Hamiltonian reads H = [ 1 2 m ℓ 2 θ ˙ 2 + 1 2 m ℓ 2 sin 2 θ φ ˙ 2 ] ⏟ T + [ − m g ℓ cos ⁡ θ ] ⏟ V = P θ 2 2 m ℓ 2 + P φ 2 2 m ℓ 2 sin 2 ⁡ θ − m g ℓ cos ⁡ θ . {\displaystyle H=\underbrace {\left[{\frac {1}{2}}m\ell ^{2}{\dot {\theta }}^{2}+{\frac {1}{2}}m\ell ^{2}\sin ^{2}\!\theta \,{\dot {\varphi }}^{2}\right]} _{T}+\underbrace {{\Big [}-mg\ell \cos \theta {\Big ]}} _{V}={\frac {P_{\theta }^{2}}{2m\ell ^{2}}}+{\frac {P_{\varphi }^{2}}{2m\ell ^{2}\sin ^{2}\theta }}-mg\ell \cos \theta .} Hamilton's equations give the time evolution of coordinates and conjugate momenta in four first-order differential equations, θ ˙ = P θ m ℓ 2 φ ˙ = P φ m ℓ 2 sin 2 ⁡ θ P θ ˙ = P φ 2 m ℓ 2 sin 3 ⁡ θ cos ⁡ θ − m g ℓ sin ⁡ θ P φ ˙ = 0. {\displaystyle {\begin{aligned}{\dot {\theta }}&={P_{\theta } \over m\ell ^{2}}\\[6pt]{\dot {\varphi }}&={P_{\varphi } \over m\ell ^{2}\sin ^{2}\theta }\\[6pt]{\dot {P_{\theta }}}&={P_{\varphi }^{2} \over m\ell ^{2}\sin ^{3}\theta }\cos \theta -mg\ell \sin \theta \\[6pt]{\dot {P_{\varphi }}}&=0.\end{aligned}}} Momentum ⁠ P φ {\displaystyle P_{\varphi }} ⁠, which corresponds to the vertical component of angular momentum ⁠ L z = ℓ sin ⁡ θ × m ℓ sin ⁡ θ φ ˙ {\displaystyle L_{z}=\ell \sin \theta \times m\ell \sin \theta \,{\dot {\varphi }}} ⁠, is a constant of motion. That is a consequence of the rotational symmetry of the system around the vertical axis. Being absent from the Hamiltonian, azimuth φ {\displaystyle \varphi } is a cyclic coordinate, which implies conservation of its conjugate momentum. == Deriving Hamilton's equations == Hamilton's equations can be derived by a calculation with the Lagrangian ⁠ L {\displaystyle {\mathcal {L}}} ⁠, generalized positions qi, and generalized velocities ⋅qi, where ⁠ i = 1 , … , n {\displaystyle i=1,\ldots ,n} ⁠. Here we work off-shell, meaning ⁠ q i {\displaystyle q^{i}} ⁠, ⁠ q ˙ i {\displaystyle {\dot {q}}^{i}} ⁠, ⁠ t {\displaystyle t} ⁠ are independent coordinates in phase space, not constrained to follow any equations of motion (in particular, q ˙ i {\displaystyle {\dot {q}}^{i}} is not a derivative of ⁠ q i {\displaystyle q^{i}} ⁠). The total differential of the Lagrangian is: d L = ∑ i ( ∂ L ∂ q i d q i + ∂ L ∂ q ˙ i d q ˙ i ) + ∂ L ∂ t d t . {\displaystyle \mathrm {d} {\mathcal {L}}=\sum _{i}\left({\frac {\partial {\mathcal {L}}}{\partial q^{i}}}\mathrm {d} q^{i}+{\frac {\partial {\mathcal {L}}}{\partial {\dot {q}}^{i}}}\,\mathrm {d} {\dot {q}}^{i}\right)+{\frac {\partial {\mathcal {L}}}{\partial t}}\,\mathrm {d} t\ .} The generalized momentum coordinates were defined as ⁠ p i = ∂ L / ∂ q ˙ i {\displaystyle p_{i}=\partial {\mathcal {L}}/\partial {\dot {q}}^{i}} ⁠, so we may rewrite the equation as: d L = ∑ i ( ∂ L ∂ q i d q i + p i d q ˙ i ) + ∂ L ∂ t d t = ∑ i ( ∂ L ∂ q i d q i + d ( p i q ˙ i ) − q ˙ i d p i ) + ∂ L ∂ t d t . 
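Hamilton's equations for the spherical pendulum can be integrated numerically to check the two conserved quantities just discussed, the Hamiltonian H and the momentum P_φ. A minimal sketch (the parameter values and initial conditions are arbitrary choices, not from the article):

```python
import numpy as np
from scipy.integrate import solve_ivp

m, g, ell = 1.0, 9.81, 1.0          # assumed mass, gravity, pendulum length

def hamiltonian(theta, phi, p_th, p_ph):
    return (p_th**2 / (2 * m * ell**2)
            + p_ph**2 / (2 * m * ell**2 * np.sin(theta)**2)
            - m * g * ell * np.cos(theta))

def rhs(t, y):                       # the four first-order equations above
    theta, phi, p_th, p_ph = y
    return [p_th / (m * ell**2),
            p_ph / (m * ell**2 * np.sin(theta)**2),
            p_ph**2 * np.cos(theta) / (m * ell**2 * np.sin(theta)**3)
                - m * g * ell * np.sin(theta),
            0.0]

y0 = [1.0, 0.0, 0.0, 0.5]            # theta, phi, P_theta, P_phi at t = 0
sol = solve_ivp(rhs, (0, 10), y0, rtol=1e-10, atol=1e-12)

energies = hamiltonian(*sol.y)
print(np.max(np.abs(energies - energies[0])))   # tiny drift: H is conserved
print(np.max(np.abs(sol.y[3] - y0[3])))         # zero: P_phi is conserved exactly
```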
{\displaystyle {\begin{aligned}\mathrm {d} {\mathcal {L}}=&\sum _{i}\left({\frac {\partial {\mathcal {L}}}{\partial q^{i}}}\,\mathrm {d} q^{i}+p_{i}\mathrm {d} {\dot {q}}^{i}\right)+{\frac {\partial {\mathcal {L}}}{\partial t}}\mathrm {d} t\\=&\sum _{i}\left({\frac {\partial {\mathcal {L}}}{\partial q^{i}}}\,\mathrm {d} q^{i}+\mathrm {d} (p_{i}{\dot {q}}^{i})-{\dot {q}}^{i}\,\mathrm {d} p_{i}\right)+{\frac {\partial {\mathcal {L}}}{\partial t}}\,\mathrm {d} t\,.\end{aligned}}} After rearranging, one obtains: d ( ∑ i p i q ˙ i − L ) = ∑ i ( − ∂ L ∂ q i d q i + q ˙ i d p i ) − ∂ L ∂ t d t . {\displaystyle \mathrm {d} \!\left(\sum _{i}p_{i}{\dot {q}}^{i}-{\mathcal {L}}\right)=\sum _{i}\left(-{\frac {\partial {\mathcal {L}}}{\partial q^{i}}}\,\mathrm {d} q^{i}+{\dot {q}}^{i}\mathrm {d} p_{i}\right)-{\frac {\partial {\mathcal {L}}}{\partial t}}\,\mathrm {d} t\ .} The term in parentheses on the left-hand side is just the Hamiltonian H = ∑ p i q ˙ i − L {\textstyle {\mathcal {H}}=\sum p_{i}{\dot {q}}^{i}-{\mathcal {L}}} defined previously, therefore: d H = ∑ i ( − ∂ L ∂ q i d q i + q ˙ i d p i ) − ∂ L ∂ t d t . {\displaystyle \mathrm {d} {\mathcal {H}}=\sum _{i}\left(-{\frac {\partial {\mathcal {L}}}{\partial q^{i}}}\,\mathrm {d} q^{i}+{\dot {q}}^{i}\,\mathrm {d} p_{i}\right)-{\frac {\partial {\mathcal {L}}}{\partial t}}\,\mathrm {d} t\ .} One may also calculate the total differential of the Hamiltonian H {\displaystyle {\mathcal {H}}} with respect to coordinates ⁠ q i {\displaystyle q^{i}} ⁠, ⁠ p i {\displaystyle p_{i}} ⁠, ⁠ t {\displaystyle t} ⁠ instead of ⁠ q i {\displaystyle q^{i}} ⁠, ⁠ q ˙ i {\displaystyle {\dot {q}}^{i}} ⁠, ⁠ t {\displaystyle t} ⁠, yielding: d H = ∑ i ( ∂ H ∂ q i d q i + ∂ H ∂ p i d p i ) + ∂ H ∂ t d t . {\displaystyle \mathrm {d} {\mathcal {H}}=\sum _{i}\left({\frac {\partial {\mathcal {H}}}{\partial q^{i}}}\mathrm {d} q^{i}+{\frac {\partial {\mathcal {H}}}{\partial p_{i}}}\mathrm {d} p_{i}\right)+{\frac {\partial {\mathcal {H}}}{\partial t}}\,\mathrm {d} t\ .} One may now equate these two expressions for ⁠ d H {\displaystyle d{\mathcal {H}}} ⁠, one in terms of ⁠ L {\displaystyle {\mathcal {L}}} ⁠, the other in terms of ⁠ H {\displaystyle {\mathcal {H}}} ⁠: ∑ i ( − ∂ L ∂ q i d q i + q ˙ i d p i ) − ∂ L ∂ t d t = ∑ i ( ∂ H ∂ q i d q i + ∂ H ∂ p i d p i ) + ∂ H ∂ t d t . {\displaystyle \sum _{i}\left(-{\frac {\partial {\mathcal {L}}}{\partial q^{i}}}\mathrm {d} q^{i}+{\dot {q}}^{i}\mathrm {d} p_{i}\right)-{\frac {\partial {\mathcal {L}}}{\partial t}}\,\mathrm {d} t\ =\ \sum _{i}\left({\frac {\partial {\mathcal {H}}}{\partial q^{i}}}\mathrm {d} q^{i}+{\frac {\partial {\mathcal {H}}}{\partial p_{i}}}\mathrm {d} p_{i}\right)+{\frac {\partial {\mathcal {H}}}{\partial t}}\,\mathrm {d} t\ .} Since these calculations are off-shell, one can equate the respective coefficients of ⁠ d q i {\displaystyle \mathrm {d} q^{i}} ⁠, ⁠ d p i {\displaystyle \mathrm {d} p_{i}} ⁠, ⁠ d t {\displaystyle \mathrm {d} t} ⁠ on the two sides: ∂ H ∂ q i = − ∂ L ∂ q i , ∂ H ∂ p i = q ˙ i , ∂ H ∂ t = − ∂ L ∂ t . 
{\displaystyle {\frac {\partial {\mathcal {H}}}{\partial q^{i}}}=-{\frac {\partial {\mathcal {L}}}{\partial q^{i}}}\quad ,\quad {\frac {\partial {\mathcal {H}}}{\partial p_{i}}}={\dot {q}}^{i}\quad ,\quad {\frac {\partial {\mathcal {H}}}{\partial t}}=-{\partial {\mathcal {L}} \over \partial t}\ .} On-shell, one substitutes parametric functions q i = q i ( t ) {\displaystyle q^{i}=q^{i}(t)} which define a trajectory in phase space with velocities ⁠ q ˙ i = d d t q i ( t ) {\displaystyle {\dot {q}}^{i}={\tfrac {d}{dt}}q^{i}(t)} ⁠, obeying Lagrange's equations: d d t ∂ L ∂ q ˙ i − ∂ L ∂ q i = 0 . {\displaystyle {\frac {\mathrm {d} }{\mathrm {d} t}}{\frac {\partial {\mathcal {L}}}{\partial {\dot {q}}^{i}}}-{\frac {\partial {\mathcal {L}}}{\partial q^{i}}}=0\ .} Rearranging and writing in terms of the on-shell p i = p i ( t ) {\displaystyle p_{i}=p_{i}(t)} gives: ∂ L ∂ q i = p ˙ i . {\displaystyle {\frac {\partial {\mathcal {L}}}{\partial q^{i}}}={\dot {p}}_{i}\ .} Thus Lagrange's equations are equivalent to Hamilton's equations: ∂ H ∂ q i = − p ˙ i , ∂ H ∂ p i = q ˙ i , ∂ H ∂ t = − ∂ L ∂ t . {\displaystyle {\frac {\partial {\mathcal {H}}}{\partial q^{i}}}=-{\dot {p}}_{i}\quad ,\quad {\frac {\partial {\mathcal {H}}}{\partial p_{i}}}={\dot {q}}^{i}\quad ,\quad {\frac {\partial {\mathcal {H}}}{\partial t}}=-{\frac {\partial {\mathcal {L}}}{\partial t}}\,.} In the case of time-independent H {\displaystyle {\mathcal {H}}} and ⁠ L {\displaystyle {\mathcal {L}}} ⁠, i.e. ⁠ ∂ H / ∂ t = − ∂ L / ∂ t = 0 {\displaystyle \partial {\mathcal {H}}/\partial t=-\partial {\mathcal {L}}/\partial t=0} ⁠, Hamilton's equations consist of 2n first-order differential equations, while Lagrange's equations consist of n second-order equations. Hamilton's equations usually do not reduce the difficulty of finding explicit solutions, but important theoretical results can be derived from them, because coordinates and momenta are independent variables with nearly symmetric roles. Hamilton's equations have another advantage over Lagrange's equations: if a system has a symmetry, so that some coordinate q i {\displaystyle q_{i}} does not occur in the Hamiltonian (i.e. a cyclic coordinate), the corresponding momentum coordinate p i {\displaystyle p_{i}} is conserved along each trajectory, and that coordinate can be reduced to a constant in the other equations of the set. This effectively reduces the problem from n coordinates to (n − 1) coordinates: this is the basis of symplectic reduction in geometry. In the Lagrangian framework, the conservation of momentum also follows immediately; however, all the generalized velocities q ˙ i {\displaystyle {\dot {q}}_{i}} still occur in the Lagrangian, and a system of equations in n coordinates still has to be solved. The Lagrangian and Hamiltonian approaches provide the groundwork for deeper results in classical mechanics, and suggest analogous formulations in quantum mechanics: the path integral formulation and the Schrödinger equation. == Properties of the Hamiltonian == The value of the Hamiltonian H {\displaystyle {\mathcal {H}}} is the total energy of the system if and only if the energy function E L {\displaystyle E_{\mathcal {L}}} has the same property. (See definition of ⁠ H {\displaystyle {\mathcal {H}}} ⁠). d H d t = ∂ H ∂ t {\displaystyle {\frac {d{\mathcal {H}}}{dt}}={\frac {\partial {\mathcal {H}}}{\partial t}}} when ⁠ p ( t ) {\displaystyle \mathbf {p} (t)} ⁠, ⁠ q ( t ) {\displaystyle \mathbf {q} (t)} ⁠ form a solution of Hamilton's equations.
Indeed, d H d t = ∂ H ∂ p ⋅ p ˙ + ∂ H ∂ q ⋅ q ˙ + ∂ H ∂ t , {\textstyle {\frac {d{\mathcal {H}}}{dt}}={\frac {\partial {\mathcal {H}}}{\partial {\boldsymbol {p}}}}\cdot {\dot {\boldsymbol {p}}}+{\frac {\partial {\mathcal {H}}}{\partial {\boldsymbol {q}}}}\cdot {\dot {\boldsymbol {q}}}+{\frac {\partial {\mathcal {H}}}{\partial t}},} and everything but the final term cancels out. H {\displaystyle {\mathcal {H}}} does not change under point transformations, i.e. smooth changes q ↔ q ′ {\displaystyle {\boldsymbol {q}}\leftrightarrow {\boldsymbol {q'}}} of space coordinates. (Follows from the invariance of the energy function E L {\displaystyle E_{\mathcal {L}}} under point transformations. The invariance of E L {\displaystyle E_{\mathcal {L}}} can be established directly). ∂ H ∂ t = − ∂ L ∂ t . {\displaystyle {\frac {\partial {\mathcal {H}}}{\partial t}}=-{\frac {\partial {\mathcal {L}}}{\partial t}}.} (See § Deriving Hamilton's equations). ⁠ − ∂ H ∂ q i = p ˙ i = ∂ L ∂ q i {\displaystyle -{\frac {\partial {\mathcal {H}}}{\partial q^{i}}}={\dot {p}}_{i}={\frac {\partial {\mathcal {L}}}{\partial q^{i}}}} ⁠. (Compare Hamilton's and Euler-Lagrange equations or see § Deriving Hamilton's equations). ∂ H ∂ q i = 0 {\displaystyle {\frac {\partial {\mathcal {H}}}{\partial q^{i}}}=0} if and only if ⁠ ∂ L ∂ q i = 0 {\displaystyle {\frac {\partial {\mathcal {L}}}{\partial q^{i}}}=0} ⁠.A coordinate for which the last equation holds is called cyclic (or ignorable). Every cyclic coordinate q i {\displaystyle q^{i}} reduces the number of degrees of freedom by ⁠ 1 {\displaystyle 1} ⁠, causes the corresponding momentum p i {\displaystyle p_{i}} to be conserved, and makes Hamilton's equations easier to solve. == Hamiltonian as the total system energy == In its application to a given system, the Hamiltonian is often taken to be H = T + V {\displaystyle {\mathcal {H}}=T+V} where T {\displaystyle T} is the kinetic energy and V {\displaystyle V} is the potential energy. Using this relation can be simpler than first calculating the Lagrangian, and then deriving the Hamiltonian from the Lagrangian. However, the relation is not true for all systems. The relation holds true for nonrelativistic systems when all of the following conditions are satisfied ∂ V ( q , q ˙ , t ) ∂ q ˙ i = 0 , ∀ i {\displaystyle {\frac {\partial V({\boldsymbol {q}},{\boldsymbol {\dot {q}}},t)}{\partial {\dot {q}}_{i}}}=0\;,\quad \forall i} ∂ T ( q , q ˙ , t ) ∂ t = 0 {\displaystyle {\frac {\partial T({\boldsymbol {q}},{\boldsymbol {\dot {q}}},t)}{\partial t}}=0} T ( q , q ˙ ) = ∑ i = 1 n ∑ j = 1 n ( c i j ( q ) q ˙ i q ˙ j ) {\displaystyle T({\boldsymbol {q}},{\boldsymbol {\dot {q}}})=\sum _{i=1}^{n}\sum _{j=1}^{n}{\biggl (}c_{ij}({\boldsymbol {q}}){\dot {q}}_{i}{\dot {q}}_{j}{\biggr )}} where t {\displaystyle t} is time, n {\displaystyle n} is the number of degrees of freedom of the system, and each c i j ( q ) {\displaystyle c_{ij}({\boldsymbol {q}})} is an arbitrary scalar function of q {\displaystyle {\boldsymbol {q}}} . In words, this means that the relation H = T + V {\displaystyle {\mathcal {H}}=T+V} holds true if T {\displaystyle T} does not contain time as an explicit variable (it is scleronomic), V {\displaystyle V} does not contain generalised velocity as an explicit variable, and each term of T {\displaystyle T} is quadratic in generalised velocity. === Proof === Preliminary to this proof, it is important to address an ambiguity in the related mathematical notation. 
While a change of variables can be used to equate L ( p , q , t ) = L ( q , q ˙ , t ) {\displaystyle {\mathcal {L}}({\boldsymbol {p}},{\boldsymbol {q}},t)={\mathcal {L}}({\boldsymbol {q}},{\boldsymbol {\dot {q}}},t)} , it is important to note that ∂ L ( q , q ˙ , t ) ∂ q ˙ i ≠ ∂ L ( p , q , t ) ∂ q ˙ i {\displaystyle {\frac {\partial {\mathcal {L}}({\boldsymbol {q}},{\boldsymbol {\dot {q}}},t)}{\partial {\dot {q}}_{i}}}\neq {\frac {\partial {\mathcal {L}}({\boldsymbol {p}},{\boldsymbol {q}},t)}{\partial {\dot {q}}_{i}}}} . In this case, the right hand side always evaluates to 0. To perform a change of variables inside of a partial derivative, the multivariable chain rule should be used. Hence, to avoid ambiguity, the function arguments of any term inside of a partial derivative should be stated. Additionally, this proof uses the notation f ( a , b , c ) = f ( a , b ) {\displaystyle f(a,b,c)=f(a,b)} to imply that ∂ f ( a , b , c ) ∂ c = 0 {\displaystyle {\frac {\partial f(a,b,c)}{\partial c}}=0} . === Application to systems of point masses === For a system of point masses, the requirement for T {\displaystyle T} to be quadratic in generalised velocity is always satisfied for the case where T ( q , q ˙ , t ) = T ( q , q ˙ ) {\displaystyle T({\boldsymbol {q}},{\boldsymbol {\dot {q}}},t)=T({\boldsymbol {q}},{\boldsymbol {\dot {q}}})} , which is a requirement for H = T + V {\displaystyle {\mathcal {H}}=T+V} anyway. === Conservation of energy === If the conditions for H = T + V {\displaystyle {\mathcal {H}}=T+V} are satisfied, then conservation of the Hamiltonian implies conservation of energy. This requires the additional condition that V {\displaystyle V} does not contain time as an explicit variable. ∂ V ( q , q ˙ , t ) ∂ t = 0 {\displaystyle {\frac {\partial V({\boldsymbol {q}},{\boldsymbol {\dot {q}}},t)}{\partial t}}=0} In summary, the requirements for H = T + V = constant of time {\displaystyle {\mathcal {H}}=T+V={\text{constant of time}}} to be satisfied for a nonrelativistic system are V = V ( q ) {\displaystyle V=V({\boldsymbol {q}})} T = T ( q , q ˙ ) {\displaystyle T=T({\boldsymbol {q}},{\boldsymbol {\dot {q}}})} T {\displaystyle T} is a homogeneous quadratic function in q ˙ {\displaystyle {\boldsymbol {\dot {q}}}} In extensions of the Euler–Lagrange formulation which use dissipation functions (see Lagrangian mechanics § Extensions to include non-conservative forces), e.g. the Rayleigh dissipation function, energy is not conserved when a dissipation function has effect. It is possible to explain the link between this and the former requirements by relating the extended and conventional Euler–Lagrange equations: grouping the extended terms into the potential function produces a velocity-dependent potential. Hence, the requirements are not satisfied when a dissipation function has effect. == Hamiltonian of a charged particle in an electromagnetic field == A sufficient illustration of Hamiltonian mechanics is given by the Hamiltonian of a charged particle in an electromagnetic field.
In Cartesian coordinates the Lagrangian of a non-relativistic classical particle in an electromagnetic field is (in SI Units): L = ∑ i 1 2 m x ˙ i 2 + ∑ i q x ˙ i A i − q φ , {\displaystyle {\mathcal {L}}=\sum _{i}{\tfrac {1}{2}}m{\dot {x}}_{i}^{2}+\sum _{i}q{\dot {x}}_{i}A_{i}-q\varphi ,} where q is the electric charge of the particle, φ is the electric scalar potential, and the Ai are the components of the magnetic vector potential that may all explicitly depend on x i {\displaystyle x_{i}} and ⁠ t {\displaystyle t} ⁠. This Lagrangian, combined with Euler–Lagrange equation, produces the Lorentz force law m x ¨ = q E + q x ˙ × B , {\displaystyle m{\ddot {\mathbf {x} }}=q\mathbf {E} +q{\dot {\mathbf {x} }}\times \mathbf {B} \,,} and is called minimal coupling. The canonical momenta are given by: p i = ∂ L ∂ x ˙ i = m x ˙ i + q A i . {\displaystyle p_{i}={\frac {\partial {\mathcal {L}}}{\partial {\dot {x}}_{i}}}=m{\dot {x}}_{i}+qA_{i}.} The Hamiltonian, as the Legendre transformation of the Lagrangian, is therefore: H = ∑ i x ˙ i p i − L = ∑ i ( p i − q A i ) 2 2 m + q φ . {\displaystyle {\mathcal {H}}=\sum _{i}{\dot {x}}_{i}p_{i}-{\mathcal {L}}=\sum _{i}{\frac {\left(p_{i}-qA_{i}\right)^{2}}{2m}}+q\varphi .} This equation is used frequently in quantum mechanics. Under gauge transformation: A → A + ∇ f , φ → φ − f ˙ , {\displaystyle \mathbf {A} \rightarrow \mathbf {A} +\nabla f\,,\quad \varphi \rightarrow \varphi -{\dot {f}}\,,} where f(r, t) is any scalar function of space and time. The aforementioned Lagrangian, the canonical momenta, and the Hamiltonian transform like: L → L ′ = L + q d f d t , p → p ′ = p + q ∇ f , H → H ′ = H − q ∂ f ∂ t , {\displaystyle L\rightarrow L'=L+q{\frac {df}{dt}}\,,\quad \mathbf {p} \rightarrow \mathbf {p'} =\mathbf {p} +q\nabla f\,,\quad H\rightarrow H'=H-q{\frac {\partial f}{\partial t}}\,,} which still produces the same Hamilton's equation: ∂ H ′ ∂ x i | p i ′ = ∂ ∂ x i | p i ′ ( x ˙ i p i ′ − L ′ ) = − ∂ L ′ ∂ x i | p i ′ = − ∂ L ∂ x i | p i ′ − q ∂ ∂ x i | p i ′ d f d t = − d d t ( ∂ L ∂ x ˙ i | p i ′ + q ∂ f ∂ x i | p i ′ ) = − p ˙ i ′ {\displaystyle {\begin{aligned}\left.{\frac {\partial H'}{\partial {x_{i}}}}\right|_{p'_{i}}&=\left.{\frac {\partial }{\partial {x_{i}}}}\right|_{p'_{i}}({\dot {x}}_{i}p'_{i}-L')=-\left.{\frac {\partial L'}{\partial {x_{i}}}}\right|_{p'_{i}}\\&=-\left.{\frac {\partial L}{\partial {x_{i}}}}\right|_{p'_{i}}-q\left.{\frac {\partial }{\partial {x_{i}}}}\right|_{p'_{i}}{\frac {df}{dt}}\\&=-{\frac {d}{dt}}\left(\left.{\frac {\partial L}{\partial {{\dot {x}}_{i}}}}\right|_{p'_{i}}+q\left.{\frac {\partial f}{\partial {x_{i}}}}\right|_{p'_{i}}\right)\\&=-{\dot {p}}'_{i}\end{aligned}}} In quantum mechanics, the wave function will also undergo a local U(1) group transformation during the Gauge Transformation, which implies that all physical results must be invariant under local U(1) transformations. 
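As a concrete numerical illustration of this Hamiltonian, the sketch below integrates Hamilton's equations for a particle in a uniform magnetic field. It is a minimal Python example rather than anything from the article's sources: the symmetric gauge A = (−By/2, Bx/2, 0) with φ = 0 is assumed, and the parameter values are arbitrary.

```python
import numpy as np

# Minimal sketch (illustrative values): a nonrelativistic particle of charge
# q_c in a uniform field B = B e_z, using the symmetric gauge
# A = (-B y/2, B x/2, 0) and phi = 0, so H(x, p) = |p - q_c A(x)|^2 / (2 m).
m, q_c, B = 1.0, 1.0, 2.0

def A(x):
    return np.array([-0.5 * B * x[1], 0.5 * B * x[0], 0.0])

def hamilton_rhs(x, p):
    v = (p - q_c * A(x)) / m                                # dx/dt = dH/dp
    return v, q_c * 0.5 * B * np.array([v[1], -v[0], 0.0])  # dp/dt = -dH/dx

def rk4_step(x, p, dt):
    k1x, k1p = hamilton_rhs(x, p)
    k2x, k2p = hamilton_rhs(x + 0.5 * dt * k1x, p + 0.5 * dt * k1p)
    k3x, k3p = hamilton_rhs(x + 0.5 * dt * k2x, p + 0.5 * dt * k2p)
    k4x, k4p = hamilton_rhs(x + dt * k3x, p + dt * k3p)
    return (x + dt * (k1x + 2 * k2x + 2 * k3x + k4x) / 6,
            p + dt * (k1p + 2 * k2p + 2 * k3p + k4p) / 6)

x = np.array([1.0, 0.0, 0.0])
p = m * np.array([0.0, 1.0, 0.0]) + q_c * A(x)  # canonical p = m v + q_c A
for _ in range(10_000):
    x, p = rk4_step(x, p, 1e-3)

# The magnetic force does no work, so the speed (and H, since phi = 0) is
# conserved while the orbit circles at the cyclotron frequency q_c B / m.
print(np.linalg.norm((p - q_c * A(x)) / m))     # remains ~1.0
```

Note that the canonical momentum p, not the kinetic momentum mv, is the variable that Hamilton's equations evolve; the gauge-dependent term q_cA must be subtracted before reading off the velocity.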
=== Relativistic charged particle in an electromagnetic field === The relativistic Lagrangian for a particle (rest mass m {\displaystyle m} and charge ⁠ q {\displaystyle q} ⁠) is given by: L ( t ) = − m c 2 1 − x ˙ ( t ) 2 c 2 + q x ˙ ( t ) ⋅ A ( x ( t ) , t ) − q φ ( x ( t ) , t ) {\displaystyle {\mathcal {L}}(t)=-mc^{2}{\sqrt {1-{\frac {{{\dot {\mathbf {x} }}(t)}^{2}}{c^{2}}}}}+q{\dot {\mathbf {x} }}(t)\cdot \mathbf {A} \left(\mathbf {x} (t),t\right)-q\varphi \left(\mathbf {x} (t),t\right)} Thus the particle's canonical momentum is p ( t ) = ∂ L ∂ x ˙ = m x ˙ 1 − x ˙ 2 c 2 + q A {\displaystyle \mathbf {p} (t)={\frac {\partial {\mathcal {L}}}{\partial {\dot {\mathbf {x} }}}}={\frac {m{\dot {\mathbf {x} }}}{\sqrt {1-{\frac {{\dot {\mathbf {x} }}^{2}}{c^{2}}}}}}+q\mathbf {A} } that is, the sum of the kinetic momentum and the potential momentum. Solving for the velocity, we get x ˙ ( t ) = p − q A m 2 + 1 c 2 ( p − q A ) 2 {\displaystyle {\dot {\mathbf {x} }}(t)={\frac {\mathbf {p} -q\mathbf {A} }{\sqrt {m^{2}+{\frac {1}{c^{2}}}{\left(\mathbf {p} -q\mathbf {A} \right)}^{2}}}}} So the Hamiltonian is H ( t ) = x ˙ ⋅ p − L = c m 2 c 2 + ( p − q A ) 2 + q φ {\displaystyle {\mathcal {H}}(t)={\dot {\mathbf {x} }}\cdot \mathbf {p} -{\mathcal {L}}=c{\sqrt {m^{2}c^{2}+{\left(\mathbf {p} -q\mathbf {A} \right)}^{2}}}+q\varphi } This results in the force equation (equivalent to the Euler–Lagrange equation) p ˙ = − ∂ H ∂ x = q x ˙ ⋅ ( ∇ A ) − q ∇ φ = q ∇ ( x ˙ ⋅ A ) − q ∇ φ {\displaystyle {\dot {\mathbf {p} }}=-{\frac {\partial {\mathcal {H}}}{\partial \mathbf {x} }}=q{\dot {\mathbf {x} }}\cdot ({\boldsymbol {\nabla }}\mathbf {A} )-q{\boldsymbol {\nabla }}\varphi =q{\boldsymbol {\nabla }}({\dot {\mathbf {x} }}\cdot \mathbf {A} )-q{\boldsymbol {\nabla }}\varphi } from which one can derive d d t ( m x ˙ 1 − x ˙ 2 c 2 ) = d d t ( p − q A ) = p ˙ − q ∂ A ∂ t − q ( x ˙ ⋅ ∇ ) A = q ∇ ( x ˙ ⋅ A ) − q ∇ φ − q ∂ A ∂ t − q ( x ˙ ⋅ ∇ ) A = q E + q x ˙ × B {\displaystyle {\begin{aligned}{\frac {\mathrm {d} }{\mathrm {d} t}}\left({\frac {m{\dot {\mathbf {x} }}}{\sqrt {1-{\frac {{\dot {\mathbf {x} }}^{2}}{c^{2}}}}}}\right)&={\frac {\mathrm {d} }{\mathrm {d} t}}(\mathbf {p} -q\mathbf {A} )={\dot {\mathbf {p} }}-q{\frac {\partial \mathbf {A} }{\partial t}}-q({\dot {\mathbf {x} }}\cdot \nabla )\mathbf {A} \\&=q{\boldsymbol {\nabla }}({\dot {\mathbf {x} }}\cdot \mathbf {A} )-q{\boldsymbol {\nabla }}\varphi -q{\frac {\partial \mathbf {A} }{\partial t}}-q({\dot {\mathbf {x} }}\cdot \nabla )\mathbf {A} \\&=q\mathbf {E} +q{\dot {\mathbf {x} }}\times \mathbf {B} \end{aligned}}} The above derivation makes use of the vector calculus identity: 1 2 ∇ ( A ⋅ A ) = A ⋅ J A = A ⋅ ( ∇ A ) = ( A ⋅ ∇ ) A + A × ( ∇ × A ) . 
{\displaystyle {\tfrac {1}{2}}\nabla \left(\mathbf {A} \cdot \mathbf {A} \right)=\mathbf {A} \cdot \mathbf {J} _{\mathbf {A} }=\mathbf {A} \cdot (\nabla \mathbf {A} )=(\mathbf {A} \cdot \nabla )\mathbf {A} +\mathbf {A} \times (\nabla \times \mathbf {A} ).} An equivalent expression for the Hamiltonian as function of the relativistic (kinetic) momentum, ⁠ P = γ m x ˙ ( t ) = p − q A {\displaystyle \mathbf {P} =\gamma m{\dot {\mathbf {x} }}(t)=\mathbf {p} -q\mathbf {A} } ⁠, is H ( t ) = x ˙ ( t ) ⋅ P ( t ) + m c 2 γ + q φ ( x ( t ) , t ) = γ m c 2 + q φ ( x ( t ) , t ) = E + V {\displaystyle {\mathcal {H}}(t)={\dot {\mathbf {x} }}(t)\cdot \mathbf {P} (t)+{\frac {mc^{2}}{\gamma }}+q\varphi (\mathbf {x} (t),t)=\gamma mc^{2}+q\varphi (\mathbf {x} (t),t)=E+V} This has the advantage that kinetic momentum P {\displaystyle \mathbf {P} } can be measured experimentally whereas canonical momentum p {\displaystyle \mathbf {p} } cannot. Notice that the Hamiltonian (total energy) can be viewed as the sum of the relativistic energy (kinetic+rest), ⁠ E = γ m c 2 {\displaystyle E=\gamma mc^{2}} ⁠, plus the potential energy, ⁠ V = q φ {\displaystyle V=q\varphi } ⁠. == From symplectic geometry to Hamilton's equations == === Geometry of Hamiltonian systems === The Hamiltonian can induce a symplectic structure on a smooth even-dimensional manifold M2n in several equivalent ways, the best known being the following: As a closed nondegenerate symplectic 2-form ω. According to Darboux's theorem, in a small neighbourhood around any point on M there exist suitable local coordinates p 1 , ⋯ , p n , q 1 , ⋯ , q n {\displaystyle p_{1},\cdots ,p_{n},\ q_{1},\cdots ,q_{n}} (canonical or symplectic coordinates) in which the symplectic form becomes: ω = ∑ i = 1 n d p i ∧ d q i . {\displaystyle \omega =\sum _{i=1}^{n}dp_{i}\wedge dq_{i}\,.} The form ω {\displaystyle \omega } induces a natural isomorphism of the tangent space with the cotangent space: ⁠ T x M ≅ T x ∗ M {\displaystyle T_{x}M\cong T_{x}^{*}M} ⁠. This is done by mapping a vector ξ ∈ T x M {\displaystyle \xi \in T_{x}M} to the 1-form ⁠ ω ξ ∈ T x ∗ M {\displaystyle \omega _{\xi }\in T_{x}^{*}M} ⁠, where ω ξ ( η ) = ω ( η , ξ ) {\displaystyle \omega _{\xi }(\eta )=\omega (\eta ,\xi )} for all ⁠ η ∈ T x M {\displaystyle \eta \in T_{x}M} ⁠. Due to the bilinearity and non-degeneracy of ⁠ ω {\displaystyle \omega } ⁠, and the fact that ⁠ dim ⁡ T x M = dim ⁡ T x ∗ M {\displaystyle \dim T_{x}M=\dim T_{x}^{*}M} ⁠, the mapping ξ → ω ξ {\displaystyle \xi \to \omega _{\xi }} is indeed a linear isomorphism. This isomorphism is natural in that it does not change with change of coordinates on M . {\displaystyle M.} Repeating over all ⁠ x ∈ M {\displaystyle x\in M} ⁠, we end up with an isomorphism J − 1 : Vect ( M ) → Ω 1 ( M ) {\displaystyle J^{-1}:{\text{Vect}}(M)\to \Omega ^{1}(M)} between the infinite-dimensional space of smooth vector fields and that of smooth 1-forms. For every f , g ∈ C ∞ ( M , R ) {\displaystyle f,g\in C^{\infty }(M,\mathbb {R} )} and ⁠ ξ , η ∈ Vect ( M ) {\displaystyle \xi ,\eta \in {\text{Vect}}(M)} ⁠, J − 1 ( f ξ + g η ) = f J − 1 ( ξ ) + g J − 1 ( η ) . {\displaystyle J^{-1}(f\xi +g\eta )=fJ^{-1}(\xi )+gJ^{-1}(\eta ).} (In algebraic terms, one would say that the C ∞ ( M , R ) {\displaystyle C^{\infty }(M,\mathbb {R} )} -modules Vect ( M ) {\displaystyle {\text{Vect}}(M)} and Ω 1 ( M ) {\displaystyle \Omega ^{1}(M)} are isomorphic). 
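In the canonical coordinates guaranteed by Darboux's theorem, the isomorphism J can be represented by a constant 2n × 2n matrix. The following Python sketch is a minimal illustration under one common sign convention (conventions differ between texts); the harmonic-oscillator Hamiltonian used to exercise it is purely illustrative.

```python
import numpy as np

# Minimal sketch: in Darboux coordinates z = (q_1..q_n, p_1..p_n), the map J
# sending the 1-form dH to the Hamiltonian vector field is a constant matrix.
# The sign convention below makes z_dot = J @ grad(H) give q_dot = dH/dp and
# p_dot = -dH/dq; other texts absorb the sign into omega instead.
def J_matrix(n):
    I, Z = np.eye(n), np.zeros((n, n))
    return np.block([[Z, I], [-I, Z]])

# Illustrative Hamiltonian: harmonic oscillator H = (q^2 + p^2)/2, n = 1.
def grad_H(z):
    q, p = z
    return np.array([q, p])          # (dH/dq, dH/dp)

z = np.array([1.0, 0.0])             # (q, p)
print(J_matrix(1) @ grad_H(z))       # Hamiltonian vector field: [0., -1.]
# J squares to minus the identity, reflecting the nondegeneracy of omega:
print(J_matrix(1) @ J_matrix(1))     # [[-1., 0.], [0., -1.]]
```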
If ⁠ H ∈ C ∞ ( M × R t , R ) {\displaystyle H\in C^{\infty }(M\times \mathbb {R} _{t},\mathbb {R} )} ⁠, then, for every fixed ⁠ t ∈ R t {\displaystyle t\in \mathbb {R} _{t}} ⁠, ⁠ d H ∈ Ω 1 ( M ) {\displaystyle dH\in \Omega ^{1}(M)} ⁠, and ⁠ J ( d H ) ∈ Vect ( M ) {\displaystyle J(dH)\in {\text{Vect}}(M)} ⁠. J ( d H ) {\displaystyle J(dH)} is known as a Hamiltonian vector field. The respective differential equation on M {\displaystyle M} x ˙ = J ( d H ) ( x ) {\displaystyle {\dot {x}}=J(dH)(x)} is called Hamilton's equation. Here x = x ( t ) {\displaystyle x=x(t)} and J ( d H ) ( x ) ∈ T x M {\displaystyle J(dH)(x)\in T_{x}M} is the (time-dependent) value of the vector field J ( d H ) {\displaystyle J(dH)} at ⁠ x ∈ M {\displaystyle x\in M} ⁠. A Hamiltonian system may be understood as a fiber bundle E over time R, with the fiber Et being the position space at time t ∈ R. The Lagrangian is thus a function on the jet bundle J over E; taking the fiberwise Legendre transform of the Lagrangian produces a function on the dual bundle over time whose fiber at t is the cotangent space T∗Et, which comes equipped with a natural symplectic form, and this latter function is the Hamiltonian. The correspondence between Lagrangian and Hamiltonian mechanics is achieved with the tautological one-form. Any smooth real-valued function H on a symplectic manifold can be used to define a Hamiltonian system. The function H is known as "the Hamiltonian" or "the energy function." The symplectic manifold is then called the phase space. The Hamiltonian induces a special vector field on the symplectic manifold, known as the Hamiltonian vector field. The Hamiltonian vector field induces a Hamiltonian flow on the manifold. This is a one-parameter family of transformations of the manifold (the parameter of the curves is commonly called "the time"); in other words, an isotopy of symplectomorphisms, starting with the identity. By Liouville's theorem, each symplectomorphism preserves the volume form on the phase space. The collection of symplectomorphisms induced by the Hamiltonian flow is commonly called "the Hamiltonian mechanics" of the Hamiltonian system. The symplectic structure induces a Poisson bracket. The Poisson bracket gives the space of functions on the manifold the structure of a Lie algebra. If F and G are smooth functions on M then the smooth function ω(J(dF), J(dG)) is properly defined; it is called a Poisson bracket of functions F and G and is denoted {F, G}. The Poisson bracket has the following properties: bilinearity antisymmetry Leibniz rule: { F 1 ⋅ F 2 , G } = F 1 { F 2 , G } + F 2 { F 1 , G } {\displaystyle \{F_{1}\cdot F_{2},G\}=F_{1}\{F_{2},G\}+F_{2}\{F_{1},G\}} Jacobi identity: { { H , F } , G } + { { F , G } , H } + { { G , H } , F } ≡ 0 {\displaystyle \{\{H,F\},G\}+\{\{F,G\},H\}+\{\{G,H\},F\}\equiv 0} non-degeneracy: if the point x on M is not critical for F then a smooth function G exists such that ⁠ { F , G } ( x ) ≠ 0 {\displaystyle \{F,G\}(x)\neq 0} ⁠. 
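The canonical Poisson bracket and the properties just listed can be checked symbolically. The sketch below, assuming one degree of freedom and the coordinate expression {F, G} = ∂F/∂q ∂G/∂p − ∂F/∂p ∂G/∂q, spot-checks antisymmetry, the Leibniz rule and the Jacobi identity on arbitrary test functions.

```python
import sympy as sp

q, p = sp.symbols('q p')

def poisson(F, G):
    # Canonical Poisson bracket for one degree of freedom:
    # {F, G} = dF/dq * dG/dp - dF/dp * dG/dq
    return sp.diff(F, q) * sp.diff(G, p) - sp.diff(F, p) * sp.diff(G, q)

F, G, K = q**2 * p, sp.sin(q), p**3 + q   # arbitrary smooth test functions

# Antisymmetry: {F, G} + {G, F} = 0
assert sp.simplify(poisson(F, G) + poisson(G, F)) == 0
# Leibniz rule: {F*G, K} = F*{G, K} + G*{F, K}
assert sp.simplify(poisson(F * G, K) - F * poisson(G, K) - G * poisson(F, K)) == 0
# Jacobi identity: {{F, G}, K} + {{G, K}, F} + {{K, F}, G} = 0
assert sp.simplify(poisson(poisson(F, G), K) + poisson(poisson(G, K), F)
                   + poisson(poisson(K, F), G)) == 0
```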
Given a function f d d t f = ∂ ∂ t f + { f , H } , {\displaystyle {\frac {\mathrm {d} }{\mathrm {d} t}}f={\frac {\partial }{\partial t}}f+\left\{f,{\mathcal {H}}\right\},} if there is a probability distribution ρ, then (since the phase space velocity ( p ˙ i , q ˙ i ) {\displaystyle ({\dot {p}}_{i},{\dot {q}}_{i})} has zero divergence and probability is conserved) its convective derivative can be shown to be zero and so ∂ ∂ t ρ = − { ρ , H } {\displaystyle {\frac {\partial }{\partial t}}\rho =-\left\{\rho ,{\mathcal {H}}\right\}} This is called Liouville's theorem. Every smooth function G over the symplectic manifold generates a one-parameter family of symplectomorphisms and if {G, H} = 0, then G is conserved and the symplectomorphisms are symmetry transformations. A Hamiltonian may have multiple conserved quantities Gi. If the symplectic manifold has dimension 2n and there are n functionally independent conserved quantities Gi which are in involution (i.e., {Gi, Gj} = 0), then the Hamiltonian is Liouville integrable. The Liouville–Arnold theorem says that, locally, any Liouville integrable Hamiltonian can be transformed via a symplectomorphism into a new Hamiltonian with the conserved quantities Gi as coordinates; the new coordinates are called action–angle coordinates. The transformed Hamiltonian depends only on the Gi, and hence the equations of motion have the simple form G ˙ i = 0 , φ ˙ i = F i ( G ) {\displaystyle {\dot {G}}_{i}=0\quad ,\quad {\dot {\varphi }}_{i}=F_{i}(G)} for some function F. There is an entire field focusing on small deviations from integrable systems governed by the KAM theorem. The integrability of Hamiltonian vector fields is an open question. In general, Hamiltonian systems are chaotic; concepts of measure, completeness, integrability and stability are poorly defined. === Riemannian manifolds === An important special case consists of those Hamiltonians that are quadratic forms, that is, Hamiltonians that can be written as H ( q , p ) = 1 2 ⟨ p , p ⟩ q {\displaystyle {\mathcal {H}}(q,p)={\tfrac {1}{2}}\langle p,p\rangle _{q}} where ⟨ , ⟩q is a smoothly varying inner product on the fibers T∗qQ, the cotangent space to the point q in the configuration space, sometimes called a cometric. This Hamiltonian consists entirely of the kinetic term. If one considers a Riemannian manifold or a pseudo-Riemannian manifold, the Riemannian metric induces a linear isomorphism between the tangent and cotangent bundles. (See Musical isomorphism). Using this isomorphism, one can define a cometric. (In coordinates, the matrix defining the cometric is the inverse of the matrix defining the metric.) The solutions to the Hamilton–Jacobi equations for this Hamiltonian are then the same as the geodesics on the manifold. In particular, the Hamiltonian flow in this case is the same thing as the geodesic flow. The existence of such solutions, and the completeness of the set of solutions, are discussed in detail in the article on geodesics. See also Geodesics as Hamiltonian flows. === Sub-Riemannian manifolds === When the cometric is degenerate, then it is not invertible. In this case, one does not have a Riemannian manifold, as one does not have a metric. However, the Hamiltonian still exists. In the case where the cometric is degenerate at every point q of the configuration space manifold Q, so that the rank of the cometric is less than the dimension of the manifold Q, one has a sub-Riemannian manifold. The Hamiltonian in this case is known as a sub-Riemannian Hamiltonian. 
Every such Hamiltonian uniquely determines the cometric, and vice versa. This implies that every sub-Riemannian manifold is uniquely determined by its sub-Riemannian Hamiltonian, and that the converse is true: every sub-Riemannian manifold has a unique sub-Riemannian Hamiltonian. The existence of sub-Riemannian geodesics is given by the Chow–Rashevskii theorem. The continuous, real-valued Heisenberg group provides a simple example of a sub-Riemannian manifold. For the Heisenberg group, the Hamiltonian is given by H ( x , y , z , p x , p y , p z ) = 1 2 ( p x 2 + p y 2 ) . {\displaystyle {\mathcal {H}}\left(x,y,z,p_{x},p_{y},p_{z}\right)={\tfrac {1}{2}}\left(p_{x}^{2}+p_{y}^{2}\right).} pz is not involved in the Hamiltonian. === Poisson algebras === Hamiltonian systems can be generalized in various ways. Instead of simply looking at the algebra of smooth functions over a symplectic manifold, Hamiltonian mechanics can be formulated on general commutative unital real Poisson algebras. A state is a continuous linear functional on the Poisson algebra (equipped with some suitable topology) such that for any element A of the algebra, A2 maps to a nonnegative real number. A further generalization is given by Nambu dynamics. === Generalization to quantum mechanics through Poisson bracket === Hamilton's equations above work well for classical mechanics, but not for quantum mechanics, since the differential equations discussed assume that one can specify the exact position and momentum of the particle simultaneously at any point in time. However, the equations can be further generalized to then be extended to apply to quantum mechanics as well as to classical mechanics, through the deformation of the Poisson algebra over p and q to the algebra of Moyal brackets. Specifically, the more general form of the Hamilton's equation reads d f d t = { f , H } + ∂ f ∂ t , {\displaystyle {\frac {\mathrm {d} f}{\mathrm {d} t}}=\left\{f,{\mathcal {H}}\right\}+{\frac {\partial f}{\partial t}},} where f is some function of p and q, and H is the Hamiltonian. To find out the rules for evaluating a Poisson bracket without resorting to differential equations, see Lie algebra; a Poisson bracket is the name for the Lie bracket in a Poisson algebra. These Poisson brackets can then be extended to Moyal brackets comporting to an inequivalent Lie algebra, as proven by Hilbrand J. Groenewold, and thereby describe quantum mechanical diffusion in phase space (See Phase space formulation and Wigner–Weyl transform). This more algebraic approach not only permits ultimately extending probability distributions in phase space to Wigner quasi-probability distributions, but, at the mere Poisson bracket classical setting, also provides more power in helping analyze the relevant conserved quantities in a system. == See also == == References == == Further reading == == External links == Binney, James J., Classical Mechanics (lecture notes) (PDF), University of Oxford, retrieved 27 October 2010 Tong, David, Classical Dynamics (Cambridge lecture notes), University of Cambridge, retrieved 27 October 2010 Hamilton, William Rowan, On a General Method in Dynamics, Trinity College Dublin Malham, Simon J.A. (2016), An introduction to Lagrangian and Hamiltonian mechanics (lecture notes) (PDF) Morin, David (2008), Introduction to Classical Mechanics (Additional material: The Hamiltonian method) (PDF)
Wikipedia/Hamilton's_equations
The imperial and US customary measurement systems are both derived from an earlier English system of measurement which in turn can be traced back to Ancient Roman units of measurement, and Carolingian and Saxon units of measure. The US Customary system of units was developed and used in the United States after the American Revolution, based on a subset of the English units used in the Thirteen Colonies; it is the predominant system of units in the United States and in U.S. territories (except for Puerto Rico and Guam, where the metric system, which was introduced when both territories were Spanish colonies, is also officially used and is predominant). The imperial system of units was developed and used in the United Kingdom and its empire beginning in 1824. The metric system has, to varying degrees, replaced the imperial system in the countries that once used it. Most of the units of measure have been adapted in one way or another since the Norman Conquest (1066). The units of linear measure have changed the least – the yard (which replaced the ell) and the chain were measures derived in England. The foot used by craftsmen supplanted the longer foot used in agriculture. The agricultural foot was reduced to 10⁄11 of its former size, causing the rod, pole or perch to become 16+1⁄2 (rather than the older 15) agricultural feet. The furlong and the acre, once it became a measure of the size of a piece of land rather than its value, remained relatively unchanged. In the last thousand years, three principal pounds were used in England. The troy pound (5760 grains) was used for precious metals, the apothecaries' pound, (also 5760 grains) was used by pharmacists and the avoirdupois pound (7000 grains) was used for general purposes. The apothecaries and troy pounds are divided into 12 ounces (of 480 grains) while the avoirdupois pound has 16 ounces (of 437.5 grains). The unit of volume, the gallon, has different values in the United States and in the United Kingdom, with the US gallon being 83.26742% of the imperial gallon: the US gallon is based on the wine gallon used in England prior to 1826. There was a US dry gallon, which was 96.8939% of an imperial gallon (and exactly ⁠1+15121/92400⁠ of a US gallon), but this is no longer used and is no longer listed in the relevant statute. After the United States Declaration of Independence the units of measurement in the United States developed into what is now known as customary units. The United Kingdom overhauled its system of measurement in 1826, when it introduced the imperial system of units. This resulted in the two countries having different gallons. Later in the century, efforts were made to align the definition of the pound and the yard in the two countries by using copies of the standards adopted by the British Parliament in 1855. However, these standards were of poor quality compared with those produced for the Convention of the Metre. In 1960, the two countries agreed to common definitions of the yard and the pound based on definitions of the metre and the kilogram. This change, which amounted to a few parts per million, had little effect in the United Kingdom, but resulted in the United States having two slightly different systems of linear measure, the international system and the surveyors system, until the latter was deprecated in 2023. == English units of measure == English units of measure were derived from a combination of Roman, Carolingian and Saxon units of measure. 
They were a precursor to both the imperial system of units (first defined in 1824, to take effect in 1826) and United States customary units, which evolved from English units from 1776 onwards. The earliest records of English units of measure involve the weight (and therefore the value) of Saxon coins. The penny introduced by Offa was about 20 grains (1.296 g). Edward the Elder increased the weight of the English penny to 26 grains (1.685 g), thereby aligning it with the penny of Charlemagne. By the time of the Norman Conquest (1066), it had decreased to 24 grains (1.555 g). This value was subsequently called the pennyweight and formed the basis of the Troy units of weight—the troy ounce used to this day for weighing precious metals.: 44–48  Edward I (1272–1307) broke the link between a coin's value and its weight when he debased the English coinage by introducing a groat (four pence) which weighed 89 grains rather than the expected 96 grains. The groat was further devalued in the 1350s when its weight was reduced to 72 grains. During Saxon times land was measured both in terms of its economic value and in terms of its size. The Domesday Book used the hide, an economic unit of measure. In other references the furlong and the rood appear to be units related to ploughing procedures. Of particular interest was the rod, which was 15 North German feet in length, the North German foot being equivalent to 335 mm (13.2 inches).: 50  Craftsmen, on the other hand, used a shorter Roman foot. Standardization of weights and measures was a recurring issue for monarchs. In 965 AD, King Edgar decreed "that only one weight and one measure should pass throughout the King's dominion". In 1197 Richard I decreed that the measures of corn and pulse, and of wine and ale should be the same throughout all England. Magna Carta, signed by King John in 1215, extended this to include cloth. Some time between 1266 and 1303 the weights and measures of England were radically revised by a law known as the Composition of Yards and Perches (Compositio ulnarum et perticarum), often known as the Compositio for short. This law, attributed to either Henry III or his successor Edward I, instituted a new foot that was exactly 10⁄11 the length of the old foot, with corresponding reductions in the size of the yard, ell, inch, and barleycorn. (Furlongs remained the same, but the rod changed from 15 old feet to 16+1⁄2 new feet.) In 1324 Edward II systematized units of length by defining the inch as 3 barleycorns, the foot as 12 inches, the yard as 3 feet, the perch as 5+1⁄2 yards, and the acre as an area 4 by 40 perches. Apart from the ell (45 inches or 114.3 cm, which continued to be used in the cloth trade) and the chain (introduced by Edmund Gunter in 1620, and used in land surveying), these units formed the basis of the units of length of the English system of measurement. The units were, however, redefined many times – during Henry VIII's time standard yards and ells made of brass were manufactured, during Elizabeth I's time these were replaced with standards made of bronze, and in 1742, after scientific comparisons showed a variation of up to 0.2% from the mean, a definitive standard yard was proposed (but not manufactured).: 122–123  During the medieval era, agricultural products other than wool were mostly sold by volume, with various bushels and gallons being introduced over the years for different commodities.
In the early fourteenth century the wool trade traditionally used the avoirdupois system of weights, a usage that was formalized by Edward III in 1340. At the same time, the stone, when used to weigh wool, was formalized as being 14 pounds.: 91–94  During the Tudor period, numerous reforms were made to English weights and measures. In 1496 Henry VII ordered that reference copies of the yard, pound and gallon should be made of brass and distributed to specified towns and cities throughout the kingdom.: 36  Many weights and measures that had crept into use were banned: in 1527 Henry VIII banned the Tower pound (5400 grains against the 5760 grains of the apothecaries and troy pounds): 105  and in 1592 Elizabeth I ordered the use of the "statute mile" (5280 feet against the 5000 feet of the London or Old English mile).: 123  Under the Act of Union of 1707, Scotland, which had developed its own system of weights and measures independently of England, abandoned it in favour of English weights and measures.: 90–91  The Acts of Union 1800, which united Ireland with Great Britain, had less of an effect on weights and measures—Irish weights and measures having been based on the English foot and pound avoirdupois since 1351, though the Irish acre and mile were based on a perch of 7 yards, not 5+1⁄2 yards as in England.: 116  By the early nineteenth century many commodities had their own sets of units: the wool and cloth industries had units of measure specific to those commodities, albeit derived from the pound avoirdupois or the foot, while wine and beer used units with the same names but different sizes – the wine gallon being 231 cubic inches and the beer or ale gallon being 282 cubic inches. Agricultural produce was sold by the bushel which was based on yet another gallon – the dry gallon of 268.8 cubic inches. Even though not explicitly permitted by statute, many markets used bushels based on weight rather than volume when selling wheat and barley.: 85–88  == Imperial units == The British Weights and Measures Act 1824 repealed all existing British weights and measures legislation, some dating back to the 1300s, and redefined existing units of measure. In particular, a new standard yard and troy pound were manufactured as the standards for length and weight respectively. A new measure, the imperial gallon, which replaced the many gallons in use, was defined as being the volume of 10 pounds of water at 62 °F (17 °C) which, after the authorized experiments, was found to be 277.274 cubic inches. The bushel, which, like the gallon, had definitions reflecting the various gallons, was defined as 8 imperial gallons. The Weights and Measures Act 1824 also introduced some changes to the administration of the standards of weights and measures: previously Parliament had been given the custody of the standards but the act passed this responsibility on to the Exchequer. The act also set up an inspectorate for weights and measures. The standard yard and pound were lost in 1834 when a fire partially destroyed the Palace of Westminster. Following a report published in 1841 by a commission, a new standard yard and pound were manufactured using the best available secondary sources. Unlike the previous standard, the new pound standard was a pound avoirdupois. They were accepted by an act of Parliament as the standards for length and weight in 1855.
Following the debacle over the different gallons that had been adopted by the United States and the United Kingdom thirty years earlier, one of the copies of the standard yard was offered to and accepted by the United States Government. The Weights and Measures Act 1835 tidied up a number of shortcomings in the 1824 Act. In response to representations from traders, the stone and the hundredweight were formally defined as being 14 pounds and 112 pounds respectively, and the experiment of defining a "heaped" measure as outlined in the 1824 Act was abandoned. Not all trades followed the use of the 14 lb stone—Britten, in 1880 for example, catalogued a number of different values of the stone in various British towns and cities ranging from 4 lb to 26 lb. The 1835 Act also restricted the use of Troy measure to precious metals and required that coal be sold by weight and not by volume. The Weights and Measures Act 1878 overhauled the inspection regime of weights and measures used in trade. The act also reaffirmed the use of the brass standard yard and platinum standard pound as the standards for use in the United Kingdom, reaffirmed the use of apothecaries' measures in the pharmaceutical industry, reaffirmed the 1824 definition of the gallon, removed the Troy pound from the list of legal units of measure, added the fathom to the list of legal units and fixed the ratio of metric to imperial units at one metre being equal to 39.3708 inches and one kilogram being equal to 15432.3487 grains (1 lb = 0.453592654 kg). Subsequent to the passing of the act, the volume of the gallon, which had been defined as being the volume of 10 lb of distilled water at 62 °F (17 °C), was remeasured and set at 277.42 cubic inches, though HM Customs and Excise continued to use the 1824 definition for excise purposes. The Weights and Measures Act 1878 effectively prohibited the use of metric weights for trade, the United Kingdom having declined to sign the Convention of the Metre three years previously. The standard imperial yard was not stable – in 1947 its rate of shrinkage was quantified and found to be one part per million every 23 years.: 154  In April 1884 HJ Chaney, Warden of Standards in London, unofficially contacted the BIPM (custodians of the standard metre) inquiring whether the BIPM would calibrate some metre standards that had been manufactured in the United Kingdom. Broch, director of the BIPM, replied that he was not authorised to perform any such calibrations for non-member states. On 17 September 1884, the British Government signed the convention on behalf of the United Kingdom. The Weights and Measures Act 1897 authorized the use of metric units for trade, with a list of metric to imperial equivalents published the following year. Under the Weights and Measures Act 1824, custody of the standard yard and pound and custody of the administration of weights and measures was entrusted to the Exchequer, but verification was administered locally. The Weights and Measures Act 1835 formally described the office and duties of Inspectors of Weights and Measures and required every borough to appoint such officers, and the Standards of Weights, Measures, and Coinage Act 1866 passed responsibility for weights and measures to the Board of Trade. In 1900 the Board of Trade established the National Physical Laboratory (NPL) to provide laboratory facilities for weights and measures.
After the passage of the Weights and Measures (Metric System) Act 1897, weights and measures in the United Kingdom remained relatively unchanged until after the Second World War. By the middle of the century the difference of 2 parts per million between the British and US standard yards was causing problems—in 1900 a tolerance of 10 parts per million was adequate for science, but by 1950 this tolerance had shrunk to 0.25 parts per million.: 155  In 1960 representatives from the NPL and other national laboratories from the United States and Commonwealth agreed to redefine the yard as being exactly 0.9144 metres, an action that was ratified by the British Government as part of the Weights and Measures Act 1963. Metrication in the United Kingdom began in the mid-1960s. Initially this metrication was voluntary, and by 1985 many traditional and imperial units of measure had been voluntarily removed from use in the retail trade. The Weights and Measures Act 1985 formalized their removal from use in trade, though imperial units were retained for use on road signs, and the most common imperial units such as the foot, inch, pound, ounce, gallon and pint continued to be used in the retail trade for the sale of loose goods or goods measured or weighed in front of the customer. Since 1 January 2000 it has been unlawful to use imperial units for weights and measures in retail trade in the United Kingdom except as supplementary units or for the sale of draught beer and cider by the pint or milk that is sold in returnable containers. === British Empire === When colonies attained dominion status, they also attained the right to control their own systems of weights and measures. Many adopted the imperial system of units with local variations. India and Hong Kong supplemented the imperial system of units with their own indigenous units of measure; parts of Canada and South Africa included land survey units of measure from earlier colonial masters in their systems of measure, while many territories used only a subset of the units used in the United Kingdom—in particular the stone, quarter and cental were not catalogued in, amongst others, Australian, Canadian and Indian legislation. Furthermore, Canada aligned her ton with US measures by cataloguing the ton of 2000 lb as being legal for trade, but kept the imperial gallon. The standardization of the yard in 1960 required not only agreement between the United States and the United Kingdom, but also that of Canada, Australia, New Zealand and South Africa, all of whom had their own standards laboratories. == United States customary units == Prior to the United States Declaration of Independence in 1776, the Thirteen Colonies that were to become the United States used the English system of measurement. The Articles of Confederation, which predated the Constitution, gave the central government "the sole and exclusive right and power of...fixing the Standard of Weights and Measures throughout the United States." Subsequent to the formation of the United States, the Constitution reaffirmed the power of Congress to "fix the Standard of Weights and Measures" but reserved the power to regulate intrastate commerce and weights and measures to the individual states. During the First Congress of the United States in 1789, Thomas Jefferson was detailed to draw up a plan for the currency and weights and measures that would be used in the new republic.
In his 1790 response he noted that the existing system of measure was sound but that the base artefacts were not under the control of the United States. His report suggested a means of manufacturing a local standard and also left the way open for an adoption of a decimal-based system should this be appropriate. In the event, the existing standards were retained. For many years no action was taken at the federal level to ensure harmony in units of measure – the units acquired by the early colonists appeared to serve their purpose. Congress did nothing, but Ferdinand Hassler, Superintendent of the East Coast survey, had, using contacts in his native Switzerland, acquired a copy of the [French] mètre des Archives. In 1810 Ferdinand Hassler was dispatched to Europe by the Treasury to acquire measuring instruments and standards. In 1827 Albert Gallatin, United States minister at London, acquired an "exact copy" of the troy pound held by the British Government, which in 1828 was adopted as the reference copy of weight in the United States. In 1821 John Quincy Adams, then Secretary of State, submitted a report based on research commissioned by the Senate in 1817, which recommended against adoption of the metric system. Congress did nothing, and in 1832 the Treasury adopted the yard of 36 inches as the unit of length for customs purposes, the avoirdupois pound of 7000 grains as the unit of weight, and the gallon of 231 cubic inches (the "Queen Anne gallon") and the bushel of 2150.42 cubic inches as the units of volume. Congress did little to promote standards across the United States other than fixing the size of the yard and the gallon. Throughout the nineteenth century individual states developed their own standards and in particular a variety of bushels based on weight (mass) rather than volume emerged, dependent on both commodity and state. This lack of uniformity crippled inter-state trade, and in 1905 the National Bureau of Standards called a meeting of the states to discuss the lack of uniform standards and, in many cases, of regulatory oversight. A meeting was held the following year and subsequently became an annual gathering known as the National Conference on Weights and Measures (NCWM). In 1915 the conference published its first model standards. The bushel was not fully standardized and the Chicago Mercantile Exchange still (May 2013) uses different bushels for different commodities—a bushel of corn being 56 lb, a bushel of oats 38 lb, a bushel of soybeans 60 lb and a bushel of red winter wheat (both hard and soft) also 60 lb. Other commodities at the exchange are reckoned in pounds, in short tons or in metric tons. One of the actions taken by Congress was to permit the use of the metric system in trade (1866), made at the height of the metrication process in Latin America. Other actions were to ratify the Metre Convention in 1875 and, under the Mendenhall Order of 1893, to redefine the pound and the yard in terms of the international prototype of the kilogram and the international prototype of the metre respectively. In 1901 the administration of weights and measures was handed to a federal agency, the National Bureau of Standards, which in 1988 became the National Institute of Standards and Technology.
Inactivity by Congress and the lack of uniformity of weights and measures, which were crippling US economic growth in the nineteenth century, led the National Bureau of Standards to call a meeting of states in 1905, which resulted in the setting up of the National Conference on Weights and Measures (NCWM). This organisation is the de facto controlling body for weights and measures in the United States, though in respect of international relations such as membership of the General Conference on Weights and Measures (an intergovernmental organization) the US Government itself has to take the lead. During the twentieth century the principal change in the customary system of weights and measures was an agreement between NIST and the corresponding bodies in Australia, Canada, New Zealand, South Africa and the United Kingdom, signed in 1960, that redefined the yard and the pound in terms of the metre and the kilogram respectively. These new units became known as the international yard and pound. Congress has neither endorsed nor repudiated this action. (See § Metric equivalents). == Energy, power, and temperature == Imperial and US customary units have long been used in many branches of engineering. Two of the earliest such units of measure to come into use were the horsepower and the degree Fahrenheit. The horsepower was defined by James Watt in 1782 as the power required to raise 33,000 pounds of water through a height of one foot in one minute, and the degree Fahrenheit was first defined by Daniel Fahrenheit in about 1713 as a temperature scale having its lower calibration point (0 °F) at the temperature at which a supersaturated salt/ice mixture froze and its upper calibration point at body temperature (96 °F). In 1777 the Royal Society, under the chairmanship of Henry Cavendish, proposed that the definition of the Fahrenheit scale be modified such that the temperature corresponding to the melting point of ice be 32 °F (0 °C) and the boiling point of water under standard atmospheric conditions be 212 °F (100 °C). The British thermal unit (Btu) is defined as the heat needed to raise the temperature of one pound of water by one degree Fahrenheit. It was in use before 1859 as a unit of heat based on imperial units rather than the metric units used by the French—Clément-Desormes having defined the calorie in terms of the kilogram and degrees Celsius ('centigrade') in 1824. In 1873 a committee of the British Association for the Advancement of Science under the chairmanship of William Thomson (Lord Kelvin) introduced the concept of coherence into units of measure and proposed the names dyne and erg as the units of force and work in the CGS system of units. Two years later James Thomson, older brother of William Thomson, introduced the term poundal as a coherent unit of force in the Foot–pound–second system (FPS) of measurement. The FPS unit of work is the foot-poundal. Other systems for the measurement of dynamic quantities that used imperial and US customary units are the British Gravitational System (BG) proposed by Arthur Mason Worthington and the English Engineering System (EE). Both systems depend on the gravitational acceleration and use the pound-force as the unit of force, but use different approaches when applying Newton's laws of motion. In the BG system, force, rather than mass, has a base unit, while the slug is a derived unit of inertia (rather than mass). On the other hand, the EE system uses a different approach and introduces the acceleration due to gravity (g) into its equations.
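The contrast between these approaches can be made concrete with a short calculation. The Python sketch below, with wholly illustrative numbers, evaluates Newton's second law for the same body in the BG, EE and absolute (poundal-based) conventions summarised in the next passage.

```python
# Minimal sketch (illustrative, not from the article): the same 10 lb object
# accelerating at 5 ft/s^2, expressed in the three FPS-based conventions.
G_STD = 32.174            # standard acceleration of gravity, ft/s^2 (approx.)

mass_lb = 10.0            # mass in pounds
accel = 5.0               # acceleration in ft/s^2

# British Gravitational (BG): mass is carried in slugs (1 slug = 32.174 lb),
# and F(lbf) = m(slug) * a.
force_bg_lbf = (mass_lb / G_STD) * accel

# English Engineering (EE): mass stays in lb, and a constant g_c numerically
# equal to standard g divides the product: F(lbf) = m(lb) * a / g_c.
force_ee_lbf = mass_lb * accel / G_STD

# Absolute English (AE/FPS): F(poundals) = m(lb) * a; 1 lbf = 32.174 poundals.
force_ae_pdl = mass_lb * accel

print(force_bg_lbf, force_ee_lbf, force_ae_pdl / G_STD)   # all ~1.554 lbf
```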
Both these approaches led to slight variations in the meaning of the pound-force (and also of the kilogram-force) in different parts of the world. Various countries published standard values that should be used for g, and in 1901 the CGPM published a standard value for g that should be used in the "International Service of Weights and Measures", namely 32.174049 ft/s2 (9.80665 m/s2), which approximates the value of g at 45° latitude. Newton's second law in these systems becomes:
BG: Force (lbf) = inertia (slugs) × acceleration (ft/s2)
EE: Force (lbf) = mass (lb) × acceleration (ft/s2) ÷ g
AE: Force (poundals) = mass (lb) × acceleration (ft/s2)
AE is ignored in many engineering courses and textbooks, while some authors, such as Darby, use only EE (alongside SI), having described the BG and AE systems as "archaic". == Metric equivalents == The standard yard and [Troy] pound were lost in 1834 when a fire partially destroyed the Palace of Westminster. Following a report published in 1841 by a commission, a new standard yard and pound were manufactured using the best available secondary sources. Unlike the previous standard, the new pound standard, made of platinum, was a pound avoirdupois. The new yard, slightly longer than a yard to prevent wear as was experienced with the mètre des Archives, was made of brass and had two gold plugs close to its end. Scratch marks on the plugs denoted the length of the yard. They were accepted by an Act of Parliament as the standards for length and weight in 1855. Following the debacle over the different gallons that had been adopted by the United States and the United Kingdom thirty years earlier, one of the copies of the standard yard and avoirdupois pound (known in the United States as the "Mint pound") was offered to and accepted by the United States government. In the years that followed the passing of the 1878 act, the standard imperial yard was found to be shrinking at a rate, confirmed in 1950, of nearly one part per million every 23 years. On the other hand, the international prototype metre, manufactured by a British firm from a platinum-iridium alloy rather than brass, which in 1889 replaced the mètre des Archives as the standard for the metre, was found to be more stable than the standard yard. Both the United States and the United Kingdom, as signatories of the Metre Convention, took delivery of copies of both the standard metre and the standard kilogram. The "Mint pound" was also found to be of poor workmanship. In 1866 the United States government legalised use of metric units in contract law, defining them in terms of the equivalent customary units to five significant figures, which was sufficient for purposes of trade. In 1893, under the Mendenhall Order, the United States abandoned the 1855 yard as a standard of length and the "Mint pound" as a standard of mass, redefining them in terms of the metre and kilogram using the values of the 1866 legislation. In the United Kingdom fresh comparisons of the imperial and metric standards of length and mass were made and were used in the Weights and Measures (Metric System) Act 1897 (60 & 61 Vict. c. 46) to redefine the yard and pound in terms of the metre and kilogram respectively. In addition, the definitions of both the yard and the pound in terms of the artifacts held by the British government were reaffirmed, giving both the yard and the pound two different definitions. The differences between the British and the US yard and pound were of the order of a few parts per million.
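The size of this discrepancy can be checked directly from the two national definitions quoted later in the article (the 1893 US yard of 3600⁄3937 metres and the 1896 British determination of 0.9143993 metres). A minimal Python sketch:

```python
from fractions import Fraction

us_yard_m = Fraction(3600, 3937)        # Mendenhall Order definition, 1893
uk_yard_m = Fraction(9143993, 10**7)    # British determination, 1896

diff_ppm = float((us_yard_m - uk_yard_m) / uk_yard_m) * 1e6
print(f"US yard exceeds UK yard by about {diff_ppm:.1f} parts per million")
# prints ~2.8 ppm, consistent with the "few parts per million" quoted above
```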
By the end of the Second World War, the standards laboratories of Canada, Australia, New Zealand and South Africa also had their own copies of the pound and the yard. These legal and technical discrepancies, described by McGreevy (p. 290) as being "unsound", led to the Commonwealth Science Conference of 1946 proposing that the Commonwealth countries and the United States should all redefine the yard and the pound in terms of an agreed fraction of the metre and kilogram respectively. Agreement was reached by the standards laboratories in 1960 to redefine the yard and the pound as: 1 international yard = 0.9144 metres; 1 international pound = 0.45359237 kilograms. The final digit of the value chosen for the pound was selected so as to make the number divisible by 7 without a repeating decimal, making the grain exactly 64.79891 milligrams. This agreement was ratified by the United Kingdom in 1963, while Canada pre-empted the decision by adopting these values in 1951, nine years ahead of the full international agreement. The United States Congress has neither ratified nor repudiated the agreement. == Comparison of imperial and US customary systems == Prior to 1960 the imperial and customary yard and the pound were sufficiently close to each other that for most practical purposes the differences in the sizes of units of length, area, volume and mass could be disregarded, though there were differences in usage – for example, in the United States short road distances are specified in feet while in the United Kingdom they are specified in yards. The introduction of the international yard in 1960 caused small but noticeable effects in surveying in the United States, which resulted in some states retaining the original definitions of the customary units of measure, which are now known as the survey mile, survey foot, etc., while other states adopted the international foot. According to the National Institute of Standards and Technology, the survey foot is obsolete as of January 1, 2023, and its use is discouraged. The definition of units of weight above a pound differed between the customary and the imperial systems – the imperial system employed the stone of 14 pounds, the hundredweight of 8 stone and the ton of 2240 pounds (20 hundredweight), while the customary system of units did not employ the stone but has a hundredweight of 100 pounds and a ton of 2000 pounds. In international trade, the ton of 2240 pounds was often referred to as the "long ton" and the ton of 2000 pounds as the "short ton". When using customary units, it is usual to express body weight in pounds, but when using imperial units, to use stones and pounds. In his Plan for Establishing Uniformity in the Coinage, Weights, and Measures of the United States, Thomas Jefferson, then secretary of state, identified 14 different gallons in English statutes varying in size from 224 to 282 cubic inches (3.67 to 4.62 litres). In 1832, in the absence of any direction by Congress, the United States Treasury chose the second smallest gallon, the "Queen Anne gallon" of 231 cubic inches (3.785 litres), to be the official gallon in the United States for fiscal purposes. Sixteen US fluid ounces make a US pint (8 pints equal 1 gallon in both customary and imperial systems). During the reform of weights and measures legislation in the United Kingdom in 1824, old gallons were replaced by the new imperial gallon, which was defined to be the volume of 10 pounds of water at 62 °F (17 °C), and was determined experimentally to be 277.42 cubic inches (4.54609 litres).
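The relationships between the two gallons, and the differing subdivisions into pints and fluid ounces discussed next, follow arithmetically from the cubic-inch definitions given above. A short illustrative Python sketch (note that the 277.42 cubic inch figure reproduces the imperial gallon only approximately, the modern definition being exactly 4.54609 litres):

```python
CUBIC_INCH_ML = 16.387064           # 1 cubic inch in millilitres (exact)

US_GALLON_IN3 = 231                 # "Queen Anne" wine gallon
IMP_GALLON_IN3 = 277.42             # 1824 imperial gallon (10 lb of water)

us_gallon_l = US_GALLON_IN3 * CUBIC_INCH_ML / 1000
imp_gallon_l = IMP_GALLON_IN3 * CUBIC_INCH_ML / 1000
print(f"US gallon  = {us_gallon_l:.5f} L")                # ~3.78541 L
print(f"imp gallon = {imp_gallon_l:.5f} L")               # ~4.54610 L
# ~83.27%; the article's 83.26742% uses the exact 4.54609 L definition
print(f"ratio      = {us_gallon_l / imp_gallon_l:.4%}")

# Pints and fluid ounces subdivide the two gallons differently:
us_floz_ml = us_gallon_l * 1000 / 8 / 16    # 8 pints x 16 fl oz per pint
imp_floz_ml = imp_gallon_l * 1000 / 8 / 20  # 8 pints x 20 fl oz per pint
print(f"imperial fl oz = {imp_floz_ml / us_floz_ml:.3f} US fl oz")  # ~0.961
```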
Twenty imperial fluid ounces make an imperial pint, the imperial fluid ounce being 0.96 US fluid ounces. The US Customary system of units makes use of a set of dry units of capacity that have a similar set of names to those of liquid capacity, though different volumes: the dry pint having a volume of 33.6 cubic inches (550 ml) against the US fluid pint's volume of 28.875 cubic inches (473 ml) and the imperial pint of 34.68 cubic inches (568 ml). The imperial system of measure does not have an equivalent to the US customary system of "dry measure". In the international commodities markets, the barrel (42 US gallons, ≈159 litres) is used in both London and New York/Chicago for trading in crude oil and the troy ounce (≈31.10 grams) for trading in precious metals, except that the London markets use metric units and the Chicago Board of Trade uses customary units. == Units in use == The tables below catalogue the imperial units of measure that were permitted for use in trade in the United Kingdom on the eve of metrication (1976) and the customary "units of measurement that have traditionally been used in the United States". In addition, named units of measure that are used in the engineering industry are also catalogued. Prior to metrication, the units of measure used in Ireland were the same as those used in the United Kingdom while those used in the British Commonwealth and in South Africa were in most cases a subset of those used in the United Kingdom with, in certain cases, local differences. Unless otherwise specified, the units of measure quoted below were used in both the United States and the United Kingdom. The SI equivalents are quoted to four significant figures. === Units of length === In 1893 the United States fixed the yard at 3600⁄3937 metres, making the yard 0.9144018 metres, and in 1896 the British authorities fixed the yard as being 0.9143993 metres. At the time the discrepancy of about two parts per million was considered to be insignificant. In 1960, the United Kingdom, United States, Australia, Canada and South Africa standardised their units of length by defining the "international yard" as being 0.9144 metres exactly. This change affected land surveyors in the United States and led to the old units being renamed "survey feet", "survey miles" etc. However, the introduction of the metric-based Ordnance Survey National Grid in the United Kingdom in 1938 meant that British surveyors were unaffected by the change. === Units of area === The introduction of the international yard in 1960 had no effect on British measurements of area; however, US measurements of land area, as opposed to other measurements (such as pounds per square inch), continued to be based on the US statute yard. === Volume of dry goods === === Volume of liquids === Several of the units of liquid volume have similar names, but have different volumes – and in the case of fluid ounces and pints, different relations. In addition the definitions of the imperial and US gallons are based on different concepts – the imperial gallon is defined in terms of the volume occupied by a specified mass of water, while the US gallon is specified in terms of cubic linear measurement. === Units of mass === Units of mass in both the imperial and US customary systems have always used the same standard, though differences in multiples of the avoirdupois pound developed in the nineteenth century.
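Those diverging multiples can be tabulated from the shared international pound. A minimal Python sketch using the values given in the comparison above:

```python
POUND_KG = 0.45359237      # international avoirdupois pound (exact, 1960)

imperial = {"stone": 14, "hundredweight": 112, "ton (long)": 2240}
us_customary = {"hundredweight": 100, "ton (short)": 2000}

for name, system in (("imperial", imperial), ("US customary", us_customary)):
    for unit, pounds in system.items():
        # e.g. the long ton works out to ~1016 kg, the short ton to ~907 kg
        print(f"{name:12} {unit:14} = {pounds:>5} lb = {pounds * POUND_KG:9.3f} kg")
```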
Both systems used three scales – avoirdupois for general use, troy for precious metals, and apothecaries for medicine. ==== Avoirdupois ==== ==== Troy ==== ==== Apothecary ==== Apothecaries' units of mass were used in the pharmaceutical industry and have remained almost unchanged since the Middle Ages – the apothecaries' pound and ounce being the same as the troy pound and ounce, but each system having different divisions. In the United Kingdom, they were replaced by metric units in 1970. === Energy, power, and temperature === The names of most derived units of measure in the imperial and US customary systems are concatenations of the constituent parts of the unit of measure, for example the unit of pressure is the pound [force] per square inch. Apart from the poundal, most of the named units of measure are non-coherent, but were adopted due to traditional working practice. === Other units === In addition to those catalogued above, there are hundreds of other units of measurement in both the imperial and the US customary system of measurement – many are specific to a particular industry or application. Such units could, in theory, be replaced by general units of the same dimension, for example the barrel (42 US gallons, 34.97 imperial gallons or 159.0 litres) used in the oil industry has the dimension of volume and could be replaced by the cubic metre or litre. The units of potential difference (the volt), electric current (the ampere) and electrical resistance (the ohm) were defined in terms of metric units, international agreement having been reached at a series of international electrical congresses held between 1881 and 1906 (including the 1893 congress in Chicago) as the electrical industry was becoming established. At that time, the metric system had become established in continental Europe while it remained a contested issue in the United Kingdom. Similarly, the radiological units of measurement were defined in terms of metric units, agreement first having been reached at the second International Congress of Radiology at Stockholm (1928). == Current status == In the 1960s a metrication program was initiated in most English-speaking countries, resulting in either the partial or total displacement of the imperial system or the US customary system of measure in those countries. The current status of imperial and US customary units, as summarised by NIST, is that "the SI metric system is now the official system of units in the United Kingdom, while the customary units are still predominantly used in the United States". The situation is, however, not as clear-cut as this. In the United States, for example, the metric system is the predominant system of measure in certain fields such as automobile manufacture, even though customary units are used in aircraft manufacture. In the United Kingdom, metric units are required for almost all regulated use of units of measure except for a few specifically exempt areas such as road signs, speedometers and draught beer. Metrication is also all but complete in the Commonwealth countries of Australia, India, New Zealand and South Africa; metrication in Canada has displaced the imperial system in many areas. The imperial and US customary systems of measurement use the SI for their formal definitions, the yard being defined as 0.9144 metres exactly and the pound avoirdupois as 0.45359237 kilograms exactly, while both systems of measure share the definition of the second. == See also == Conversion of units Metrication == Notes == == References ==
Wikipedia/Imperial_and_US_customary_measurement_systems
In physics, energy is the quantitative property that must be transferred to an object in order to perform work on, or to heat, the object. Energy may also refer to: == Science and philosophy == Energy (Aristotle), "actuality" in Aristotelian philosophy Energy (esotericism), a concept in spirituality and alternative medicine Energy (psychological), a postulated principle underlying mental processes Energy (signal processing), the energy Es of a continuous-time signal x(t) Energy (journal), a scientific journal published by Elsevier Energies (journal), a scientific journal published by MDPI == Music == Energy (event), an annual techno-music event in Zurich, Switzerland Energy Rekords, a record label Trance Energy or Energy, an annual trance-music event in the Netherlands === Bands === Energy (American band), a punk rock band Energy (Taiwanese band), a Taiwanese boy group Energy, a fusion jazz-rock-blues band featuring Tommy Bolin === Albums === Energy (Fourplay album), 2008 Energy (Disclosure album), 2020 Energy (Jeremy Steig album), 1971 Energy (Operation Ivy album), 1989 Energy (Pointer Sisters album), 1978 === Songs === "Energy" (Beyoncé song), featuring Beam, 2022 "Energy" (Disclosure song), 2020 "Energy" (Drake song), 2015 "Energy" (Keri Hilson song), 2009 "Energy" (Nuša Derenda song), entry for the 2001 Eurovision Song Contest "Energy" (Stace Cadet and KLP song), 2020 "Energy (Stay Far Away)", by Skepta and Wizkid, 2018 "Energy", by the Apples in Stereo from New Magnetic Wonder, 2007 "Energy", by Collective Soul from Seven Year Itch, 2001 "Energy", by Joe Satriani from What Happens Next, 2018 "Energy", by Krokus from Krokus, 1976 "Energy", by Melissa Manchester from Mathematics, 1985 "Energy", by the Pillows from Pantomime, 1990 "The Energy", by Audiovent from Dirty Sexy Knights in Paris, 2002 "The Energy (Feel the Vibe)", by Astro Trax and Shola Phillips, 1998 "The Energy", by Shinedown from The Sound of Madness, 2008 == Radio stations == NRJ Radio in Paris, France DWET-FM Energy FM 106.7 in the Philippines KZCE Energy 92.7 and 101.1 in Phoenix, Arizona KWFN, formerly Energy 97.3, in San Diego, California KSON (FM), formerly Energy FM 103.7, in San Diego, California KREV (FM) Energy 92.7 in San Francisco, California WCPY, formerly Energy 92.5 and 92.7, in Arlington Heights, Chicago, Illinois WCLR (FM), formerly Energy 92.5 and 92.7, in DeKalb, Illinois WVLI, formerly Energy 92.5 and 92.7, in Kankakee, Illinois KBZD Energy 99.7 in Amarillo, Texas CHWE-FM Energy 106 in Winnipeg, Canada == Place names == Energy, Illinois, a village in the United States Energy, Mississippi, a settlement in the United States Energy, Missouri, a settlement in the United States Energy, Texas, a settlement in the United States == Other uses == Energy (Dubai Metro), a metro station on the Red Line in Dubai, UAE Energy (TV channel), a Spanish TV channel owned by Mediaset España Energy (video gaming), a game mechanic in certain action, role-playing and mobile video games == See also == Conservation of energy Electric energy Energetic mood Energetics (disambiguation) Energeia, the general principle of "activity" as opposed to possibility, in Aristotelianism Energia (disambiguation) Energy development, the utilization of energy resources Energy distance, distances between statistical observations Energy industry, energy resources such as fuel and electricity Essence–Energies distinction, a concept in Eastern Orthodox theology Food energy Forms of energy History of energy Negative energy Energie (disambiguation) NRG
(disambiguation) Waste-to-energy All pages with titles beginning with energy
Wikipedia/Energy_(disambiguation)
Industrial minerals are geological materials that are mined for their commercial value; they are not fuels (fuel minerals or mineral fuels) and are not sources of metals (metallic minerals), but are used in industry for their physical and/or chemical properties. They are used in their natural state or after beneficiation either as raw materials or as additives in a wide range of applications. == Examples and applications == Typical examples of industrial rocks and minerals are limestone, clays, sand, gravel, diatomite, kaolin, bentonite, silica, barite, gypsum, and talc. Some examples of applications for industrial minerals are construction, ceramics, paints, electronics, filtration, plastics, glass, detergents and paper. In some cases, even organic materials (peat) and industrial products or by-products (cement, slag, silica fume) are categorized under industrial minerals, as well as metallic compounds mainly utilized in non-metallic form (for example, most titanium is utilized as the oxide TiO2 rather than as Ti metal). The evaluation of raw materials to determine their suitability for use as industrial minerals requires technical test-work, mineral processing trials and end-product evaluation; free-to-download evaluation manuals are available for the following industrial minerals: limestone, flake graphite, diatomite, kaolin, bentonite and construction materials. These are available from the British Geological Survey (see the external link 'Industrial Minerals in BGS'), with regular industry news and reports published in Industrial Minerals magazine. == List of industrial minerals by name == Aggregates Alunite Asbestos Asphalt, Natural Ball clays Baryte Bentonite / Diatomite / Fuller's earth Borates Brines Carbonatites Corundum Diamond Dimension stone Feldspar and Nepheline syenite Fluorspar Garnet Gem mineral Granite Graphite Gypsum Halite Kaolin Kyanite / Sillimanite / Andalusite Limestone / Dolomite Marble Mica Mirabilite Natron Nahcolite Novaculite Olivine Perlite Phosphate Potash – Potassium minerals Pumice Quartz Slate Silica sand / Tripoli Sulfur Talc Vermiculite Wollastonite Zeolites == See also == List of minerals List of minerals recognized by the International Mineralogical Association Industrial Minerals, magazine Mineral industry Minerals == References == == External links == Industrial Minerals Association - North America The "chessboard" classification scheme of mineral deposits (abstract) Archived 12 December 2013 at the Wayback Machine
Wikipedia/Industrial_mineral
In the Arrhenius model of reaction rates, activation energy is the minimum amount of energy that must be available to reactants for a chemical reaction to occur. The activation energy (Ea) of a reaction is measured in kilojoules per mole (kJ/mol) or kilocalories per mole (kcal/mol). Activation energy can be thought of as the magnitude of the potential barrier (sometimes called the energy barrier) separating minima of the potential energy surface pertaining to the initial and final thermodynamic state. For a chemical reaction to proceed at a reasonable rate, the temperature of the system should be high enough such that there exists an appreciable number of molecules with translational energy equal to or greater than the activation energy. The term "activation energy" was introduced in 1889 by the Swedish scientist Svante Arrhenius. == Other uses == Although less commonly used, activation energy also applies to nuclear reactions and various other physical phenomena. == Temperature dependence and the relation to the Arrhenius equation == The Arrhenius equation gives the quantitative basis of the relationship between the activation energy and the rate at which a reaction proceeds. From the equation, the activation energy can be found through the relation k = A e − E a / ( R T ) {\displaystyle k=Ae^{{-E_{\textrm {a}}}/{(RT)}}} where A is the pre-exponential factor for the reaction, R is the universal gas constant, T is the absolute temperature (usually in kelvins), and k is the reaction rate coefficient. Even without knowing A, Ea can be evaluated from the variation in reaction rate coefficients as a function of temperature (within the validity of the Arrhenius equation). At a more advanced level, the net Arrhenius activation energy term from the Arrhenius equation is best regarded as an experimentally determined parameter that indicates the sensitivity of the reaction rate to temperature. There are two objections to associating this activation energy with the threshold barrier for an elementary reaction. First, it is often unclear whether or not a reaction proceeds in one step; threshold barriers that are averaged out over all elementary steps have little theoretical value. Second, even if the reaction being studied is elementary, a spectrum of individual collisions contributes to rate constants obtained from bulk ('bulb') experiments involving billions of molecules, with many different reactant collision geometries and angles, and different translational and (possibly) vibrational energies, all of which may lead to different microscopic reaction rates. == Catalysts == A substance that modifies the transition state to lower the activation energy is termed a catalyst; a catalyst composed only of protein and (if applicable) small-molecule cofactors is termed an enzyme. A catalyst increases the rate of reaction without being consumed in the reaction. In addition, the catalyst lowers the activation energy, but it does not change the energies of the original reactants or products, and so does not change equilibrium. Rather, the reactant energy and the product energy remain the same and only the activation energy is altered (lowered). A catalyst is able to reduce the activation energy by forming a transition state in a more favorable manner. Catalysts, by nature, create a more "comfortable" fit for the substrate of a reaction to progress to a transition state. This is possible due to a release of energy that occurs when the substrate binds to the active site of a catalyst.
This energy is known as binding energy. Upon binding to a catalyst, substrates engage in numerous stabilizing interactions while within the active site (e.g. hydrogen bonding or van der Waals forces). Specific and favorable bonding occurs within the active site until the substrate is converted into the high-energy transition state. Forming the transition state is more favorable with the catalyst because the favorable stabilizing interactions within the active site release energy. A chemical reaction is able to produce a high-energy transition state molecule more readily when there is a stabilizing fit within the active site of a catalyst. The binding energy of a reaction is this energy released when favorable interactions between substrate and catalyst occur. The binding energy released assists in achieving the unstable transition state. Reactions without catalysts need a higher input of energy to achieve the transition state, since they lack the free energy made available by active-site stabilizing interactions, such as those in catalytic enzyme reactions. == Relationship with Gibbs energy of activation == In the Arrhenius equation, the term activation energy (Ea) is used to describe the energy required to reach the transition state, and the exponential relationship k = A exp(−Ea/RT) holds. In transition state theory, a more sophisticated model of the relationship between reaction rates and the transition state, a superficially similar mathematical relationship, the Eyring equation, is used to describe the rate constant of a reaction: k = (kBT / h) exp(−ΔG‡ / RT). However, instead of modeling the temperature dependence of reaction rate phenomenologically, the Eyring equation models individual elementary steps of a reaction. Thus, for a multistep process, there is no straightforward relationship between the two models. Nevertheless, the functional forms of the Arrhenius and Eyring equations are similar, and for a one-step process, simple and chemically meaningful correspondences can be drawn between Arrhenius and Eyring parameters. Instead of also using Ea, the Eyring equation uses the concept of Gibbs energy and the symbol ΔG‡ to denote the Gibbs energy of activation to achieve the transition state. In the equation, kB and h are the Boltzmann and Planck constants, respectively. Although the equations look similar, it is important to note that the Gibbs energy contains an entropic term in addition to the enthalpic one. In the Arrhenius equation, this entropic term is accounted for by the pre-exponential factor A. More specifically, we can write the Gibbs free energy of activation in terms of enthalpy and entropy of activation: ΔG‡ = ΔH‡ − T ΔS‡. Then, for a unimolecular, one-step reaction, the approximate relationships Ea = ΔH‡ + RT and A = (kBT/h) exp(1 + ΔS‡/R) hold. Note, however, that in Arrhenius theory proper, A is temperature independent, while here there is a linear dependence on T. For a one-step unimolecular process whose half-life at room temperature is about 2 hours, ΔG‡ is approximately 23 kcal/mol. This is also roughly the magnitude of Ea for a reaction that proceeds over several hours at room temperature. Due to the relatively small magnitude of TΔS‡ and RT at ordinary temperatures for most reactions, in sloppy discourse, Ea, ΔG‡, and ΔH‡ are often conflated and all referred to as the "activation energy".
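A quick numerical check of the figures just quoted can be made by inverting the Eyring equation for a first-order process with a 2 hour half-life, and by extracting an Arrhenius Ea from rate constants at two temperatures. This is a minimal sketch; the rate constants k1 and k2 in the second part are illustrative numbers, not data from the text.

```python
import math

R  = 8.314462618     # gas constant, J/(mol*K)
kB = 1.380649e-23    # Boltzmann constant, J/K
h  = 6.62607015e-34  # Planck constant, J*s

# Gibbs energy of activation for a first-order process with a 2 h half-life
# at room temperature, from the Eyring equation k = (kB*T/h)*exp(-dG/(R*T)).
T = 298.15
k = math.log(2) / (2 * 3600)          # first-order rate constant, 1/s
dG = R * T * math.log(kB * T / (h * k))
print(f"dG_activation = {dG / 4184:.1f} kcal/mol")  # ~22.9, matching the ~23 quoted above

# Arrhenius activation energy from rate constants at two temperatures,
# using k2/k1 = exp(-(Ea/R) * (1/T2 - 1/T1)):
k1, T1 = 1.0e-4, 298.0
k2, T2 = 1.0e-3, 318.0
Ea = R * math.log(k2 / k1) / (1 / T1 - 1 / T2)
print(f"Ea = {Ea / 1000:.1f} kJ/mol")  # ~90.7 for these illustrative numbers
```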
The enthalpy, entropy and Gibbs energy of activation are more correctly written as Δ‡Ho, Δ‡So and Δ‡Go respectively, where the o indicates a quantity evaluated between standard states. However, some authors omit the o in order to simplify the notation. The total free energy change of a reaction is, however, independent of the activation energy. Physical and chemical reactions can be either exergonic or endergonic, but the activation energy is not related to the spontaneity of a reaction. The overall reaction energy change is not altered by the activation energy. == Negative activation energy == In some cases, rates of reaction decrease with increasing temperature. When the rate still follows an approximately exponential relationship, so that a rate constant can be fitted to an Arrhenius expression, this results in a negative value of Ea. Elementary reactions exhibiting negative activation energies are typically barrierless reactions, in which the reaction's progress relies on the capture of the molecules in a potential well. Increasing the temperature leads to a reduced probability of the colliding molecules capturing one another (with more glancing collisions not leading to reaction, as the higher momentum carries the colliding particles out of the potential well), expressed as a reaction cross section that decreases with increasing temperature. Such a situation no longer lends itself to direct interpretation as the height of a potential barrier. Some multistep reactions can also have apparent negative activation energies. For example, the overall rate constant k for a two-step reaction A ⇌ B, B → C is given by k = k2K1, where k2 is the rate constant of the rate-limiting slow second step and K1 is the equilibrium constant of the rapid first step. In some reactions, K1 decreases with temperature more rapidly than k2 increases, so that k actually decreases with temperature, corresponding to a negative observed activation energy. An example is the oxidation of nitric oxide, which is a termolecular reaction 2 NO + O 2 ⟶ 2 NO 2 {\displaystyle {\ce {2 NO + O2 -> 2 NO2}}} . The rate law is v = k [ N O ] 2 [ O 2 ] {\displaystyle v=k\,\left[{\rm {NO}}\right]^{2}\,\left[{\rm {O_{2}}}\right]} with a negative activation energy. This is explained by the two-step mechanism: 2 NO ↽ − − ⇀ N 2 O 2 {\displaystyle {\ce {2 NO <=> N2O2}}} and N 2 O 2 + O 2 ⟶ 2 NO 2 {\displaystyle {\ce {N2O2 + O2 -> 2 NO2}}} . Certain cationic polymerization reactions have negative activation energies, so that the rate decreases with temperature. For chain-growth polymerization, the overall activation energy is E = E i + E p − E t {\displaystyle \textstyle E=E_{i}+E_{p}-E_{t}} , where i, p and t refer respectively to the initiation, propagation and termination steps. The propagation step normally has a very small activation energy, so the overall value is negative if the activation energy for termination is larger than that for initiation. The normal range of overall activation energies for cationic polymerization varies from 40 to 60 kJ/mol. == See also == Activation energy asymptotics Chemical kinetics Mean kinetic temperature Autoignition temperature Quantum tunnelling == References ==
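To make the pre-equilibrium argument above concrete, the following sketch evaluates k = k2K1 with illustrative, assumed parameters in which the fast first step is strongly exothermic; the apparent activation energy Ea2 + ΔH1 is then negative and the overall rate falls as the temperature rises. None of these numbers come from the text.

```python
import math

R = 8.314  # gas constant, J/(mol*K)

def k_overall(T, A2=1e9, Ea2=10e3, K0=1e-6, dH1=-60e3):
    """Pre-equilibrium model k = k2*K1 with illustrative parameters:
    k2 is Arrhenius with Ea2 > 0; K1 follows van 't Hoff with dH1 < 0
    (exothermic first step), so K1 falls as T rises."""
    k2 = A2 * math.exp(-Ea2 / (R * T))   # slow second step speeds up with T
    K1 = K0 * math.exp(-dH1 / (R * T))   # fast pre-equilibrium shifts back with T
    return k2 * K1

for T in (280.0, 300.0, 320.0):
    print(f"T = {T} K: k = {k_overall(T):.3e}")
# The apparent activation energy is Ea2 + dH1 = 10 - 60 = -50 kJ/mol,
# so the printed rate constants decrease as the temperature increases.
```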
Wikipedia/Activation_energy
Water is an inorganic compound with the chemical formula H2O. It is a transparent, tasteless, odorless, and nearly colorless chemical substance. It is the main constituent of Earth's hydrosphere and the fluids of all known living organisms (in which it acts as a solvent). It is vital for all known forms of life, despite not providing food energy or organic micronutrients. Its chemical formula, H2O, indicates that each of its molecules contains one oxygen and two hydrogen atoms, connected by covalent bonds. The hydrogen atoms are attached to the oxygen atom at an angle of 104.45°. In liquid form, H2O is also called "water" at standard temperature and pressure. Because Earth's environment is relatively close to water's triple point, water exists on Earth as a solid, a liquid, and a gas. It forms precipitation in the form of rain and aerosols in the form of fog. Clouds consist of suspended droplets of water and ice, its solid state. When finely divided, crystalline ice may precipitate in the form of snow. The gaseous state of water is steam or water vapor. Water covers about 71.0% of the Earth's surface, with seas and oceans making up most of the water volume (about 96.5%). Small portions of water occur as groundwater (1.7%), in the glaciers and the ice caps of Antarctica and Greenland (1.7%), and in the air as vapor, clouds (consisting of ice and liquid water suspended in air), and precipitation (0.001%). Water moves continually through the water cycle of evaporation, transpiration (evapotranspiration), condensation, precipitation, and runoff, usually reaching the sea. Water plays an important role in the world economy. Approximately 70% of the fresh water used by humans goes to agriculture. Fishing in salt and fresh water bodies has been, and continues to be, a major source of food for many parts of the world, providing 6.5% of global protein. Much of the long-distance trade of commodities (such as oil, natural gas, and manufactured products) is transported by boats through seas, rivers, lakes, and canals. Large quantities of water, ice, and steam are used for cooling and heating in industry and homes. Water is an excellent solvent for a wide variety of substances, both mineral and organic; as such, it is widely used in industrial processes and in cooking and washing. Water, ice, and snow are also central to many sports and other forms of entertainment, such as swimming, pleasure boating, boat racing, surfing, sport fishing, diving, ice skating, snowboarding, and skiing. == Etymology == The word water comes from Old English wæter, from Proto-Germanic *watar (source also of Old Saxon watar, Old Frisian wetir, Dutch water, Old High German wazzar, German Wasser, Old Norse vatn, Gothic 𐍅𐌰𐍄𐍉 (wato)), from Proto-Indo-European *wod-or, suffixed form of root *wed- ('water'; 'wet'). Also cognate, through the Indo-European root, with Greek ύδωρ (ýdor; from Ancient Greek ὕδωρ (hýdōr), whence English 'hydro-'), Russian вода́ (vodá), Irish uisce, and Albanian ujë. == History == === On Earth === == Properties == Water (H2O) is a polar inorganic compound. At room temperature it is a tasteless and odorless liquid, nearly colorless with a hint of blue. The simplest hydrogen chalcogenide, it is by far the most studied chemical compound and is sometimes described as the "universal solvent" for its ability to dissolve more substances than any other liquid, though it is poor at dissolving nonpolar substances.
This allows it to be the "solvent of life": indeed, water as found in nature almost always includes various dissolved substances, and special steps are required to obtain chemically pure water. Water is the only common substance to exist as a solid, liquid, and gas in normal terrestrial conditions. === States === Along with oxidane, water is one of the two official names for the chemical compound H2O; it is also the liquid phase of H2O. The other two common states of matter of water are the solid phase, ice, and the gaseous phase, water vapor or steam. The addition or removal of heat can cause phase transitions: freezing (water to ice), melting (ice to water), vaporization (water to vapor), condensation (vapor to water), sublimation (ice to vapor) and deposition (vapor to ice). ==== Density ==== Water is one of only a few common naturally occurring substances which, for some temperature ranges, become less dense as they cool, and the only known naturally occurring substance which does so while liquid. In addition, it is unusual in that it becomes significantly less dense as it freezes, though it is not unique in that respect. At 1 atm pressure, it reaches its maximum density of 999.972 kg/m3 (62.4262 lb/cu ft) at 3.98 °C (39.16 °F). As it cools below that temperature towards the freezing point of 0 °C (32 °F), it expands and becomes less dense, reaching a density in its liquid phase of 999.8 kg/m3 (62.4155 lb/cu ft) at the freezing point. Once it freezes and becomes ice, it expands by about 9%, with a density of 917 kg/m3 (57.25 lb/cu ft). This expansion can exert enormous pressure, bursting pipes and cracking rocks. As a solid, it displays the usual behavior of contracting and becoming more dense as it cools. These unusual thermal properties have important consequences for life on earth. In a lake or ocean, water at 4 °C (39 °F) sinks to the bottom, and ice forms on the surface, floating on the liquid water. This ice insulates the water below, preventing it from freezing solid. Without this protection, most aquatic organisms residing in lakes would perish during the winter. In addition, this anomalous behavior is an important part of the thermohaline circulation which distributes heat around the planet's oceans. ==== Magnetism ==== Water is a diamagnetic material. Though the interaction is weak, with superconducting magnets it can be made quite noticeable. ==== Phase transitions ==== At a pressure of one atmosphere (atm), ice melts or water freezes (solidifies) at 0 °C (32 °F) and water boils or vapor condenses at 100 °C (212 °F). However, even below the boiling point, water can change to vapor at its surface by evaporation (vaporization throughout the liquid is known as boiling). Sublimation and deposition also occur on surfaces. For example, frost is deposited on cold surfaces while snowflakes form by deposition on an aerosol particle or ice nucleus. In the process of freeze-drying, a food is frozen and then stored at low pressure so the ice on its surface sublimates. The melting and boiling points depend on pressure. A good approximation for the rate of change of the melting temperature with pressure is given by the Clausius–Clapeyron relation: d T d P = T ( v L − v S ) L f {\displaystyle {\frac {dT}{dP}}={\frac {T\left(v_{\text{L}}-v_{\text{S}}\right)}{L_{\text{f}}}}} where v L {\displaystyle v_{\text{L}}} and v S {\displaystyle v_{\text{S}}} are the molar volumes of the liquid and solid phases, and L f {\displaystyle L_{\text{f}}} is the molar latent heat of melting.
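Plugging in the liquid and ice densities quoted in the Density subsection gives a quick estimate of this slope for melting ice. This is a minimal sketch; the molar latent heat of melting (about 6.01 kJ/mol) is a standard literature value assumed here, since the text does not quote it.

```python
# Slope of the melting curve of ice, dT/dP = T*(v_L - v_S)/L_f, using the
# densities quoted above; the molar latent heat of melting (~6010 J/mol)
# is a standard value assumed here.
M = 0.018015          # molar mass of water, kg/mol
T = 273.15            # melting point at 1 atm, K
L_f = 6010.0          # molar latent heat of melting, J/mol (assumed)

v_L = M / 999.8       # molar volume of liquid water near 0 degC, m^3/mol
v_S = M / 917.0       # molar volume of ice, m^3/mol

dT_dP = T * (v_L - v_S) / L_f
print(f"dT/dP = {dT_dP:.2e} K/Pa = {dT_dP * 101325 * 1000:.1f} mK/atm")
# Negative (about -7.5 mK per atmosphere): pressure lowers the melting point,
# exactly the anomaly discussed in the following paragraph.
```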
In most substances, the volume increases when melting occurs, so the melting temperature increases with pressure. However, because ice is less dense than water, the melting temperature decreases. In glaciers, pressure melting can occur under sufficiently thick volumes of ice, resulting in subglacial lakes. The Clausius-Clapeyron relation also applies to the boiling point, but with the liquid/gas transition the vapor phase has a much lower density than the liquid phase, so the boiling point increases with pressure. Water can remain in a liquid state at high temperatures in the deep ocean or underground. For example, temperatures exceed 205 °C (401 °F) in Old Faithful, a geyser in Yellowstone National Park. In hydrothermal vents, the temperature can exceed 400 °C (752 °F). At sea level, the boiling point of water is 100 °C (212 °F). As atmospheric pressure decreases with altitude, the boiling point decreases by 1 °C every 274 meters. High-altitude cooking takes longer than sea-level cooking. For example, at 1,524 metres (5,000 ft), cooking time must be increased by a fourth to achieve the desired result. Conversely, a pressure cooker can be used to decrease cooking times by raising the boiling temperature. In a vacuum, water will boil at room temperature. ==== Triple and critical points ==== On a pressure/temperature phase diagram (see figure), there are curves separating solid from vapor, vapor from liquid, and liquid from solid. These meet at a single point called the triple point, where all three phases can coexist. The triple point is at a temperature of 273.16 K (0.01 °C; 32.02 °F) and a pressure of 611.657 pascals (0.00604 atm; 0.0887 psi); it is the lowest pressure at which liquid water can exist. Until 2019, the triple point was used to define the Kelvin temperature scale. The water/vapor phase curve terminates at 647.096 K (373.946 °C; 705.103 °F) and 22.064 megapascals (3,200.1 psi; 217.75 atm). This is known as the critical point. At higher temperatures and pressures the liquid and vapor phases form a continuous phase called a supercritical fluid. It can be gradually compressed or expanded between gas-like and liquid-like densities; its properties (which are quite different from those of ambient water) are sensitive to density. For example, for suitable pressures and temperatures it can mix freely with nonpolar compounds, including most organic compounds. This makes it useful in a variety of applications including high-temperature electrochemistry and as an ecologically benign solvent or catalyst in chemical reactions involving organic compounds. In Earth's mantle, it acts as a solvent during mineral formation, dissolution and deposition. ==== Phases of ice and water ==== The normal form of ice on the surface of Earth is ice Ih, a phase that forms crystals with hexagonal symmetry. Another with cubic crystalline symmetry, ice Ic, can occur in the upper atmosphere. As the pressure increases, ice forms other crystal structures. As of 2024, twenty have been experimentally confirmed and several more are predicted theoretically. The eighteenth form of ice, ice XVIII, a face-centred-cubic, superionic ice phase, was discovered when a droplet of water was subject to a shock wave that raised the water's pressure to millions of atmospheres and its temperature to thousands of degrees, resulting in a structure of rigid oxygen atoms in which hydrogen atoms flowed freely. When sandwiched between layers of graphene, ice forms a square lattice. 
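The altitude dependence of the boiling point quoted above (a drop of about 1 °C per 274 m) can be written as a one-line function. This linear rule is a rough approximation that should only be trusted at modest altitudes.

```python
def boiling_point_c(altitude_m):
    """Boiling point of water from the linear rule quoted above
    (about 1 degC lower per 274 m); rough, and only sensible at
    modest altitudes."""
    return 100.0 - altitude_m / 274.0

for alt_m in (0, 1524, 3000):
    print(f"{alt_m:>4} m: ~{boiling_point_c(alt_m):.1f} degC")
# At 1,524 m (5,000 ft) water boils near 94.4 degC, which is why cooking
# times there must be increased by about a fourth.
```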
The details of the chemical nature of liquid water are not well understood; some theories suggest that its unusual behavior is due to the existence of two liquid states. === Taste and odor === Pure water is usually described as tasteless and odorless, although humans have specific sensors that can feel the presence of water in their mouths, and frogs are known to be able to smell it. However, water from ordinary sources (including mineral water) usually has many dissolved substances that may give it varying tastes and odors. Humans and other animals have developed senses that enable them to evaluate the potability of water in order to avoid water that is too salty or putrid. === Color and appearance === Pure water is visibly blue due to absorption of light in the region c. 600–800 nm. The color can be easily observed in a glass of tap-water placed against a pure white background, in daylight. The principal absorption bands responsible for the color are overtones of the O–H stretching vibrations. The apparent intensity of the color increases with the depth of the water column, following Beer's law. This can also be seen, for example, in a swimming pool when the light source is sunlight reflected from the pool's white tiles. In nature, the color may also be modified from blue to green due to the presence of suspended solids or algae. In industry, near-infrared spectroscopy is used with aqueous solutions as the greater intensity of the lower overtones of water means that glass cuvettes with short path-length may be employed. To observe the fundamental stretching absorption spectrum of water or of an aqueous solution in the region around 3,500 cm−1 (2.85 μm), a path length of about 25 μm is needed. Also, the cuvette must be both transparent around 3500 cm−1 and insoluble in water; calcium fluoride is one material that is in common use for the cuvette windows with aqueous solutions. The Raman-active fundamental vibrations may be observed with, for example, a 1 cm sample cell. Aquatic plants, algae, and other photosynthetic organisms can live in water up to hundreds of meters deep, because sunlight can reach them. Practically no sunlight reaches the parts of the oceans below 1,000 metres (3,300 ft) of depth. The refractive index of liquid water (1.333 at 20 °C (68 °F)) is much higher than that of air (1.0), similar to those of alkanes and ethanol, but lower than those of glycerol (1.473), benzene (1.501), carbon disulfide (1.627), and common types of glass (1.4 to 1.6). The refractive index of ice (1.31) is lower than that of liquid water. === Molecular polarity === In a water molecule, the hydrogen atoms form a 104.5° angle with the oxygen atom. The hydrogen atoms are close to two corners of a tetrahedron centered on the oxygen. At the other two corners are lone pairs of valence electrons that do not participate in the bonding. In a perfect tetrahedron, the atoms would form a 109.5° angle, but the repulsion between the lone pairs is greater than the repulsion between the hydrogen atoms. The O–H bond length is about 0.096 nm. Other substances have a tetrahedral molecular structure, for example methane (CH4) and hydrogen sulfide (H2S). However, oxygen is more electronegative than most other elements, so the oxygen atom has a negative partial charge while the hydrogen atoms are partially positively charged. Along with the bent structure, this gives the molecule an electrical dipole moment and it is classified as a polar molecule.
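A simple point-charge model ties the geometry just described to a dipole moment. The partial charge of about ±0.33e assigned to each hydrogen below is an assumed, typical model value (it is not given in the text); with the 104.5° angle and 0.096 nm bond length quoted above, it reproduces a dipole close to the measured value of roughly 1.85 D.

```python
import math

E_CHARGE = 1.602176634e-19  # elementary charge, C
DEBYE = 3.33564e-30         # C*m per debye

angle_deg = 104.5    # H-O-H angle quoted above
r_oh = 0.096e-9      # O-H bond length quoted above, m
q = 0.33 * E_CHARGE  # assumed partial charge on each hydrogen (model value)

# Each O-H bond contributes a moment q*r along the bond; the components
# perpendicular to the H-O-H bisector cancel, leaving 2*q*r*cos(angle/2).
p = 2 * q * r_oh * math.cos(math.radians(angle_deg / 2))
print(f"dipole moment ~ {p / DEBYE:.2f} D")  # ~1.86 D, near the measured ~1.85 D
```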
Water is a good polar solvent, dissolving many salts and hydrophilic organic molecules such as sugars and simple alcohols like ethanol. Water also dissolves many gases, such as oxygen and carbon dioxide, the latter giving the fizz of carbonated beverages, sparkling wines and beers. In addition, many substances in living organisms, such as proteins, DNA and polysaccharides, are dissolved in water. The interactions between water and the subunits of these biomacromolecules shape protein folding, DNA base pairing, and other phenomena crucial to life (hydrophobic effect). Many organic substances (such as fats and oils and alkanes) are hydrophobic, that is, insoluble in water. Many inorganic substances are insoluble too, including most metal oxides, sulfides, and silicates. === Hydrogen bonding === Because of its polarity, a molecule of water in the liquid or solid state can form up to four hydrogen bonds with neighboring molecules. Hydrogen bonds are about ten times as strong as the van der Waals force that attracts molecules to each other in most liquids. These bonds are the reason why the melting and boiling points of water are much higher than those of other analogous compounds like hydrogen sulfide. They also explain its exceptionally high specific heat capacity (about 4.2 J/(g·K)), heat of fusion (about 333 J/g), heat of vaporization (2257 J/g), and thermal conductivity (between 0.561 and 0.679 W/(m·K)). These properties make water more effective at moderating Earth's climate, by storing heat and transporting it between the oceans and the atmosphere. The hydrogen bonds of water are around 23 kJ/mol (compared to a covalent O–H bond at 492 kJ/mol). Of this, it is estimated that 90% is attributable to electrostatics, while the remaining 10% is partially covalent. These bonds are the cause of water's high surface tension and capillary forces. Capillary action refers to the tendency of water to move up a narrow tube against the force of gravity. This property is relied upon by all vascular plants, such as trees. === Self-ionization === Water is a weak solution of hydronium hydroxide: there is an equilibrium 2H2O ⇌ H3O+ + OH−, in combination with solvation of the resulting hydronium and hydroxide ions. === Electrical conductivity and electrolysis === Pure water has a low electrical conductivity, which increases with the dissolution of a small amount of ionic material such as common salt. Liquid water can be split into the elements hydrogen and oxygen by passing an electric current through it, a process called electrolysis. The decomposition requires more energy input than the heat released by the inverse process (285.8 kJ/mol, or 15.9 MJ/kg). === Mechanical properties === Liquid water can be assumed to be incompressible for most purposes: its compressibility ranges from 4.4 to 5.1×10−10 Pa−1 in ordinary conditions. Even in oceans at 4 km depth, where the pressure is 400 atm, water suffers only a 1.8% decrease in volume. The viscosity of water is about 10−3 Pa·s or 0.01 poise at 20 °C (68 °F), and the speed of sound in liquid water ranges between 1,400 and 1,540 metres per second (4,600 and 5,100 ft/s) depending on temperature. Sound travels long distances in water with little attenuation, especially at low frequencies (roughly 0.03 dB/km for 1 kHz), a property that is exploited by cetaceans and humans for communication and environment sensing (sonar).
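The thermal constants quoted above make a classic worked example: the energy needed to take 1 kg of ice from −10 °C to steam at 100 °C. This is a sketch using the values from the text; the specific heat of ice (about 2.1 J/(g·K)) is an additional standard value assumed here.

```python
# Energy to take 1 kg of ice at -10 degC to steam at 100 degC, using the
# constants quoted above; the specific heat of ice (~2.1 J/(g*K)) is an
# extra standard value assumed here.
m = 1000.0        # grams
c_ice = 2.1       # J/(g*K), assumed
c_water = 4.2     # J/(g*K)
L_fusion = 333.0  # J/g
L_vapor = 2257.0  # J/g

q_total = (m * c_ice * 10        # warm the ice from -10 to 0 degC
           + m * L_fusion        # melt it at 0 degC
           + m * c_water * 100   # warm the water from 0 to 100 degC
           + m * L_vapor)        # vaporize it at 100 degC
print(f"total: {q_total / 1e6:.2f} MJ")  # ~3.03 MJ, dominated by vaporization
```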
=== Reactivity === Metallic elements which are more electropositive than hydrogen, particularly the alkali metals and alkaline earth metals such as lithium, sodium, calcium, potassium and cesium, displace hydrogen from water, forming hydroxides and releasing hydrogen. At high temperatures, carbon reacts with steam to form carbon monoxide and hydrogen. == On Earth == Hydrology is the study of the movement, distribution, and quality of water throughout the Earth. The study of the distribution of water is hydrography. The study of the distribution and movement of groundwater is hydrogeology, of glaciers is glaciology, of inland waters is limnology, and of the distribution of oceans is oceanography. Ecological processes involving hydrology are the focus of ecohydrology. The collective mass of water found on, under, and over the surface of a planet is called the hydrosphere. Earth's approximate water volume (the total water supply of the world) is 1.386 billion cubic kilometres (333 million cubic miles). Liquid water is found in bodies of water, such as an ocean, sea, lake, river, stream, canal, pond, or puddle. The majority of water on Earth is seawater. Water is also present in the atmosphere in solid, liquid, and vapor states. It also exists as groundwater in aquifers. Water is important in many geological processes. Groundwater is present in most rocks, and the pressure of this groundwater affects patterns of faulting. Water in the mantle is responsible for the melt that produces volcanoes at subduction zones. On the surface of the Earth, water is important in both chemical and physical weathering processes. Water, and to a lesser but still significant extent, ice, are also responsible for a large amount of sediment transport that occurs on the surface of the earth. Deposition of transported sediment forms many types of sedimentary rocks, which make up the geologic record of Earth history. === Water cycle === The water cycle (known scientifically as the hydrologic cycle) is the continuous exchange of water within the hydrosphere, between the atmosphere, soil water, surface water, groundwater, and plants. Water moves perpetually through each of these regions in the water cycle, which consists of the following transfer processes: evaporation from oceans and other water bodies into the air and transpiration from land plants and animals into the air; precipitation, from water vapor condensing from the air and falling to the earth or ocean; and runoff from the land, usually reaching the sea. Most water vapor evaporated from the ocean returns to it, but winds carry water vapor over land at the same rate as runoff into the sea, about 47 Tt per year, while evaporation and transpiration over land masses contribute another 72 Tt per year. Precipitation, at a rate of 119 Tt per year over land, has several forms: most commonly rain, snow, and hail, with some contribution from fog and dew. Dew consists of small drops of water that condense when a high density of water vapor meets a cool surface. Dew usually forms in the morning when the temperature is the lowest, just before sunrise and when the temperature of the earth's surface starts to increase. Condensed water in the air may also refract sunlight to produce rainbows. Water runoff often collects in watersheds, flowing into rivers. Through erosion, runoff shapes the environment, creating river valleys and deltas which provide rich soil and level ground for the establishment of population centers.
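The fluxes quoted above form a closed budget for the land branch of the cycle, which the following trivial check confirms.

```python
# Steady-state budget for the land branch of the water cycle, in
# teratonnes per year, using the fluxes quoted above.
vapor_carried_onto_land = 47.0   # equals runoff from land back to the sea
land_evapotranspiration = 72.0
precipitation_over_land = 119.0

assert vapor_carried_onto_land + land_evapotranspiration == precipitation_over_land
print("land water budget balances: 47 + 72 = 119 Tt/yr")
```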
A flood occurs when an area of land, usually low-lying, is covered with water; this occurs when a river overflows its banks or a storm surge happens. On the other hand, drought is an extended period of months or years when a region notes a deficiency in its water supply. This occurs when a region receives consistently below average precipitation, either due to its topography or due to its location in terms of latitude. === Water resources === Water resources are natural resources of water that are potentially useful for humans, for example as a source of drinking water supply or irrigation water. Water occurs as both "stocks" and "flows". Water can be stored as lakes, water vapor, groundwater or aquifers, and ice and snow. Of the total volume of global freshwater, an estimated 69 percent is stored in glaciers and permanent snow cover; 30 percent is in groundwater; and the remaining 1 percent in lakes, rivers, the atmosphere, and biota. The length of time water remains in storage is highly variable: some aquifers consist of water stored over thousands of years, but lake volumes may fluctuate on a seasonal basis, decreasing during dry periods and increasing during wet ones. A substantial fraction of the water supply for some regions consists of water extracted from water stored in stocks, and when withdrawals exceed recharge, stocks decrease. By some estimates, as much as 30 percent of total water used for irrigation comes from unsustainable withdrawals of groundwater, causing groundwater depletion. === Seawater and tides === Seawater contains about 3.5% sodium chloride on average, plus smaller amounts of other substances. The physical properties of seawater differ from fresh water in some important respects. It freezes at a lower temperature (about −1.9 °C (28.6 °F)), and its density increases with decreasing temperature down to the freezing point, instead of reaching maximum density at a temperature above freezing. The salinity of water in major seas varies from about 0.7% in the Baltic Sea to 4.0% in the Red Sea. (The Dead Sea, known for its ultra-high salinity levels of between 30 and 40%, is really a salt lake.) Tides are the cyclic rising and falling of local sea levels caused by the tidal forces of the Moon and the Sun acting on the oceans. Tides cause changes in the depth of the marine and estuarine water bodies and produce oscillating currents known as tidal streams. The changing tide produced at a given location is the result of the changing positions of the Moon and Sun relative to the Earth, coupled with the effects of Earth's rotation and the local bathymetry. The strip of seashore that is submerged at high tide and exposed at low tide, the intertidal zone, is an important ecological product of ocean tides. == Effects on life == From a biological standpoint, water has many distinct properties that are critical for the proliferation of life. It carries out this role by allowing organic compounds to react in ways that ultimately allow replication. All known forms of life depend on water. Water is vital both as a solvent in which many of the body's solutes dissolve and as an essential part of many metabolic processes within the body. Metabolism is the sum total of anabolism and catabolism. In anabolism, water is removed from molecules (through energy-requiring enzymatic chemical reactions) in order to grow larger molecules (e.g., starches, triglycerides, and proteins for storage of fuels and information).
In catabolism, water is used to break bonds in order to generate smaller molecules (e.g., glucose, fatty acids, and amino acids to be used for fuels for energy use or other purposes). Without water, these particular metabolic processes could not exist. Water is fundamental to both photosynthesis and respiration. Photosynthetic cells use the sun's energy to split off water's hydrogen from oxygen. In the presence of sunlight, hydrogen is combined with CO2 (absorbed from air or water) to form glucose and release oxygen. All living cells use such fuels and oxidize the hydrogen and carbon to capture the sun's energy and reform water and CO2 in the process (cellular respiration). Water is also central to acid-base neutrality and enzyme function. An acid, a hydrogen ion (H+, that is, a proton) donor, can be neutralized by a base, a proton acceptor such as a hydroxide ion (OH−), to form water. Water is considered to be neutral, with a pH (the negative log of the hydrogen ion concentration) of 7 in an ideal state. Acids have pH values less than 7, while bases have values greater than 7. === Aquatic life forms === Earth's surface waters are filled with life. The earliest life forms appeared in water; nearly all fish live exclusively in water, and there are many types of marine mammals, such as dolphins and whales. Some kinds of animals, such as amphibians, spend portions of their lives in water and portions on land. Plants such as kelp and algae grow in the water and are the basis for some underwater ecosystems. Plankton is generally the foundation of the ocean food chain. Aquatic vertebrates must obtain oxygen to survive, and they do so in various ways. Fish have gills instead of lungs, although some species of fish, such as the lungfish, have both. Marine mammals, such as dolphins, whales, otters, and seals, need to surface periodically to breathe air. Some amphibians are able to absorb oxygen through their skin. Invertebrates exhibit a wide range of modifications to survive in poorly oxygenated waters, including breathing tubes (see insect and mollusc siphons) and gills (Carcinus). However, as invertebrate life evolved in an aquatic habitat, most have little or no specialization for respiration in water. == Effects on human civilization == Civilization has historically flourished around rivers and major waterways; Mesopotamia, one of the so-called cradles of civilization, was situated between the major rivers Tigris and Euphrates; the ancient society of the Egyptians depended entirely upon the Nile. The early Indus Valley civilization (c. 3300 BCE – c. 1300 BCE) developed along the Indus River and tributaries that flowed out of the Himalayas. Rome was also founded on the banks of the Italian river Tiber. Large metropolises like Rotterdam, London, Montreal, Paris, New York City, Buenos Aires, Shanghai, Tokyo, Chicago, and Hong Kong owe their success in part to their easy accessibility via water and the resultant expansion of trade. Islands with safe water ports, like Singapore, have flourished for the same reason. In places such as North Africa and the Middle East, where water is more scarce, access to clean drinking water was and is a major factor in human development. === Health and pollution === Water fit for human consumption is called drinking water or potable water. Water that is not potable may be made potable by filtration or distillation, or by a range of other methods. More than 660 million people do not have access to safe drinking water.
Water that is not fit for drinking but is not harmful to humans when used for swimming or bathing is called by various names other than potable or drinking water, and is sometimes called safe water, or "safe for bathing". Chlorine is a skin and mucous membrane irritant that is used to make water safe for bathing or drinking. Its use is highly technical and is usually monitored by government regulations (typically 1 part per million (ppm) for drinking water, and 1–2 ppm of chlorine not yet reacted with impurities for bathing water). Water for bathing may be maintained in satisfactory microbiological condition using chemical disinfectants such as chlorine or ozone or by the use of ultraviolet light. Water reclamation is the process of converting wastewater (most commonly sewage, also called municipal wastewater) into water that can be reused for other purposes. Some 2.3 billion people reside in nations with water scarcity, meaning that each person receives less than 1,700 cubic metres (60,000 cu ft) of water annually. 380 billion cubic metres (13×10^12 cu ft) of municipal wastewater are produced globally each year. Freshwater is a renewable resource, recirculated by the natural hydrologic cycle, but pressures over access to it result from the naturally uneven distribution in space and time, growing economic demands by agriculture and industry, and rising populations. Currently, nearly a billion people around the world lack access to safe, affordable water. In 2000, the United Nations established the Millennium Development Goals for water, to halve by 2015 the proportion of people worldwide without access to safe water and sanitation. Progress toward that goal was uneven, and in 2015 the UN committed to the Sustainable Development Goals of achieving universal access to safe and affordable water and sanitation by 2030. Poor water quality and bad sanitation are deadly; some five million deaths a year are caused by water-related diseases. The World Health Organization estimates that safe water could prevent 1.4 million child deaths from diarrhea each year. In developing countries, 90% of all municipal wastewater still goes untreated into local rivers and streams. Some 50 countries, with roughly a third of the world's population, also suffer from medium or high water scarcity, and 17 of these extract more water annually than is recharged through their natural water cycles. The strain not only affects surface freshwater bodies like rivers and lakes, but it also degrades groundwater resources. === Human uses === ==== Agriculture ==== The most substantial human use of water is for agriculture, including irrigated agriculture, which accounts for as much as 80 to 90 percent of total human water consumption. In the United States, 42% of freshwater withdrawn for use is for irrigation, but the vast majority of water "consumed" (used and not returned to the environment) goes to agriculture. Access to fresh water is often taken for granted, especially in developed countries that have built sophisticated water systems for collecting, purifying, and delivering water, and removing wastewater. But growing economic, demographic, and climatic pressures are increasing concerns about water issues, leading to increasing competition for fixed water resources, giving rise to the concept of peak water. As populations and economies continue to grow, consumption of water-thirsty meat expands, and new demands rise for biofuels or new water-intensive industries, new water challenges are likely.
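The 1,700 cubic metre per person per year figure quoted above is the usual threshold of the Falkenmark water-stress indicator. The finer 1,000 and 500 cubic metre bands in the sketch below are the other commonly cited Falkenmark levels, included here as assumed values for illustration.

```python
def water_stress_level(m3_per_person_per_year):
    """Classify annual renewable freshwater per person. The 1,700 m3
    threshold is the one quoted above; the 1,000 and 500 m3 bands are
    the other commonly cited Falkenmark levels, assumed here."""
    v = m3_per_person_per_year
    if v < 500:
        return "absolute scarcity"
    if v < 1000:
        return "scarcity"
    if v < 1700:
        return "stress"
    return "no stress"

for v in (250, 800, 1500, 3000):
    print(f"{v:>4} m3/person/yr -> {water_stress_level(v)}")
```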
An assessment of water management in agriculture was conducted in 2007 by the International Water Management Institute in Sri Lanka to see if the world had sufficient water to provide food for its growing population. It assessed the current availability of water for agriculture on a global scale and mapped out locations suffering from water scarcity. It found that a fifth of the world's people, more than 1.2 billion, live in areas of physical water scarcity, where there is not enough water to meet all demands. A further 1.6 billion people live in areas experiencing economic water scarcity, where the lack of investment in water or insufficient human capacity make it impossible for authorities to satisfy the demand for water. The report found that it would be possible to produce the food required in the future, but that continuation of today's food production and environmental trends would lead to crises in many parts of the world. To avoid a global water crisis, farmers will have to strive to increase productivity to meet growing demands for food, while industries and cities find ways to use water more efficiently. Water scarcity is also caused by the production of water-intensive products. For example, cotton: 1 kg of cotton (the equivalent of a pair of jeans) requires 10.9 cubic metres (380 cu ft) of water to produce. While cotton accounts for 2.4% of world water use, the water is consumed in regions that are already at risk of water shortage. Significant environmental damage has been caused: for example, the diversion of water by the former Soviet Union from the Amu Darya and Syr Darya rivers to produce cotton was largely responsible for the disappearance of the Aral Sea. ==== As a scientific standard ==== On 7 April 1795, the gram was defined in France to be equal to "the absolute weight of a volume of pure water equal to a cube of one-hundredth of a meter, and at the temperature of melting ice". For practical purposes though, a metallic reference standard was required, one thousand times more massive, the kilogram. Work was therefore commissioned to determine precisely the mass of one liter of water. In spite of the fact that the decreed definition of the gram specified water at 0 °C (32 °F), a highly reproducible temperature, the scientists chose to redefine the standard and to perform their measurements at the temperature of highest water density, which was measured at the time as 4 °C (39 °F). The Kelvin temperature scale of the SI system was based on the triple point of water, defined as exactly 273.16 K (0.01 °C; 32.02 °F), but as of May 2019 is based on the Boltzmann constant instead. The scale is an absolute temperature scale with the same increment as the Celsius temperature scale, which was originally defined according to the boiling point (set to 100 °C (212 °F)) and melting point (set to 0 °C (32 °F)) of water. Natural water consists mainly of the isotopes hydrogen-1 and oxygen-16, but there is also a small quantity of the heavier isotopes oxygen-18, oxygen-17, and hydrogen-2 (deuterium). The percentage of the heavier isotopes is very small, but it still affects the properties of water. Water from rivers and lakes tends to contain less of the heavier isotopes than seawater. Therefore, standard water is defined in the Vienna Standard Mean Ocean Water specification. ==== For drinking ==== The human body contains from 55% to 78% water, depending on body size.
To function properly, the body requires between one and seven litres (0.22 and 1.54 imp gal; 0.26 and 1.85 US gal) of water per day to avoid dehydration; the precise amount depends on the level of activity, temperature, humidity, and other factors. Most of this is ingested through foods or beverages other than drinking straight water. It is not clear how much water intake is needed by healthy people, though the British Dietetic Association advises that 2.5 liters of total water daily is the minimum to maintain proper hydration, including 1.8 liters (6 to 7 glasses) obtained directly from beverages. Medical literature favors a lower consumption, typically 1 liter of water for an average male, excluding extra requirements due to fluid loss from exercise or warm weather. Healthy kidneys can excrete 0.8 to 1 liter of water per hour, but stress such as exercise can reduce this amount. People can drink far more water than necessary while exercising, putting them at risk of water intoxication (hyperhydration), which can be fatal. The popular claim that "a person should consume eight glasses of water per day" seems to have no real basis in science. Studies have shown that extra water intake, especially up to 500 millilitres (18 imp fl oz; 17 US fl oz) at mealtime, was associated with weight loss. Adequate fluid intake is helpful in preventing constipation. An original recommendation for water intake in 1945 by the Food and Nutrition Board of the U.S. National Research Council read: "An ordinary standard for diverse persons is 1 milliliter for each calorie of food. Most of this quantity is contained in prepared foods." The latest dietary reference intake report by the U.S. National Research Council in general recommended, based on the median total water intake from US survey data (including food sources): 3.7 litres (0.81 imp gal; 0.98 US gal) for men and 2.7 litres (0.59 imp gal; 0.71 US gal) of water total for women, noting that water contained in food provided approximately 19% of total water intake in the survey. In particular, pregnant and breastfeeding women need additional fluids to stay hydrated. The US Institute of Medicine recommends that, on average, men consume 3 litres (0.66 imp gal; 0.79 US gal) and women 2.2 litres (0.48 imp gal; 0.58 US gal); pregnant women should increase intake to 2.4 litres (0.53 imp gal; 0.63 US gal) and breastfeeding women should get 3 liters (12 cups), since an especially large amount of fluid is lost during nursing. Also noted is that normally, about 20% of water intake comes from food, while the rest comes from drinking water and beverages (caffeinated included). Water is excreted from the body in multiple forms: through urine and feces, through sweating, and by exhalation of water vapor in the breath. With physical exertion and heat exposure, water loss will increase and daily fluid needs may increase as well. Humans require water with few impurities. Common impurities include metal salts and oxides, including copper, iron, calcium and lead, and harmful bacteria, such as Vibrio. Some solutes are acceptable and even desirable for taste enhancement and to provide needed electrolytes. The single largest (by volume) freshwater resource suitable for drinking is Lake Baikal in Siberia. ==== Washing ==== ==== Transportation ==== ==== Chemical uses ==== Water is widely used in chemical reactions as a solvent or reactant and less commonly as a solute or catalyst.
In inorganic reactions, water is a common solvent, dissolving many ionic compounds, as well as other polar compounds such as ammonia and compounds closely related to water. In organic reactions, it is not usually used as a reaction solvent, because it does not dissolve the reactants well and is amphoteric (acidic and basic) and nucleophilic. Nevertheless, these properties are sometimes desirable. Also, acceleration of Diels-Alder reactions by water has been observed. Supercritical water has recently been a topic of research. Oxygen-saturated supercritical water combusts organic pollutants efficiently. ==== Heat exchange ==== Water and steam are common fluids used for heat exchange, both for cooling and heating, due to water's availability and high heat capacity. Cool water may even be naturally available from a lake or the sea. It is especially effective to transport heat through vaporization and condensation of water because of its large latent heat of vaporization. A disadvantage is that metals commonly found in industry, such as steel and copper, are oxidized faster by untreated water and steam. In almost all thermal power stations, water is used as the working fluid (used in a closed loop between boiler, steam turbine, and condenser), and the coolant (used to exchange the waste heat to a water body or carry it away by evaporation in a cooling tower). In the United States, cooling power plants is the largest use of water. In the nuclear power industry, water can also be used as a neutron moderator. In most nuclear reactors, water is both a coolant and a moderator. This provides something of a passive safety measure, as removing the water from the reactor also slows the nuclear reaction down. However, other methods are favored for stopping a reaction, and it is preferred to keep the nuclear core covered with water so as to ensure adequate cooling. ==== Fire considerations ==== Water has a high heat of vaporization and is relatively inert, which makes it a good fire extinguishing fluid. The evaporation of water carries heat away from the fire. It is dangerous to use water on fires involving oils and organic solvents, because many organic materials float on water and the water tends to spread the burning liquid. Use of water in fire fighting should also take into account the hazards of a steam explosion, which may occur when water is used on very hot fires in confined spaces, and of a hydrogen explosion, when substances which react with water, such as certain metals or hot carbon (such as coal, charcoal, coke, or graphite), decompose the water, producing water gas. The power of such explosions was seen in the Chernobyl disaster, although the water involved in this case did not come from fire-fighting but from the reactor's own water cooling system. A steam explosion occurred when the extreme overheating of the core caused water to flash into steam. A hydrogen explosion may have occurred as a result of a reaction between steam and hot zirconium. Some metallic oxides, most notably those of alkali metals and alkaline earth metals, produce so much heat in reaction with water that a fire hazard can develop. The alkaline earth oxide quicklime, also known as calcium oxide, is a mass-produced substance that is often transported in paper bags. If these are soaked through, they may ignite as their contents react with water. ==== Recreation ==== Humans use water for many recreational purposes, as well as for exercising and for sports. Some of these include swimming, waterskiing, boating, surfing and diving.
In addition, some sports, like ice hockey and ice skating, are played on ice. Lakesides, beaches and water parks are popular places for people to go to relax and enjoy recreation. Many find the sound and appearance of flowing water to be calming, and fountains and other flowing water structures are popular decorations. Some keep fish and other flora and fauna in aquariums or ponds for show, fun, and companionship. Humans also use water for snow sports such as skiing, sledding, snowmobiling or snowboarding, which require the water to be frozen, either as ice or crystallized into snow. ==== Water industry ==== The water industry provides drinking water and wastewater services (including sewage treatment) to households and industry. Water supply facilities include water wells, cisterns for rainwater harvesting, water supply networks, water purification facilities, water tanks, water towers, and water pipes, including old aqueducts. Atmospheric water generators are in development. Drinking water is often collected at springs, extracted from artificial borings (wells) in the ground, or pumped from lakes and rivers. Building more wells in adequate places is thus a possible way to produce more water, assuming the aquifers can supply an adequate flow. Other water sources include rainwater collection. Water may require purification for human consumption. This may involve the removal of undissolved substances, dissolved substances and harmful microbes. Popular methods include filtering with sand, which removes only undissolved material; chlorination and boiling, which kill harmful microbes; and distillation, which performs all three functions. More advanced techniques exist, such as reverse osmosis. Desalination of abundant seawater is a more expensive solution used in coastal arid climates. The distribution of drinking water is done through municipal water systems, tanker delivery or as bottled water. Governments in many countries have programs to distribute water to the needy at no charge. Reducing usage by using drinking (potable) water only for human consumption is another option. In some cities such as Hong Kong, seawater is extensively used for flushing toilets citywide in order to conserve freshwater resources. Polluting water may be the biggest single misuse of water; to the extent that a pollutant limits other uses of the water, it becomes a waste of the resource, regardless of benefits to the polluter. Like other types of pollution, this does not enter standard accounting of market costs, being conceived as externalities for which the market cannot account. Thus other people pay the price of water pollution, while the private firms' profits are not redistributed to the local population, which is the victim of this pollution. Pharmaceuticals consumed by humans often end up in the waterways and can have detrimental effects on aquatic life if they bioaccumulate and if they are not biodegradable. Municipal and industrial wastewater are typically treated at wastewater treatment plants. Mitigation of polluted surface runoff is addressed through a variety of prevention and treatment techniques. ==== Industrial applications ==== Many industrial processes rely on reactions using chemicals dissolved in water, suspension of solids in water slurries, or the use of water to dissolve and extract substances, or to wash products and process equipment. 
Processes such as mining, chemical pulping, pulp bleaching, paper manufacturing, textile production, dyeing, printing, and the cooling of power plants use large amounts of water, requiring a dedicated water source, and often cause significant water pollution. Water is used in power generation. Hydroelectricity is electricity obtained from hydropower. Hydroelectric power comes from water driving a water turbine connected to a generator. Hydroelectricity is a low-cost, non-polluting, renewable energy source. The energy is supplied by the motion of water. Typically a dam is constructed on a river, creating an artificial lake behind it. Water flowing out of the lake is forced through turbines that turn generators. Pressurized water is used in water blasting and water jet cutters. High-pressure water guns are used for precise cutting. This works very well, is relatively safe, and is not harmful to the environment. Water is also used in the cooling of machinery, for example to prevent saw blades from overheating. Water is also used in many industrial processes and machines, such as the steam turbine and heat exchanger, in addition to its use as a chemical solvent. Discharge of untreated water from industrial uses is pollution. Pollution includes discharged solutes (chemical pollution) and discharged coolant water (thermal pollution). Industry requires pure water for many applications and uses a variety of purification techniques both in water supply and discharge. ==== Food processing ==== Boiling, steaming, and simmering are popular cooking methods that often require immersing food in water or its gaseous state, steam. Water is also used for dishwashing. Water also plays many critical roles within the field of food science. Solutes such as salts and sugars found in water affect the physical properties of water. The boiling and freezing points of water are affected by solutes, as well as by air pressure, which is in turn affected by altitude. Water boils at lower temperatures with the lower air pressure that occurs at higher elevations. One mole of sucrose (sugar) per kilogram of water raises the boiling point of water by 0.51 °C (0.918 °F), and one mole of salt per kg raises the boiling point by 1.02 °C (1.836 °F); similarly, increasing the number of dissolved particles lowers water's freezing point. Solutes in water also affect water activity, which in turn affects many chemical reactions and the growth of microbes in food. Water activity can be described as a ratio of the vapor pressure of water in a solution to the vapor pressure of pure water. Solutes in water lower water activity; this is important to know because most bacterial growth ceases at low levels of water activity. Not only does microbial growth affect the safety of food, but also its preservation and shelf life. Water hardness is also a critical factor in food processing and may be altered or treated by using a chemical ion exchange system. It can dramatically affect the quality of a product, as well as playing a role in sanitation. Water hardness is classified based on the concentration of calcium carbonate the water contains. Water is classified as soft if it contains less than 100 mg/L (UK) or less than 60 mg/L (US). 
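The colligative figures quoted above follow from the standard ebullioscopic relation ΔTb = i · Kb · m, where m is the molality, i is the number of dissolved particles per formula unit, and Kb ≈ 0.512 °C·kg/mol for water. A minimal sketch in Python, assuming the textbook value of Kb; the two calls simply reproduce the sucrose and salt figures given above:

KB_WATER = 0.512  # ebullioscopic constant of water, deg C * kg / mol (textbook value)

def boiling_point_elevation(molality, vant_hoff_i=1, kb=KB_WATER):
    """Boiling-point elevation in deg C for a solute of given molality (mol per kg of water)."""
    return vant_hoff_i * kb * molality

# 1 mol of sucrose per kg of water stays one particle in solution (i = 1):
print(boiling_point_elevation(1.0, vant_hoff_i=1))  # ~0.51 deg C, as stated above
# 1 mol of NaCl dissociates into Na+ and Cl- (i = 2):
print(boiling_point_elevation(1.0, vant_hoff_i=2))  # ~1.02 deg C, as stated above

The same van 't Hoff factor i is why salt is roughly twice as effective per mole as sugar at shifting water's freezing and boiling points.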
According to a report published by the Water Footprint organization in 2010, a single kilogram of beef requires 15 thousand litres (3.3×10^3 imp gal; 4.0×10^3 US gal) of water; however, the authors also make clear that this is a global average and that circumstantial factors determine the amount of water used in beef production. ==== Medical use ==== Water for injection is on the World Health Organization's list of essential medicines. == Distribution in nature == === In the universe === Much of the universe's water is produced as a byproduct of star formation. The formation of stars is accompanied by a strong outward wind of gas and dust. When this outflow of material eventually impacts the surrounding gas, the shock waves that are created compress and heat the gas. The water observed is quickly produced in this warm dense gas. On 22 July 2011, a report described the discovery of a gigantic cloud of water vapor containing "140 trillion times more water than all of Earth's oceans combined" around a quasar located 12 billion light years from Earth. According to the researchers, the "discovery shows that water has been prevalent in the universe for nearly its entire existence". Water has been detected in interstellar clouds within the Milky Way. Water probably exists in abundance in other galaxies, too, because its components, hydrogen and oxygen, are among the most abundant elements in the universe. Based on models of the formation and evolution of the Solar System and those of other star systems, most other planetary systems are likely to have similar ingredients. ==== Water vapor ==== Water is present as vapor in:
Atmosphere of the Sun: in detectable trace amounts
Atmosphere of Mercury: 3.4%, and large amounts of water in Mercury's exosphere
Atmosphere of Venus: 0.002%
Earth's atmosphere: ≈0.40% over the full atmosphere, typically 1–4% at the surface
Atmosphere of the Moon: in trace amounts
Atmosphere of Mars: 0.03%
Atmosphere of Ceres
Atmosphere of Jupiter: 0.0004%, in ices only; and that of its moon Europa
Atmosphere of Saturn: in ices only; Enceladus: 91%; and Dione (exosphere)
Atmosphere of Uranus: in trace amounts below 50 bar
Atmosphere of Neptune: found in the deeper layers
Extrasolar planet atmospheres: including those of HD 189733 b and HD 209458 b, Tau Boötis b, HAT-P-11b, XO-1b, WASP-12b, WASP-17b, and WASP-19b
Stellar atmospheres: not limited to cooler stars, and even detected in giant hot stars such as Betelgeuse, Mu Cephei, Antares and Arcturus
Circumstellar disks: including those of more than half of T Tauri stars such as AA Tauri, as well as TW Hydrae, IRC +10216, APM 08279+5255, VY Canis Majoris and S Persei
==== Liquid water ==== Liquid water is present on Earth, covering 71% of its surface. Liquid water is also occasionally present in small amounts on Mars. Scientists believe liquid water is present on the Saturnian moons Enceladus, as a 10-kilometre-thick ocean approximately 30–40 kilometres below Enceladus' south polar surface, and Titan, as a subsurface layer, possibly mixed with ammonia. Jupiter's moon Europa has surface characteristics which suggest a subsurface liquid water ocean. Liquid water may also exist on Jupiter's moon Ganymede as a layer sandwiched between high-pressure ice and rock. ==== Water ice ==== Water is present as ice on:
Mars: under the regolith and at the poles
Earth–Moon system: mainly as ice sheets on Earth, and in lunar craters and volcanic rocks; NASA reported the detection of water molecules by its Moon Mineralogy Mapper aboard the Indian Space Research Organisation's Chandrayaan-1 spacecraft in September 2009
Ceres
Jupiter's moons: Europa's surface and also that of Ganymede and Callisto
Saturn: in the planet's ring system and on the surface and mantle of Titan and Enceladus
Pluto–Charon system
Comets and other related Kuiper belt and Oort cloud objects
Water ice is also likely present on Mercury's poles and on Tethys. ==== Exotic forms ==== Water and other volatiles probably comprise much of the internal structures of Uranus and Neptune, and the water in the deeper layers may be in the form of ionic water, in which the molecules break down into a soup of hydrogen and oxygen ions, and deeper still as superionic water, in which the oxygen crystallizes but the hydrogen ions float about freely within the oxygen lattice. === Water and planetary habitability === The existence of liquid water, and to a lesser extent its gaseous and solid forms, on Earth is vital to the existence of life on Earth as we know it. The Earth is located in the habitable zone of the Solar System; if it were slightly closer to or farther from the Sun (about 5%, or about 8 million kilometres), the conditions which allow the three forms to be present simultaneously would be far less likely to exist. Earth's gravity allows it to hold an atmosphere. Water vapor and carbon dioxide in the atmosphere provide a temperature buffer (greenhouse effect) which helps maintain a relatively steady surface temperature. If Earth were smaller, a thinner atmosphere would allow temperature extremes, thus preventing the accumulation of water except in polar ice caps (as on Mars). The surface temperature of Earth has been relatively constant through geologic time despite varying levels of incoming solar radiation (insolation), indicating that a dynamic process governs Earth's temperature via a combination of greenhouse gases and surface or atmospheric albedo. This proposal is known as the Gaia hypothesis. The state of water on a planet depends on ambient pressure, which is determined by the planet's gravity. If a planet is sufficiently massive, the water on it may be solid even at high temperatures, because of the high pressure caused by gravity, as was observed on the exoplanets Gliese 436 b and GJ 1214 b. == Law, politics, and crisis == Water politics is politics affected by water and water resources. Water, particularly fresh water, is a strategic resource across the world and an important element in many political conflicts. Lack of access causes health impacts and damage to biodiversity. Access to safe drinking water has improved over the last decades in almost every part of the world, but approximately one billion people still lack access to safe water and over 2.5 billion lack access to adequate sanitation. However, some observers have estimated that by 2025 more than half of the world population will be facing water-based vulnerability. A report issued in November 2009 suggests that by 2030, in some developing regions of the world, water demand will exceed supply by 50%. 1.6 billion people have gained access to a safe water source since 1990. The proportion of people in developing countries with access to safe water is calculated to have improved from 30% in 1970 to 71% in 1990, 79% in 2000, and 84% in 2004. 
A 2006 United Nations report stated that "there is enough water for everyone", but that access to it is hampered by mismanagement and corruption. In addition, global initiatives to improve the efficiency of aid delivery, such as the Paris Declaration on Aid Effectiveness, have not been taken up by water-sector donors as effectively as they have in education and health, potentially leaving multiple donors working on overlapping projects and recipient governments without the empowerment to act. The authors of the 2007 Comprehensive Assessment of Water Management in Agriculture cited poor governance as one reason for some forms of water scarcity. Water governance is the set of formal and informal processes through which decisions related to water management are made. Good water governance is primarily about knowing what processes work best in a particular physical and socioeconomic context. Mistakes have sometimes been made by trying to apply 'blueprints' that work in the developed world to developing-world locations and contexts. The Mekong River is one example; a review by the International Water Management Institute of policies in six countries that rely on the Mekong River for water found that thorough and transparent cost-benefit analyses and environmental impact assessments were rarely undertaken. They also discovered that Cambodia's draft water law was much more complex than it needed to be. In 2004, the UK charity WaterAid reported that a child dies every 15 seconds from easily preventable water-related diseases, which are often tied to a lack of adequate sanitation. Since 2003, the UN World Water Development Report, produced by the UNESCO World Water Assessment Programme, has provided decision-makers with tools for developing sustainable water policies. The 2023 report states that two billion people (26% of the world's population) do not have access to drinking water and that 3.6 billion (46%) lack access to safely managed sanitation; it projects that 2.4 billion people in urban areas will face water scarcity by 2050. Water scarcity has been described as endemic, due to overconsumption and pollution. The report states that 10% of the world's population lives in countries with high or critical water stress. Yet over the past 40 years, water consumption has increased by around 1% per year, and is expected to grow at the same rate until 2050. Since 2000, flooding in the tropics has quadrupled, while flooding in the northern mid-latitudes has increased by a factor of 2.5. Between 2000 and 2019, these floods caused 100,000 deaths and $650 million in damage. Organizations concerned with water protection include the International Water Association (IWA), WaterAid, Water 1st, and the American Water Resources Association. The International Water Management Institute undertakes projects with the aim of using effective water management to reduce poverty. Water-related conventions include the United Nations Convention to Combat Desertification (UNCCD), the International Convention for the Prevention of Pollution from Ships, the United Nations Convention on the Law of the Sea, and the Ramsar Convention. World Day for Water takes place on 22 March and World Oceans Day on 8 June. == In culture == === Religion === Water is considered a purifier in most religions. Faiths that incorporate ritual washing (ablution) include Christianity, Hinduism, Islam, Judaism, the Rastafari movement, Shinto, Taoism, and Wicca. 
Immersion (or aspersion or affusion) of a person in water is a central Sacrament of Christianity (where it is called baptism); it is also a part of the practice of other religions, including Islam (Ghusl), Judaism (mikvah) and Sikhism (Amrit Sanskar). In addition, a ritual bath in pure water is performed for the dead in many religions including Islam and Judaism. In Islam, the five daily prayers can be done in most cases after washing certain parts of the body using clean water (wudu), unless water is unavailable (see Tayammum). In Shinto, water is used in almost all rituals to cleanse a person or an area (e.g., in the ritual of misogi). In Christianity, holy water is water that has been sanctified by a priest for the purpose of baptism, the blessing of persons, places, and objects, or as a means of repelling evil. In Zoroastrianism, water (āb) is respected as the source of life. === Philosophy === The Ancient Greek philosopher Empedocles saw water as one of the four classical elements (along with fire, earth, and air), and regarded it as an ylem, or basic substance of the universe. Thales, whom Aristotle portrayed as an astronomer and an engineer, theorized that the earth, which is denser than water, emerged from the water. Thales, a monist, believed further that all things are made from water. Plato believed that the shape of water is an icosahedron – flowing easily compared to the cube-shaped earth. The theory of the four bodily humors associated water with phlegm, as being cold and moist. The classical element of water was also one of the five elements in traditional Chinese philosophy (along with earth, fire, wood, and metal). Some traditional and popular Asian philosophical systems take water as a role-model. James Legge's 1891 translation of the Dao De Jing states, "The highest excellence is like (that of) water. The excellence of water appears in its benefiting all things, and in its occupying, without striving (to the contrary), the low place which all men dislike. Hence (its way) is near to (that of) the Tao" and "There is nothing in the world more soft and weak than water, and yet for attacking things that are firm and strong there is nothing that can take precedence of it—for there is nothing (so effectual) for which it can be changed." Guanzi in the "Shui di" 水地 chapter further elaborates on the symbolism of water, proclaiming that "man is water" and attributing natural qualities of the people of different Chinese regions to the character of local water resources. === Folklore === "Living water" features in Germanic and Slavic folktales as a means of bringing the dead back to life. Note the Grimm fairy-tale ("The Water of Life") and the Russian dichotomy of living and dead water. The Fountain of Youth represents a related concept of magical waters allegedly preventing aging. === Art and activism === In the significant modernist novel Ulysses (1922) by Irish writer James Joyce, the chapter "Ithaca" takes the form of a catechism of 309 questions and answers, one of which is known as the "water hymn".: 91  According to Richard E. 
Madtes, the hymn is not merely a "monotonous string of facts", rather, its phrases, like their subject, "ebb and flow, heave and swell, gather and break, until they subside into the calm quiescence of the concluding 'pestilential fens, faded flowerwater, stagnant pools in the waning moon.'": 79  The hymn is considered one of the most remarkable passages in Ithaca, and according to literary critic Hugh Kenner, achieves "the improbable feat of raising to poetry all the clutter of footling information that has accumulated in schoolbooks.": 91  The literary motif of water represents the novel's theme of "everlasting, everchanging life," and the hymn represents the culmination of the motif in the novel.: 91  The following is the hymn quoted in full. What in water did Bloom, waterlover, drawer of water, watercarrier returning to the range, admire?Its universality: its democratic equality and constancy to its nature in seeking its own level: its vastness in the ocean of Mercator’s projection: its unplumbed profundity in the Sundam trench of the Pacific exceeding 8,000 fathoms: the restlessness of its waves and surface particles visiting in turn all points of its seaboard: the independence of its units: the variability of states of sea: its hydrostatic quiescence in calm: its hydrokinetic turgidity in neap and spring tides: its subsidence after devastation: its sterility in the circumpolar icecaps, arctic and antarctic: its climatic and commercial significance: its preponderance of 3 to 1 over the dry land of the globe: its indisputable hegemony extending in square leagues over all the region below the subequatorial tropic of Capricorn: the multisecular stability of its primeval basin: its luteofulvous bed: its capacity to dissolve and hold in solution all soluble substances including millions of tons of the most precious metals: its slow erosions of peninsulas and downwardtending promontories: its alluvial deposits: its weight and volume and density: its imperturbability in lagoons and highland tarns: its gradation of colours in the torrid and temperate and frigid zones: its vehicular ramifications in continental lakecontained streams and confluent oceanflowing rivers with their tributaries and transoceanic currents: gulfstream, north and south equatorial courses: its violence in seaquakes, waterspouts, artesian wells, eruptions, torrents, eddies, freshets, spates, groundswells, watersheds, waterpartings, geysers, cataracts, whirlpools, maelstroms, inundations, deluges, cloudbursts: its vast circumterrestrial ahorizontal curve: its secrecy in springs, and latent humidity, revealed by rhabdomantic or hygrometric instruments and exemplified by the well by the hole in the wall at Ashtown gate, saturation of air, distillation of dew: the simplicity of its composition, two constituent parts of hydrogen with one constituent part of oxygen: its healing virtues: its buoyancy in the waters of the Dead Sea: its persevering penetrativeness in runnels, gullies, inadequate dams, leaks on shipboard: its properties for cleansing, quenching thirst and fire, nourishing vegetation: its infallibility as paradigm and paragon: its metamorphoses as vapour, mist, cloud, rain, sleet, snow, hail: its strength in rigid hydrants: its variety of forms in loughs and bays and gulfs and bights and guts and lagoons and atolls and archipelagos and sounds and fjords and minches and tidal estuaries and arms of sea: its solidity in glaciers, icebergs, icefloes: its docility in working hydraulic millwheels, turbines, dynamos, 
electric power stations, bleachworks, tanneries, scutchmills: its utility in canals, rivers, if navigable, floating and graving docks: its potentiality derivable from harnessed tides or watercourses falling from level to level: its submarine fauna and flora (anacoustic, photophobe) numerically, if not literally, the inhabitants of the globe: its ubiquity as constituting 90% of the human body: the noxiousness of its effluvia in lacustrine marshes, pestilential fens, faded flowerwater, stagnant pools in the waning moon. Painter and activist Fredericka Foster curated The Value of Water at the Cathedral of St. John the Divine in New York City, which anchored a year-long initiative by the Cathedral on humanity's dependence on water. The largest exhibition ever to appear at the Cathedral, it featured over forty artists, including Jenny Holzer, Robert Longo, Mark Rothko, William Kentridge, April Gornik, Kiki Smith, Pat Steir, Alice Dalton Brown, Teresita Fernandez and Bill Viola. Foster created Think About Water, an ecological collective of artists who use water as their subject or medium. Members include Basia Irland, Aviva Rahmani, Betsy Damon, Diane Burko, Leila Daw, Stacy Levy, Charlotte Coté, Meridel Rubenstein, and Anna Macleod. To mark the 10th anniversary of access to water and sanitation being declared a human right by the UN, the charity WaterAid commissioned ten visual artists to show the impact of clean water on people's lives. === Dihydrogen monoxide parody === 'Dihydrogen monoxide' is a technically correct but rarely used chemical name for water. This name has been used in a series of hoaxes and pranks that mock scientific illiteracy, beginning in 1983, when an April Fools' Day article appeared in a newspaper in Durand, Michigan. The false story consisted of safety concerns about the substance. === Music === The word "water" has been used by many Florida-based rappers as a catchphrase or ad-lib, including BLP Kosher and Ski Mask the Slump God. Some rappers have dedicated whole songs to the water in Florida, such as the 2023 Danny Towers song "Florida Water"; others have dedicated songs to water itself, such as XXXTentacion and Ski Mask the Slump God with their song "H2O".
Wikipedia/Water_(molecule)
In quantum field theory, the Casimir effect (or Casimir force) is a physical force acting on the macroscopic boundaries of a confined space which arises from the quantum fluctuations of a field. The term Casimir pressure is sometimes used when it is described in units of force per unit area. It is named after the Dutch physicist Hendrik Casimir, who predicted the effect for electromagnetic systems in 1948. In the same year, Casimir, together with Dirk Polder, described a similar effect experienced by a neutral atom in the vicinity of a macroscopic interface, which is called the Casimir–Polder force. Their result is a generalization of the London–van der Waals force and includes retardation due to the finite speed of light. The fundamental principles leading to the London–van der Waals force, the Casimir force, and the Casimir–Polder force can be formulated on the same footing. In 1997, a direct experiment by Steven K. Lamoreaux quantitatively measured the Casimir force to within 5% of the value predicted by the theory. The Casimir effect can be understood by the idea that the presence of macroscopic material interfaces, such as electrical conductors and dielectrics, alters the vacuum expectation value of the energy of the second-quantized electromagnetic field. Since the value of this energy depends on the shapes and positions of the materials, the Casimir effect manifests itself as a force between such objects. Any medium supporting oscillations has an analogue of the Casimir effect. For example, beads on a string, as well as plates submerged in turbulent water or gas, illustrate the Casimir force. In modern theoretical physics, the Casimir effect plays an important role in the chiral bag model of the nucleon; in applied physics it is significant in some aspects of emerging microtechnologies and nanotechnologies. == Physical properties == The typical example is of two uncharged conductive plates in a vacuum, placed a few nanometers apart. In a classical description, the lack of an external field means that no field exists between the plates, and that no force acts between them. When this field is instead studied using the quantum electrodynamic vacuum, it is seen that the plates do affect the virtual photons that constitute the field, and generate a net force – either an attraction or a repulsion depending on the plates' specific arrangement. Although the Casimir effect can be expressed in terms of virtual particles interacting with the objects, it is best described and more easily calculated in terms of the zero-point energy of a quantized field in the intervening space between the objects. This force has been measured and is a striking example of an effect captured formally by second quantization. The treatment of boundary conditions in these calculations is controversial. In fact, "Casimir's original goal was to compute the van der Waals force between polarizable molecules" of the conductive plates. Thus it can be interpreted without any reference to the zero-point energy (vacuum energy) of quantum fields. Because the strength of the force falls off rapidly with distance, it is measurable only when the distance between the objects is small. This force becomes so strong that it becomes the dominant force between uncharged conductors at submicron scales. In fact, at separations of 10 nm – about 100 times the typical size of an atom – the Casimir effect produces the equivalent of about 1 atmosphere of pressure (the precise value depends on surface geometry and other factors). 
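The "about 1 atmosphere at 10 nm" figure can be checked against the ideal parallel-plate result derived later in this article, Fc/A = π²ħc/(240a⁴). A minimal numerical sketch, assuming perfectly conducting plates (real materials reduce the force somewhat):

import math

hbar = 1.054571817e-34  # reduced Planck constant, J*s
c = 2.99792458e8        # speed of light, m/s

def casimir_pressure(a):
    """Ideal Casimir force per unit area (Pa) between perfect conductors at separation a (m)."""
    return math.pi**2 * hbar * c / (240 * a**4)

print(casimir_pressure(10e-9) / 101325)  # ~1.3 atmospheres at a 10 nm gap, as estimated above

Because of the 1/a^4 scaling, widening the gap tenfold to 100 nm already reduces the pressure by a factor of 10,000.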
== History == Dutch physicists Hendrik Casimir and Dirk Polder at Philips Research Labs proposed the existence of a force between two polarizable atoms and between such an atom and a conducting plate in 1947; this special form is called the Casimir–Polder force. After a conversation with Niels Bohr, who suggested it had something to do with zero-point energy, Casimir alone formulated the theory predicting a force between neutral conducting plates in 1948. This latter phenomenon is called the Casimir effect. Predictions of the force were later extended to finite-conductivity metals and dielectrics, while later calculations considered more general geometries. Experiments before 1997 observed the force qualitatively, and indirect validation of the predicted Casimir energy was made by measuring the thickness of liquid helium films. Finally, in 1997 Lamoreaux's direct experiment quantitatively measured the force to within 5% of the value predicted by the theory. Subsequent experiments approached an accuracy of a few percent. == Possible causes == === Vacuum energy === The causes of the Casimir effect are described by quantum field theory, which states that all of the various fundamental fields, such as the electromagnetic field, must be quantized at each and every point in space. In a simplified view, a "field" in physics may be envisioned as if space were filled with interconnected vibrating balls and springs, and the strength of the field can be visualized as the displacement of a ball from its rest position. Vibrations in this field propagate and are governed by the appropriate wave equation for the particular field in question. The second quantization of quantum field theory requires that each such ball-spring combination be quantized, that is, that the strength of the field be quantized at each point in space. At the most basic level, the field at each point in space is a simple harmonic oscillator, and its quantization places a quantum harmonic oscillator at each point. Excitations of the field correspond to the elementary particles of particle physics. However, even the vacuum has a vastly complex structure, so all calculations of quantum field theory must be made in relation to this model of the vacuum. The vacuum has, implicitly, all of the properties that a particle may have: spin, polarization in the case of light, energy, and so on. On average, most of these properties cancel out: the vacuum is, after all, "empty" in this sense. One important exception is the vacuum energy or the vacuum expectation value of the energy. The quantization of a simple harmonic oscillator states that the lowest possible energy or zero-point energy that such an oscillator may have is {\displaystyle E={\tfrac {1}{2}}\hbar \omega \,.} Summing over all possible oscillators at all points in space gives an infinite quantity. Since only differences in energy are physically measurable (with the notable exception of gravitation, which remains beyond the scope of quantum field theory), this infinity may be considered a feature of the mathematics rather than of the physics. This argument is the underpinning of the theory of renormalization. Dealing with infinite quantities in this way was a cause of widespread unease among quantum field theorists before the development in the 1970s of the renormalization group, a mathematical formalism for scale transformations that provides a natural basis for the process. 
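The divergence of the naive zero-point sum is easy to exhibit numerically: integrating ħω/2 over the standard electromagnetic mode density ω²/(π²c³) up to an angular-frequency cutoff gives an energy density ħω_c⁴/(8π²c³) that grows as the fourth power of the cutoff. A small sketch; the cutoff values are purely illustrative:

import math

hbar = 1.054571817e-34  # J*s
c = 2.99792458e8        # m/s

def vacuum_energy_density(omega_cutoff):
    """Zero-point EM energy density (J/m^3) summed up to an angular-frequency cutoff (rad/s)."""
    # integral of (hbar*omega/2) * omega^2 / (pi^2 c^3) d omega, from 0 to omega_cutoff
    return hbar * omega_cutoff**4 / (8 * math.pi**2 * c**3)

for omega_c in (1e15, 1e16, 1e17):  # optical to extreme-ultraviolet cutoffs
    print(omega_c, vacuum_energy_density(omega_c))  # grows 10,000-fold per decade of cutoff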
When the scope of the physics is widened to include gravity, the interpretation of this formally infinite quantity remains problematic. There is currently no compelling explanation as to why it should not result in a cosmological constant that is many orders of magnitude larger than observed. However, since we do not yet have any fully coherent quantum theory of gravity, there is likewise no compelling reason as to why it should instead actually result in the value of the cosmological constant that we observe. The Casimir effect for fermions can be understood as the spectral asymmetry of the fermion operator (−1)^F, where it is known as the Witten index. === Relativistic van der Waals force === Alternatively, a 2005 paper by Robert Jaffe of MIT states that "Casimir effects can be formulated and Casimir forces can be computed without reference to zero-point energies. They are relativistic, quantum forces between charges and currents. The Casimir force (per unit area) between parallel plates vanishes as alpha, the fine structure constant, goes to zero, and the standard result, which appears to be independent of alpha, corresponds to the alpha approaching infinity limit", and that "The Casimir force is simply the (relativistic, retarded) van der Waals force between the metal plates." Casimir and Polder's original paper used this method to derive the Casimir–Polder force. In 1978, Schwinger, DeRaad, and Milton published a similar derivation for the Casimir effect between two parallel plates. More recently, Nikolic proved from first principles of quantum electrodynamics that the Casimir force does not originate from the vacuum energy of the electromagnetic field, and explained in simple terms why the fundamental microscopic origin of the Casimir force lies in van der Waals forces. == Effects == Casimir's observation was that the second-quantized quantum electromagnetic field, in the presence of bulk bodies such as metals or dielectrics, must obey the same boundary conditions that the classical electromagnetic field must obey. In particular, this affects the calculation of the vacuum energy in the presence of a conductor or dielectric. Consider, for example, the calculation of the vacuum expectation value of the electromagnetic field inside a metal cavity, such as a radar cavity or a microwave waveguide. In this case, the correct way to find the zero-point energy of the field is to sum the energies of the standing waves of the cavity. To each and every possible standing wave corresponds an energy; say the energy of the nth standing wave is En. The vacuum expectation value of the energy of the electromagnetic field in the cavity is then {\displaystyle \langle E\rangle ={\tfrac {1}{2}}\sum _{n}E_{n}} with the sum running over all possible values of n enumerating the standing waves. The factor of 1/2 is present because the zero-point energy of the nth mode is En/2, where En is the energy increment for the nth mode. (It is the same 1/2 as appears in the equation E = ħω/2.) Written in this way, this sum is clearly divergent; however, it can be used to create finite expressions. In particular, one may ask how the zero-point energy depends on the shape s of the cavity. Each energy level En depends on the shape, and so one should write En(s) for the energy level, and ⟨E(s)⟩ for the vacuum expectation value. 
At this point comes an important observation: The force at point p on the wall of the cavity is equal to the change in the vacuum energy if the shape s of the wall is perturbed a little bit, say by δs, at p. That is, one has {\displaystyle F(p)=-\left.{\frac {\delta \langle E(s)\rangle }{\delta s}}\right\vert _{p}\,.} This value is finite in many practical calculations. Attraction between the plates can be easily understood by focusing on the one-dimensional situation. Suppose that a moveable conductive plate is positioned at a short distance a from one of two widely separated plates (distance l apart). With a ≪ l, the states within the slot of width a are highly constrained, so that the energy E of any one mode is widely separated from that of the next. This is not the case in the large region l, where there is a large number of states (about l/a) with energy evenly spaced between E and the next mode in the narrow slot, or in other words, all slightly larger than E. Now, on shortening a by an amount da (which is negative), the mode in the narrow slot shrinks in wavelength and therefore increases in energy proportionally to −da/a, whereas all the l/a states that lie in the large region lengthen and correspondingly decrease their energy by an amount proportional to −da/l (note the different denominator). The two effects nearly cancel, but the net change is slightly negative, because the energy of all the l/a modes in the large region is slightly larger than that of the single mode in the slot. Thus the force is attractive: it tends to make a slightly smaller, the plates drawing each other closer across the thin slot. == Derivation of Casimir effect assuming zeta-regularization == In the original calculation done by Casimir, he considered the space between a pair of conducting metal plates at distance a apart. In this case, the standing waves are particularly easy to calculate, because the transverse component of the electric field and the normal component of the magnetic field must vanish on the surface of a conductor. Assuming the plates lie parallel to the xy-plane, the standing waves are {\displaystyle \psi _{n}(x,y,z;t)=e^{-i\omega _{n}t}e^{ik_{x}x+ik_{y}y}\sin(k_{n}z)\,,} where ψ stands for the electric component of the electromagnetic field, and, for brevity, the polarization and the magnetic components are ignored here. Here, kx and ky are the wavenumbers in directions parallel to the plates, and {\displaystyle k_{n}={\frac {n\pi }{a}}} is the wavenumber perpendicular to the plates. Here, n is an integer, resulting from the requirement that ψ vanish on the metal plates. The frequency of this wave is {\displaystyle \omega _{n}=c{\sqrt {{k_{x}}^{2}+{k_{y}}^{2}+{\frac {n^{2}\pi ^{2}}{a^{2}}}}}\,,} where c is the speed of light. The vacuum energy is then the sum over all possible excitation modes. Since the area of the plates is large, we may sum by integrating over two of the dimensions in k-space. The assumption of periodic boundary conditions yields {\displaystyle \langle E\rangle ={\frac {\hbar }{2}}\cdot 2\int {\frac {A\,dk_{x}\,dk_{y}}{(2\pi )^{2}}}\sum _{n=1}^{\infty }\omega _{n}\,,} where A is the area of the metal plates, and a factor of 2 is introduced for the two possible polarizations of the wave. 
This expression is clearly infinite, and to proceed with the calculation, it is convenient to introduce a regulator (discussed in greater detail below). The regulator will serve to make the expression finite, and in the end will be removed. The zeta-regulated version of the energy per unit area of the plate is {\displaystyle {\frac {\langle E(s)\rangle }{A}}=\hbar \int {\frac {dk_{x}\,dk_{y}}{(2\pi )^{2}}}\sum _{n=1}^{\infty }\omega _{n}\left|\omega _{n}\right|^{-s}\,.} In the end, the limit s → 0 is to be taken. Here s is just a complex number, not to be confused with the shape discussed previously. This integral sum is finite for s real and larger than 3. The sum has a pole at s = 3, but may be analytically continued to s = 0, where the expression is finite. The above expression simplifies to: {\displaystyle {\frac {\langle E(s)\rangle }{A}}={\frac {\hbar c^{1-s}}{4\pi ^{2}}}\sum _{n}\int _{0}^{\infty }2\pi q\,dq\left|q^{2}+{\frac {\pi ^{2}n^{2}}{a^{2}}}\right|^{\frac {1-s}{2}}\,,} where polar coordinates q^2 = k_x^2 + k_y^2 were introduced to turn the double integral into a single integral. The q in front is the Jacobian, and the 2π comes from the angular integration. The integral converges if Re(s) > 3, resulting in {\displaystyle {\frac {\langle E(s)\rangle }{A}}=-{\frac {\hbar c^{1-s}\pi ^{2-s}}{2a^{3-s}}}{\frac {1}{3-s}}\sum _{n}\left|n\right|^{3-s}=-{\frac {\hbar c^{1-s}\pi ^{2-s}}{2a^{3-s}(3-s)}}\sum _{n}{\frac {1}{\left|n\right|^{s-3}}}\,.} The sum diverges at s in the neighborhood of zero, but if the damping of large-frequency excitations corresponding to analytic continuation of the Riemann zeta function to s = 0 is assumed to make sense physically in some way, then one has {\displaystyle {\frac {\langle E\rangle }{A}}=\lim _{s\to 0}{\frac {\langle E(s)\rangle }{A}}=-{\frac {\hbar c\pi ^{2}}{6a^{3}}}\zeta (-3)\,.} But ζ(−3) = 1/120, and so one obtains {\displaystyle {\frac {\langle E\rangle }{A}}=-{\frac {\hbar c\pi ^{2}}{720a^{3}}}\,.} The analytic continuation has evidently lost an additive positive infinity, somehow exactly accounting for the zero-point energy (not included above) outside the slot between the plates, but which changes upon plate movement within a closed system. The Casimir force per unit area Fc/A for idealized, perfectly conducting plates with vacuum between them is {\displaystyle {\frac {F_{\mathrm {c} }}{A}}=-{\frac {d}{da}}{\frac {\langle E\rangle }{A}}=-{\frac {\hbar c\pi ^{2}}{240a^{4}}}} where ħ is the reduced Planck constant, c is the speed of light, and a is the distance between the two plates. The force is negative, indicating that the force is attractive: by moving the two plates closer together, the energy is lowered. The presence of ħ shows that the Casimir force per unit area Fc/A is very small, and that, furthermore, the force is inherently of quantum-mechanical origin. 
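Both the analytic continuation and the final numbers are easy to check by machine. A brief sketch using sympy; the 100 nm separation is chosen only as an illustration:

import math
import sympy

print(sympy.zeta(-3))  # 1/120, the analytically continued value used above

hbar = 1.054571817e-34  # J*s
c = 2.99792458e8        # m/s
a = 100e-9              # plate separation, 100 nm (illustrative)

energy_per_area = -math.pi**2 * hbar * c / (720 * a**3)  # J/m^2
force_per_area = -math.pi**2 * hbar * c / (240 * a**4)   # Pa; negative means attractive
print(energy_per_area, force_per_area)                   # ~ -4.3e-7 J/m^2 and ~ -13 Pa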
By integrating the equation above, it is possible to calculate the energy required to separate the two plates to infinity as: {\displaystyle {\begin{aligned}U_{E}(a)&=\int F(a)\,da=\int -\hbar c\pi ^{2}{\frac {A}{240a^{4}}}\,da\\[4pt]&=\hbar c\pi ^{2}{\frac {A}{720a^{3}}}\end{aligned}}} where ħ is the reduced Planck constant, c is the speed of light, A is the area of one of the plates, and a is the distance between the two plates. In Casimir's original derivation, a moveable conductive plate is positioned at a short distance a from one of two widely separated plates (distance L apart). The zero-point energy on both sides of the plate is considered. Instead of the above ad hoc analytic continuation assumption, non-convergent sums and integrals are computed using Euler–Maclaurin summation with a regularizing function (e.g., exponential regularization) not so anomalous as |ωn|^−s in the above. === More recent theory === Casimir's analysis of idealized metal plates was generalized to arbitrary dielectric and realistic metal plates by Evgeny Lifshitz and his students. Using this approach, complications of the bounding surfaces, such as the modifications to the Casimir force due to finite conductivity, can be calculated numerically using the tabulated complex dielectric functions of the bounding materials. Lifshitz's theory for two metal plates reduces to Casimir's idealized 1/a^4 force law for large separations a much greater than the skin depth of the metal, and conversely reduces to the 1/a^3 force law of the London dispersion force (with a coefficient called a Hamaker constant) for small a, with a more complicated dependence on a for intermediate separations determined by the dispersion of the materials. Lifshitz's result was subsequently generalized to arbitrary multilayer planar geometries as well as to anisotropic and magnetic materials, but for several decades the calculation of Casimir forces for non-planar geometries remained limited to a few idealized cases admitting analytical solutions. For example, the force in the experimental sphere–plate geometry was computed with an approximation (due to Derjaguin) that the sphere radius R is much larger than the separation a, in which case the nearby surfaces are nearly parallel and the parallel-plate result can be adapted to obtain an approximate R/a^3 force (neglecting both skin-depth and higher-order curvature effects). However, in the 2010s a number of authors developed and demonstrated a variety of numerical techniques, in many cases adapted from classical computational electromagnetics, that are capable of accurately calculating Casimir forces for arbitrary geometries and materials, from simple finite-size effects of finite plates to more complicated phenomena arising for patterned surfaces or objects of various shapes. == Measurement == One of the first experimental tests was conducted by Marcus Sparnaay at Philips in Eindhoven (Netherlands) in 1958, in a delicate and difficult experiment with parallel plates, obtaining results not in contradiction with the Casimir theory, but with large experimental errors. The Casimir effect was measured more accurately in 1997 by Steven K. Lamoreaux of Los Alamos National Laboratory, and by Umar Mohideen and Anushree Roy of the University of California, Riverside. 
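The sphere–plate geometry used in these experiments is usually analyzed with the Derjaguin (proximity-force) approximation mentioned above: for a sphere of radius R at gap a ≪ R, the force is approximately 2πR times the parallel-plate energy per unit area, i.e. F ≈ π³ħcR/(360a³) for perfect conductors. A rough sketch; the radius and gap below are typical experimental scales, not values from any specific measurement:

import math

hbar = 1.054571817e-34  # J*s
c = 2.99792458e8        # m/s

def sphere_plate_force(R, a):
    """Approximate attractive Casimir force (N) between a sphere of radius R and a flat plate
    at gap a, valid for R >> a (proximity-force approximation, perfect conductors)."""
    return math.pi**3 * hbar * c * R / (360 * a**3)

print(sphere_plate_force(R=100e-6, a=100e-9))  # ~2.7e-10 N, i.e. a few hundred piconewtons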
In practice, rather than using two parallel plates, which would require phenomenally accurate alignment to ensure they were parallel, the experiments use one plate that is flat and another plate that is a part of a sphere with a very large radius. In 2001, a group (Giacomo Bressi, Gianni Carugno, Roberto Onofrio and Giuseppe Ruoso) at the University of Padua (Italy) finally succeeded in measuring the Casimir force between parallel plates using microresonators. Numerous variations of these experiments are summarized in the 2009 review by Klimchitskaya. In 2013, a team of scientists from Hong Kong University of Science and Technology, University of Florida, Harvard University, Massachusetts Institute of Technology, and Oak Ridge National Laboratory demonstrated a compact integrated silicon chip that can measure the Casimir force. The integrated chip, defined by electron-beam lithography, does not need extra alignment, making it an ideal platform for measuring the Casimir force between complex geometries. In 2017 and 2021, the same group from Hong Kong University of Science and Technology demonstrated the non-monotonic Casimir force and the distance-independent Casimir force, respectively, using this on-chip platform. == Regularization == In order to be able to perform calculations in the general case, it is convenient to introduce a regulator in the summations. This is an artificial device, used to make the sums finite so that they can be more easily manipulated, followed by the taking of a limit so as to remove the regulator. The heat kernel or exponentially regulated sum is {\displaystyle \langle E(t)\rangle ={\frac {1}{2}}\sum _{n}\hbar |\omega _{n}|\exp {\bigl (}-t|\omega _{n}|{\bigr )}\,,} where the limit t → 0+ is taken in the end. The divergence of the sum is typically manifested as {\displaystyle \langle E(t)\rangle ={\frac {C}{t^{3}}}+{\textrm {finite}}\,} for three-dimensional cavities. The infinite part of the sum is associated with the bulk constant C, which does not depend on the shape of the cavity. The interesting part of the sum is the finite part, which is shape-dependent. The Gaussian regulator {\displaystyle \langle E(t)\rangle ={\frac {1}{2}}\sum _{n}\hbar |\omega _{n}|\exp \left(-t^{2}|\omega _{n}|^{2}\right)} is better suited to numerical calculations because of its superior convergence properties, but is more difficult to use in theoretical calculations. Other, suitably smooth, regulators may be used as well. The zeta function regulator {\displaystyle \langle E(s)\rangle ={\frac {1}{2}}\sum _{n}\hbar |\omega _{n}||\omega _{n}|^{-s}} is completely unsuited for numerical calculations, but is quite useful in theoretical calculations. In particular, divergences show up as poles in the complex s plane, with the bulk divergence at s = 4. This sum may be analytically continued past this pole, to obtain a finite part at s = 0. Not every cavity configuration necessarily leads to a finite part (the lack of a pole at s = 0) or shape-independent infinite parts. In this case, it should be understood that additional physics has to be taken into account. In particular, at extremely large frequencies (above the plasma frequency), metals become transparent to photons (such as X-rays), and dielectrics show a frequency-dependent cutoff as well. This frequency dependence acts as a natural regulator. 
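The regulator machinery can be demonstrated on the simplest possible spectrum, ωn = n in units where ħ = 1. The exponentially regulated sum (1/2)Σ n·e^(−tn) behaves as 1/(2t²) − 1/24 + O(t²); subtracting the divergent bulk term leaves the finite part −1/24, which is exactly (1/2)ζ(−1), the same answer the zeta-function regulator assigns directly. A small numerical sketch of this toy model:

import math

def regulated_sum(t, n_max=100000):
    """Exponentially regulated zero-point sum (1/2) * sum of n*exp(-t*n) for the toy spectrum omega_n = n."""
    return 0.5 * sum(n * math.exp(-t * n) for n in range(1, n_max))

for t in (0.2, 0.1, 0.05):
    finite_part = regulated_sum(t) - 1 / (2 * t**2)  # strip the shape-independent divergence
    print(t, finite_part)                            # approaches -1/24 = -0.041666... as t -> 0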
There are a variety of bulk effects in solid state physics, mathematically very similar to the Casimir effect, where the cutoff frequency comes into explicit play to keep expressions finite. (These are discussed in greater detail in Landau and Lifshitz, "Theory of Continuous Media".) == Generalities == The Casimir effect can also be computed using the mathematical mechanisms of functional integrals of quantum field theory, although such calculations are considerably more abstract, and thus difficult to comprehend. In addition, they can be carried out only for the simplest of geometries. However, the formalism of quantum field theory makes it clear that the vacuum expectation value summations are in a certain sense summations over so-called "virtual particles". More interesting is the understanding that the sums over the energies of standing waves should be formally understood as sums over the eigenvalues of a Hamiltonian. This allows atomic and molecular effects, such as the Van der Waals force, to be understood as a variation on the theme of the Casimir effect. Thus one considers the Hamiltonian of a system as a function of the arrangement of objects, such as atoms, in configuration space. The change in the zero-point energy as a function of changes of the configuration can be understood to result in forces acting between the objects. In the chiral bag model of the nucleon, the Casimir energy plays an important role in showing the mass of the nucleon is independent of the bag radius. In addition, the spectral asymmetry is interpreted as a non-zero vacuum expectation value of the baryon number, cancelling the topological winding number of the pion field surrounding the nucleon. A "pseudo-Casimir" effect can be found in liquid crystal systems, where the boundary conditions imposed through anchoring by rigid walls give rise to a long-range force, analogous to the force that arises between conducting plates. == Dynamical Casimir effect == The dynamical Casimir effect is the production of particles and energy from an accelerated moving mirror. This effect was predicted by certain numerical solutions of quantum mechanics equations in the 1970s. In May 2011, researchers at the Chalmers University of Technology in Gothenburg, Sweden, announced the detection of the dynamical Casimir effect. In their experiment, microwave photons were generated out of the vacuum in a superconducting microwave resonator. These researchers used a modified SQUID to change the effective length of the resonator in time, mimicking a mirror moving at the required relativistic velocity. If confirmed, this would be the first experimental verification of the dynamical Casimir effect. In March 2013, an article appeared in the journal PNAS describing an experiment that demonstrated the dynamical Casimir effect in a Josephson metamaterial. In July 2019, an article was published describing an experiment providing evidence of the optical dynamical Casimir effect in a dispersion-oscillating fibre. In 2020, Frank Wilczek et al. proposed a resolution to the information loss paradox associated with the moving mirror model of the dynamical Casimir effect. Constructed within the framework of quantum field theory in curved spacetime, the dynamical Casimir effect (moving mirror) has been used to help understand the Unruh effect. == Repulsive forces == There are a few instances where the Casimir effect can give rise to repulsive forces between uncharged objects. 
Evgeny Lifshitz showed (theoretically) that in certain circumstances (most commonly involving liquids), repulsive forces can arise. This has sparked interest in applications of the Casimir effect toward the development of levitating devices. An experimental demonstration of the Casimir-based repulsion predicted by Lifshitz was carried out by Munday et al. who described it as "quantum levitation". Other scientists have also suggested the use of gain media to achieve a similar levitation effect, though this is controversial because these materials seem to violate fundamental causality constraints and the requirement of thermodynamic equilibrium (Kramers–Kronig relations). Casimir and Casimir–Polder repulsion can in fact occur for sufficiently anisotropic electrical bodies; for a review of the issues involved with repulsion see Milton et al. A notable recent development on repulsive Casimir forces relies on using chiral materials. Q.-D. Jiang at Stockholm University and Nobel Laureate Frank Wilczek at MIT show that chiral "lubricant" can generate repulsive, enhanced, and tunable Casimir interactions. Timothy Boyer showed in his work published in 1968 that a conductor with spherical symmetry will also show this repulsive force, and the result is independent of radius. Further work shows that the repulsive force can be generated with materials of carefully chosen dielectrics. == Speculative applications == It has been suggested that the Casimir forces have application in nanotechnology, in particular silicon integrated circuit technology based micro- and nanoelectromechanical systems, and so-called Casimir oscillators. In 1995 and 1998 Maclay et al. published the first models of a microelectromechanical system (MEMS) with Casimir forces. While not exploiting the Casimir force for useful work, the papers drew attention from the MEMS community due to the revelation that Casimir effect needs to be considered as a vital factor in the future design of MEMS. In particular, Casimir effect might be the critical factor in the stiction failure of MEMS. In 2001, Capasso et al. showed how the force can be used to control the mechanical motion of a MEMS device, The researchers suspended a polysilicon plate from a torsional rod – a twisting horizontal bar just a few microns in diameter. When they brought a metallized sphere close up to the plate, the attractive Casimir force between the two objects made the plate rotate. They also studied the dynamical behaviour of the MEMS device by making the plate oscillate. The Casimir force reduced the rate of oscillation and led to nonlinear phenomena, such as hysteresis and bistability in the frequency response of the oscillator. According to the team, the system's behaviour agreed well with theoretical calculations. The Casimir effect shows that quantum field theory allows the energy density in very small regions of space to be negative relative to the ordinary vacuum energy, and the energy densities cannot be arbitrarily negative as the theory breaks down at atomic distances.: 175  Such prominent physicists such as Stephen Hawking and Kip Thorne, have speculated that such effects might make it possible to stabilize a traversable wormhole. == See also == Negative energy Scharnhorst effect Van der Waals force Squeezed vacuum == References == == Further reading == === Introductory readings === Casimir effect description from University of California, Riverside's version of the Usenet physics FAQ. A. 
The Casimir effect shows that quantum field theory allows the energy density in very small regions of space to be negative relative to the ordinary vacuum energy, although the energy density cannot be made arbitrarily negative, since the theory breaks down at atomic distances. Prominent physicists such as Stephen Hawking and Kip Thorne have speculated that such effects might make it possible to stabilize a traversable wormhole.

== See also ==
Negative energy
Scharnhorst effect
Van der Waals force
Squeezed vacuum

== References ==

== Further reading ==
=== Introductory readings ===
Casimir effect description from University of California, Riverside's version of the Usenet physics FAQ.
A. Lambrecht, The Casimir effect: a force from nothing, Physics World, September 2002.
NASA Astronomy Picture of the Day: Casimir effect (17 December 2006)
Simpson, W. M. R.; Leonhardt, U. (2015). Forces of the Quantum Vacuum: An Introduction to Casimir Physics. World Scientific. ISBN 978-981-4632-90-4.

=== Papers, books and lectures ===
Casimir, H. B. G.; Polder, D. (1948). "The Influence of Retardation on the London–van der Waals Forces". Physical Review. 73 (4): 360–372. Bibcode:1948PhRv...73..360C. doi:10.1103/PhysRev.73.360.
Casimir, H. B. G. (1948). "On the attraction between two perfectly conducting plates" (PDF). Proceedings of the Koninklijke Nederlandse Akademie van Wetenschappen. B51: 793–795.
Lamoreaux, S. K. (1997). "Demonstration of the Casimir Force in the 0.6 to 6 μm Range". Physical Review Letters. 78 (1): 5–8. Bibcode:1997PhRvL..78....5L. doi:10.1103/PhysRevLett.78.5. S2CID 25323874.
Bordag, M.; Mohideen, U.; Mostepanenko, V. M. (October 2001). "New developments in the Casimir effect". Physics Reports. 353 (1–3): 1–205. arXiv:quant-ph/0106045. Bibcode:2001PhR...353....1B. doi:10.1016/S0370-1573(01)00015-1. S2CID 119352552.
Milton, K. A. (2001). The Casimir Effect: Physical Manifestations of Zero-point Energy (Reprint ed.). World Scientific. ISBN 978-981-02-4397-5.
Dalvit, Diego; Milonni, Peter W.; Roberts, David; da Rosa, Felipe, eds. (2011). Casimir Physics. Lecture Notes in Physics. Vol. 834. Bibcode:2011LNP...834.....D. doi:10.1007/978-3-642-20288-9. ISBN 978-3-642-20287-2. ISSN 0075-8450. OCLC 844922239.
Bressi, G.; Carugno, G.; Onofrio, R.; Ruoso, G. (2002). "Measurement of the Casimir Force between Parallel Metallic Surfaces". Physical Review Letters. 88 (4): 041804. arXiv:quant-ph/0203002. Bibcode:2002PhRvL..88d1804B. doi:10.1103/PhysRevLett.88.041804. PMID 11801108. S2CID 43354557.
Kenneth, O.; Klich, I.; Mann, A.; Revzen, M. (2002). "Repulsive Casimir Forces". Physical Review Letters. 89 (3): 033001. arXiv:quant-ph/0202114. Bibcode:2002PhRvL..89c3001K. doi:10.1103/PhysRevLett.89.033001. PMID 12144387. S2CID 20903628.
Barrow, J. D. (2005). "Much Ado About Nothing". Lecture at Gresham College. Archived from the original on 30 September 2007. (Includes discussion of the French naval analogy.)
Barrow, J. D. (2000). The Book of Nothing: Vacuums, Voids, and the Latest Ideas About the Origins of the Universe. Pantheon Books. ISBN 978-0-09-928845-9. (Also includes discussion of the French naval analogy.)
Dowling, J. P. (1989). "The Mathematics of the Casimir Effect". Mathematics Magazine. 62 (5): 324–331. doi:10.1080/0025570X.1989.11977464.
Patent No. PCT/RU2011/000847, author Urmatskih.

=== Temperature dependence ===
Measurements Recast Usual View of Elusive Force, from NIST.
Nesterenko, V. V.; Lambiase, G.; Scarpetta, G. (2005). "Calculation of the Casimir energy at zero and finite temperature: Some recent results". Rivista del Nuovo Cimento. 27 (6): 1–74. arXiv:hep-th/0503100. Bibcode:2004NCimR..27f...1N. doi:10.1393/ncr/i2005-10002-2. S2CID 14693485.

== External links ==
Casimir effect article search on arxiv.org
G. Lang, The Casimir Force web site, 2002
J. Babb, bibliography on the Casimir Effect web site, 2009
H. Nikolic, The origin of Casimir effect: vacuum energy or van der Waals force?, presentation slides, 2018
Wikipedia/Casimir_force
Food energy is chemical energy that animals and humans derive from food to sustain their metabolism and muscular activity. Most animals derive most of their energy from aerobic respiration, namely by combining carbohydrates, fats, and proteins with oxygen from air or dissolved in water. Other smaller components of the diet, such as organic acids, polyols, and ethanol (drinking alcohol), may contribute to the energy input. Some diet components that provide little or no food energy, such as water, minerals, vitamins, cholesterol, and fibre, may still be necessary for health and survival for other reasons. Some organisms instead use anaerobic respiration, which extracts energy from food by reactions that do not require oxygen.

The energy content of a given mass of food is usually expressed in the metric (SI) unit of energy, the joule (J), and its multiple the kilojoule (kJ); or in the traditional unit of heat energy, the calorie (cal). In nutritional contexts, the latter is often (especially in the US) the "large" variant of the unit, also written "Calorie" (with symbol Cal, both with capital "C") or "kilocalorie" (kcal), and equivalent to 4184 J or 4.184 kJ.

Fats and ethanol have the greatest amount of food energy per unit mass, 37 and 29 kJ/g (9 and 7 kcal/g), respectively. Proteins and most carbohydrates have about 17 kJ/g (4 kcal/g), though values differ between kinds: for example, glucose, sucrose, and starch provide 15.57, 16.48 and 17.48 kilojoules per gram (3.72, 3.94 and 4.18 kcal/g), respectively. The differing energy density of foods (fats, alcohols, carbohydrates and proteins) lies mainly in their varying proportions of carbon, hydrogen, and oxygen atoms. Carbohydrates that are not easily absorbed, such as fibre, or lactose in lactose-intolerant individuals, contribute less food energy. Polyols (including sugar alcohols) and organic acids contribute 10 kJ/g (2.4 kcal/g) and 13 kJ/g (3.1 kcal/g), respectively. The energy content of a complex dish or meal can be approximated by adding the energy contents of its components.

== History and methods of measurement ==
=== Direct calorimetry of combustion ===
The first determinations of the energy content of food were made by burning a dried sample in a bomb calorimeter and measuring the temperature change in the water surrounding the apparatus, a method known as direct calorimetry.

=== The Atwater system ===
The direct calorimetric method generally overestimates the energy that the body can actually obtain from food, because it also counts the energy content of dietary fibre and other indigestible components, and does not allow for partial absorption or incomplete metabolism of certain substances. For this reason, the energy content of food is today obtained indirectly: chemical analysis determines the amount of each digestible dietary component (such as protein, carbohydrate, and fat), and the respective food energy contents, previously obtained by measurement of the metabolic heat released by the body, are added together. In particular, the fibre content is excluded. This method is known as the modified Atwater system, after Wilbur Atwater, who pioneered these measurements in the late 19th century. The system was later improved by Annabel Merrill and Bernice Watt of the USDA, who developed a system of specific calorie conversion factors for different foods.
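As a concrete illustration of this additive approximation, the following Python sketch computes the approximate energy content of a dish from its macronutrient composition, using the general per-gram factors quoted above; the example composition is invented for illustration:

```python
# General Atwater-style energy factors in kJ per gram, as quoted above.
KJ_PER_GRAM = {
    "fat": 37.0,
    "carbohydrate": 17.0,
    "protein": 17.0,
    "ethanol": 29.0,
    "polyols": 10.0,
    "organic_acids": 13.0,
}
KJ_PER_KCAL = 4.184  # 1 "large" Calorie (kcal) = 4.184 kJ

def food_energy_kj(grams_by_component):
    """Approximate food energy (kJ) as the sum over digestible components."""
    return sum(KJ_PER_GRAM[name] * grams
               for name, grams in grams_by_component.items())

# Invented example: a dish containing 20 g fat, 60 g carbohydrate, 25 g protein.
dish = {"fat": 20, "carbohydrate": 60, "protein": 25}
kj = food_energy_kj(dish)
print(f"{kj:.0f} kJ ({kj / KJ_PER_KCAL:.0f} kcal)")  # -> 2185 kJ (522 kcal)
```

Note that water, minerals, and indigestible fibre contribute nothing to the sum, in line with the modified Atwater convention described above.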
== Dietary sources of energy ==
The typical human diet consists chiefly of carbohydrates, fats, proteins, water, ethanol, and indigestible components such as bones, seeds, and fibre (mostly cellulose). Carbohydrates, fats, and proteins typically make up ninety percent of the dry weight of food. Ruminants can extract food energy from the respiration of cellulose because bacteria in their rumens decompose it into digestible carbohydrates. Other minor components of the human diet that contribute to its energy content are organic acids such as citric and tartaric acid, and polyols such as glycerol, xylitol, inositol, and sorbitol.

Some nutrients have regulatory roles mediated by cell signaling, in addition to providing energy for the body. For example, leucine plays an important role in the regulation of protein metabolism and suppresses appetite. Small amounts of essential fatty acids, constituents of some fats that cannot be synthesized by the human body, are used (and necessary) for other biochemical processes.

The approximate food energy contents of various human diet components, to be used in package labelling under the EU and UK regulations, are:
fat: 37 kJ/g (9 kcal/g)
ethanol: 29 kJ/g (7 kcal/g)
protein: 17 kJ/g (4 kcal/g)
carbohydrate: 17 kJ/g (4 kcal/g)
organic acids: 13 kJ/g (3 kcal/g)
polyols: 10 kJ/g (2.4 kcal/g) (1)
fibre: 8 kJ/g (2 kcal/g) (2)
(1) Some polyols, like erythritol, are not digested and should be excluded from the count.
(2) This entry exists in the EU regulations of 2008, but not in the UK regulations, according to which fibre shall not be counted.

More detailed tables for specific foods have been published by many organizations; the United Nations Food and Agriculture Organization, for example, has published a similar table. Other components of the human diet are either noncaloric or are usually consumed in such small amounts that they can be neglected.

== Energy usage in the human body ==
The food energy actually obtained through respiration is used by the human body for a wide range of purposes, including the basal metabolism of various organs and tissues, maintaining the internal body temperature, and exerting muscular force to maintain posture and produce motion. About 20% is used for brain metabolism.

The conversion efficiency of energy from respiration into muscular (physical) power depends on the type of food and on the type of physical energy usage (e.g., which muscles are used, and whether the muscle is used aerobically or anaerobically). In general, the efficiency of muscles is rather low: only 18 to 26% of the energy available from respiration is converted into mechanical energy. This low efficiency is the result of the roughly 40% efficiency of generating ATP from the respiration of food, losses in converting energy from ATP into mechanical work inside the muscle, and mechanical losses inside the body. The latter two losses depend on the type of exercise and the type of muscle fibers being used (fast-twitch or slow-twitch). For an overall efficiency of 20%, one watt of mechanical power is equivalent to 18 kJ/h (4.3 kcal/h) of food energy. For example, a manufacturer of rowing equipment reports the calories released from "burning" food as four times the actual mechanical work, plus 1,300 kJ (300 kcal) per hour, which amounts to about 20% efficiency at 250 watts of mechanical output. It can take up to 20 hours of light physical output (e.g., walking) to "burn off" 17,000 kJ (4,000 kcal) more than a body would otherwise consume. For reference, each kilogram of body fat is roughly equivalent to 32,300 kilojoules of food energy (i.e., 3,500 kilocalories per pound or 7,700 kilocalories per kilogram).
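A short sketch of this arithmetic, comparing the constant-20%-efficiency model with the rowing-equipment formula quoted above (the wattage values are illustrative):

```python
KJ_PER_KCAL = 4.184

def food_energy_rate_kj_per_h(mech_watts, efficiency=0.20):
    """Food energy burned (kJ/h) to sustain a given mechanical output,
    assuming a constant conversion efficiency."""
    mech_kj_per_h = mech_watts * 3600 / 1000  # W -> kJ/h
    return mech_kj_per_h / efficiency

def rower_display_kj_per_h(mech_watts):
    """The rowing-equipment convention quoted above: four times the
    mechanical work plus 1,300 kJ per hour."""
    return 4 * (mech_watts * 3.6) + 1300

for w in (100, 250):
    print(f"{w} W: 20%-efficiency model {food_energy_rate_kj_per_h(w):.0f} kJ/h, "
          f"rower display {rower_display_kj_per_h(w):.0f} kJ/h")
```

At 250 W the two estimates are close (4,500 vs 4,900 kJ/h, roughly 1,080 vs 1,170 kcal/h), which is the "about 20% efficiency" agreement described in the text; at lower outputs the fixed 1,300 kJ/h offset dominates and the two diverge.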
== Recommended daily intake ==
Many countries and health organizations have published recommendations for healthy levels of daily food energy intake. For example, the United States government estimates that 8,400 and 10,900 kJ (2,000 and 2,600 kcal) are needed daily by women and men, respectively, between ages 26 and 45, whose total physical activity is equivalent to walking around 2.5 to 5 km (1.5 to 3 mi) per day in addition to the activities of sedentary living. These estimates are for a "reference man" who is 1.78 m (5 ft 10 in) tall and weighs 70 kg (154 lb) and a "reference woman" who is 1.63 m (5 ft 4 in) tall and weighs 57 kg (126 lb). Because caloric requirements vary with height, activity, age, pregnancy status, and other factors, the USDA created the DRI Calculator for Healthcare Professionals to determine individual caloric needs. According to the Food and Agriculture Organization of the United Nations, the average minimum energy requirement per person per day is about 7,500 kJ (1,800 kcal).

Although the U.S. population and its food supply have changed over time, with growth in population and in processed food, Americans today have roughly the same level of calories available to them as earlier generations did. Older people and those with sedentary lifestyles require less energy; children and physically active people require more. Recognizing these factors, Australia's National Health and Medical Research Council recommends different daily energy intakes for each age and gender group. Nevertheless, nutrition labels on Australian food products typically recommend an average daily energy intake of 8,800 kJ (2,100 kcal). The minimum food energy intake is also higher in cold environments, and increased mental activity has been linked with moderately increased brain energy consumption.

== Nutrition labels ==
Many governments require food manufacturers to label the energy content of their products, to help consumers control their energy intake. To make evaluation easier for consumers, food energy values (and other nutritional properties) on package labels or in tables are often quoted for convenient amounts of the food rather than per gram or kilogram, such as "calories per serving", "kcal per 100 g", or "kJ per package". The units used vary depending on the country.

== See also ==
Basal metabolic rate
Food chain
Food composition
Heat of combustion
Nutrition facts label
Satiety value
Table of food nutrients
List of countries by food energy intake

== References ==

== External links ==
Is a calorie a calorie?
DRI Calculator for Healthcare Professionals
Wikipedia/Food_energy