The Journal of Chemical Physics is a scientific journal published by the American Institute of Physics that carries research papers on chemical physics. Two volumes, each of 24 issues, are published annually. It was established in 1933 when the Journal of Physical Chemistry editors refused to publish theoretical works. The editors have been: 2019–present: Tim Lian 2008–2018: Marsha I. Lester 2007–2008: Branka M. Ladanyi 1998–2007: Donald H. Levy 1983–1997: John C. Light 1960–1982: J. Willard Stout 1958–1959: Clyde A. Hutchison Jr. 1956–1957 (Acting): Joseph Edward Mayer 1953–1955: Clyde A. Hutchison Jr. 1942–1952: Joseph E. Mayer 1933–1941: Harold Urey == Highlights == According to the Web of Science database, as of 15 March 2018, a total of 132,435 articles had been published in the Journal of Chemical Physics. The number of articles published per year was about 180 in the 1930s and decreased to about 120 during the Second World War. After the war the number of articles increased steadily, reaching about 1800 articles per year in 1970. The publishing rate remained fairly stable at this level until about 1990, when it climbed again, reaching a maximum of 2871 articles published in 2014. It has since decreased somewhat, to about 2300 articles per year in the period 2015–2017. As of 15 March 2018, and according to Web of Science, the ten most cited articles published in the Journal of Chemical Physics are: A. D. Becke, Density Functional Thermochemistry. 3. The role of exact exchange, 98(7), 5648–5652 (1993) [65911 citations] N. Metropolis, A. W. Rosenbluth, M. N. Rosenbluth, Equation of state calculations by fast computing machines, 21(6), 1087–1092 (1953) [19444 citations] W. L. Jorgensen, J. Chandrasekhar, J. D. Madura et al., Comparison of simple potential functions for simulating liquid water, 79(2), 926–935 (1983) [19397 citations] T. H. Dunning, Gaussian basis sets for use in correlated molecular calculations. 1. The atoms boron through neon and hydrogen, 90(2), 1007–1023 (1989), [18999 citations] H. J. C. Berendsen, J. P. M. Postma, W. F. Van Gunsteren et al., Molecular dynamics with coupling to an external bath, 81(8), 3684–3690 (1984), [15826 citations] T. Darden, D. York, L. Pedersen, Particle mesh Ewald – An N log(N) method for Ewald sums in large systems, 98(12), 10089–10092 (1993) [11591 citations] P. J. Hay, W. R. Wadt, Ab initio effective core potentials for molecular calculations - Potentials for K to Au including the outermost core orbitals, 82(1), 299–310 (1985), [11195 citations] R. F. Stewart, E. R. Davidson, W. T. Simpson, Coherent X-ray scattering for hydrogen atom in hydrogen molecule, 42(9), 3175 (1965) [10346 citations] W. J. Hehre, R. Ditchfield, J. A. Pople, Self-consistent molecular-orbital methods. 12. Further extensions of Gaussian-type basis sets for use in molecular-orbital studies of organic molecules, 56(5), 2257 (1972) [10279 citations] R. Krishnan, J. S. Binkley, R. Seeger et al., Self-consistent molecular-orbital methods. 20. Basis set for correlated wave-functions, 72(1), 650–654 (1980) [9558 citations] By the standards of chemical physics these are exceptionally large citation counts, and all of these papers can be considered pivotal. == See also == Annual Review of Physical Chemistry Russian Journal of Physical Chemistry A Russian Journal of Physical Chemistry B == References == == External links == Journal home page American Institute of Physics American Institute of Physics Journals
Wikipedia/The_Journal_of_Chemical_Physics
In organic chemistry, a functional group is any substituent or moiety in a molecule that causes the molecule's characteristic chemical reactions. The same functional group will undergo the same or similar chemical reactions regardless of the rest of the molecule's composition. This enables systematic prediction of chemical reactions and behavior of chemical compounds and the design of chemical synthesis. The reactivity of a functional group can be modified by other functional groups nearby. Functional group interconversion can be used in retrosynthetic analysis to plan organic synthesis. A functional group is a group of atoms in a molecule with distinctive chemical properties, regardless of the other atoms in the molecule. The atoms in a functional group are linked to each other and to the rest of the molecule by covalent bonds. For repeating units of polymers, functional groups attach to their nonpolar core of carbon atoms and thus add chemical character to carbon chains. Functional groups can also be charged, e.g. in carboxylate salts (−COO−), which turns the molecule into a polyatomic ion or a complex ion. Functional groups binding to a central atom in a coordination complex are called ligands. Complexation and solvation are also caused by specific interactions of functional groups. In the common rule of thumb "like dissolves like", it is the shared or mutually well-interacting functional groups which give rise to solubility. For example, sugar dissolves in water because both share the hydroxyl functional group (−OH) and hydroxyls interact strongly with each other. Plus, when functional groups are more electronegative than atoms they attach to, the functional groups will become polar, and the otherwise nonpolar molecules containing these functional groups become polar and so become soluble in some aqueous environment. Combining the names of functional groups with the names of the parent alkanes generates what is termed a systematic nomenclature for naming organic compounds. In traditional nomenclature, the first carbon atom after the carbon that attaches to the functional group is called the alpha carbon; the second, beta carbon, the third, gamma carbon, etc. If there is another functional group at a carbon, it may be named with the Greek letter, e.g., the gamma-amine in gamma-aminobutyric acid is on the third carbon of the carbon chain attached to the carboxylic acid group. IUPAC conventions call for numeric labeling of the position, e.g. 4-aminobutanoic acid. In traditional names various qualifiers are used to label isomers, for example, isopropanol (IUPAC name: propan-2-ol) is an isomer of n-propanol (propan-1-ol). The term moiety has some overlap with the term "functional group". However, a moiety is an entire "half" of a molecule, which can be not only a single functional group, but also a larger unit consisting of multiple functional groups. For example, an "aryl moiety" may be any group containing an aromatic ring, regardless of how many functional groups the said aryl has. == Table of common functional groups == The following is a list of common functional groups. In the formulas, the symbols R and R' usually denote an attached hydrogen, or a hydrocarbon side chain of any length, but may sometimes refer to any group of atoms. === Hydrocarbons === Hydrocarbons are a class of molecule that is defined by functional groups called hydrocarbyls that contain only carbon and hydrogen, but vary in the number and order of double bonds. Each one differs in type (and scope) of reactivity. 
There are also a large number of branched or ring alkanes that have specific names, e.g., tert-butyl, bornyl, cyclohexyl, etc. There are several functional groups that contain an alkene, such as the vinyl group, allyl group, or acrylic group. Hydrocarbons may form charged structures: positively charged carbocations or negative carbanions. Carbocations often carry names ending in -ium; examples are the tropylium and triphenylmethyl cations and the cyclopentadienyl anion. === Groups containing halogen === Haloalkanes are a class of molecule that is defined by a carbon–halogen bond. This bond can be relatively weak (in the case of an iodoalkane) or quite stable (as in the case of a fluoroalkane). In general, with the exception of fluorinated compounds, haloalkanes readily undergo nucleophilic substitution reactions or elimination reactions. The substitution on the carbon, the acidity of an adjacent proton, the solvent conditions, etc. all can influence the outcome of the reactivity. === Groups containing oxygen === Compounds that contain C–O bonds each possess differing reactivity based upon the location and hybridization of the C–O bond, owing to the electron-withdrawing effect of sp2-hybridized oxygen (carbonyl groups) and the donating effects of sp3-hybridized oxygen (alcohol groups). === Groups containing nitrogen === Compounds that contain nitrogen in this category may also contain C–O bonds, such as in the case of amides. === Groups containing sulfur === Compounds that contain sulfur exhibit unique chemistry due to sulfur's ability to form more bonds than oxygen, its lighter analogue on the periodic table. Substitutive nomenclature (marked as prefix in table) is preferred over functional class nomenclature (marked as suffix in table) for sulfides, disulfides, sulfoxides and sulfones. === Groups containing phosphorus === Compounds that contain phosphorus exhibit unique chemistry due to the ability of phosphorus to form more bonds than nitrogen, its lighter analogue on the periodic table. === Groups containing boron === Compounds containing boron exhibit unique chemistry due to their having partially filled octets and therefore acting as Lewis acids. === Groups containing metals === Note: fluorine is too electronegative to be bonded to magnesium; it becomes an ionic salt instead. === Names of radicals or moieties === These names are used to refer to the moieties themselves or to radical species, and also to form the names of halides and substituents in larger molecules. When the parent hydrocarbon is saturated, the suffix ("-yl", "-ylidene", or "-ylidyne") replaces "-ane" (e.g. "ethane" becomes "ethyl"); otherwise, the suffix replaces only the final "-e" (e.g. "ethyne" becomes "ethynyl"). When used to refer to moieties, multiple single bonds differ from a single multiple bond. For example, a methylene bridge (methanediyl) has two single bonds, whereas a methylene group (methylidene) has one double bond. Suffixes can be combined, as in methylidyne (triple bond) vs. methylylidene (single bond and double bond) vs. methanetriyl (three single bonds). There are some retained names, such as methylene for methanediyl, 1,x-phenylene for phenyl-1,x-diyl (where x is 2, 3, or 4), carbyne for methylidyne, and trityl for triphenylmethyl. == See also == Category:Functional groups Group contribution method == References == == External links == IUPAC Blue Book (organic nomenclature) "IUPAC ligand abbreviations" (PDF). IUPAC. 2 April 2004. Archived from the original (PDF) on 27 September 2007. Retrieved 25 February 2015.
Functional group video
Wikipedia/Functional_group
Kelvin probe force microscopy (KPFM), also known as surface potential microscopy, is a noncontact variant of atomic force microscopy (AFM). By raster scanning in the x,y plane the work function of the sample can be locally mapped for correlation with sample features. When there is little or no magnification, this approach can be described as using a scanning Kelvin probe (SKP). These techniques are predominantly used to measure corrosion and coatings. With KPFM, the work function of surfaces can be observed at atomic or molecular scales. The work function relates to many surface phenomena, including catalytic activity, reconstruction of surfaces, doping and band-bending of semiconductors, charge trapping in dielectrics and corrosion. The map of the work function produced by KPFM gives information about the composition and electronic state of the local structures on the surface of a solid. == History == The SKP technique is based on parallel plate capacitor experiments performed by Lord Kelvin in 1898. In the 1930s William Zisman built upon Lord Kelvin's experiments to develop a technique to measure contact potential differences of dissimilar metals. == Working principle == In SKP the probe and sample are held parallel to each other and electrically connected to form a parallel plate capacitor. The probe is selected to be of a different material to the sample, therefore each component initially has a distinct Fermi level. When electrical connection is made between the probe and the sample electron flow can occur between the probe and the sample in the direction of the higher to the lower Fermi level. This electron flow causes the equilibration of the probe and sample Fermi levels. Furthermore, a surface charge develops on the probe and the sample, with a related potential difference known as the contact potential (Vc). In SKP the probe is vibrated along a perpendicular to the plane of the sample. This vibration causes a change in probe to sample distance, which in turn results in the flow of current, taking the form of an ac sine wave. The resulting ac sine wave is demodulated to a dc signal through the use of a lock-in amplifier. Typically the user must select the correct reference phase value used by the lock-in amplifier. Once the dc potential has been determined, an external potential, known as the backing potential (Vb) can be applied to null the charge between the probe and the sample. When the charge is nullified, the Fermi level of the sample returns to its original position. This means that Vb is equal to -Vc, which is the work function difference between the SKP probe and the sample measured. The cantilever in the AFM is a reference electrode that forms a capacitor with the surface, over which it is scanned laterally at a constant separation. The cantilever is not piezoelectrically driven at its mechanical resonance frequency ω0 as in normal AFM although an alternating current (AC) voltage is applied at this frequency. When there is a direct-current (DC) potential difference between the tip and the surface, the AC+DC voltage offset will cause the cantilever to vibrate. 
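This modulation of the force at the drive frequency, together with an additional component at twice that frequency, can be checked with a short numerical sketch before the analytical expansion given below. The snippet is illustrative only: the capacitance gradient, voltages and resonance frequency are arbitrary assumed values, the tip–sample junction is treated as an ideal parallel-plate capacitor with force F = ½ (dC/dz) V², and the contact potential difference is taken as zero so that the component at the drive frequency is controlled by the applied DC offset alone.

```python
import numpy as np

# Assumed, illustrative parameters (not taken from the article)
dCdz = 1e-10           # capacitance gradient dC/dz, F/m
Vdc, Vac = 0.4, 2.0    # DC offset and AC drive amplitude, V
f0 = 75e3              # cantilever resonance frequency, Hz

fs = 64 * f0           # sampling rate
N = 64_000             # exactly 1000 drive periods at this rate
t = np.arange(N) / fs

# Electrostatic force on the tip for the applied AC+DC voltage
V = Vdc + Vac * np.sin(2 * np.pi * f0 * t)
F = 0.5 * dCdz * V**2

# One-sided amplitude spectrum: peaks appear only at DC, f0 and 2*f0
spec = np.abs(np.fft.rfft(F)) / N
freqs = np.fft.rfftfreq(N, 1 / fs)
for f in (0.0, f0, 2 * f0):
    i = np.argmin(np.abs(freqs - f))
    print(f"{f / 1e3:7.1f} kHz : {spec[i]:.3e} N")

# The f0 component scales with Vdc * Vac; with a nonzero contact potential it
# would instead scale with (Vdc - V_CPD), which is the quantity the KPFM
# feedback drives to zero.
```

Sweeping Vdc in this toy model and watching the f0 component shrink to zero reproduces, in miniature, the nulling strategy described above.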
The origin of the force can be understood by considering that the energy of the capacitor formed by the cantilever and the surface is E = 1 2 C [ V D C + V A C sin ⁡ ( ω 0 t ) ] 2 = 1 2 C [ 2 V D C V A C sin ⁡ ( ω 0 t ) − 1 2 V A C 2 cos ⁡ ( 2 ω 0 t ) ] {\displaystyle E={\frac {1}{2}}C[V_{DC}+V_{AC}\sin(\omega _{0}t)]^{2}={\frac {1}{2}}C[2V_{DC}V_{AC}\sin(\omega _{0}t)-{\frac {1}{2}}V_{AC}^{2}\cos(2\omega _{0}t)]} plus terms at DC. Only the cross-term proportional to the VDC·VAC product is at the resonance frequency ω0. The resulting vibration of the cantilever is detected using usual scanned-probe microscopy methods (typically involving a diode laser and a four-quadrant detector). A null circuit is used to drive the DC potential of the tip to a value which minimizes the vibration. A map of this nulling DC potential versus the lateral position coordinate therefore produces an image of the work function of the surface. A related technique, electrostatic force microscopy (EFM), directly measures the force produced on a charged tip by the electric field emanating from the surface. EFM operates much like magnetic force microscopy in that the frequency shift or amplitude change of the cantilever oscillation is used to detect the electric field. However, EFM is much more sensitive to topographic artifacts than KPFM. Both EFM and KPFM require the use of conductive cantilevers, typically metal-coated silicon or silicon nitride. Another AFM-based technique for the imaging of electrostatic surface potentials, scanning quantum dot microscopy, quantifies surface potentials based on their ability to gate a tip-attached quantum dot. == Factors affecting SKP measurements == The quality of an SKP measurement is affected by a number of factors. This includes the diameter of the SKP probe, the probe to sample distance, and the material of the SKP probe. The probe diameter is important in the SKP measurement because it affects the overall resolution of the measurement, with smaller probes leading to improved resolution. On the other hand, reducing the size of the probe causes an increase in fringing effects which reduces the sensitivity of the measurement by increasing the measurement of stray capacitances. The material used in the construction of the SKP probe is important to the quality of the SKP measurement. This occurs for a number of reasons. Different materials have different work function values which will affect the contact potential measured. Different materials have different sensitivity to humidity changes. The material can also affect the resulting lateral resolution of the SKP measurement. In commercial probes tungsten is used, though probes of platinum, copper, gold, and NiCr has been used. The probe to sample distance affects the final SKP measurement, with smaller probe to sample distances improving the lateral resolution and the signal-to-noise ratio of the measurement. Furthermore, reducing the SKP probe to sample distance increases the intensity of the measurement, where the intensity of the measurement is proportional to 1/d2, where d is the probe to sample distance. The effects of changing probe to sample distance on the measurement can be counteracted by using SKP in constant distance mode. == Work function == The Kelvin probe force microscope or Kelvin force microscope (KFM) is based on an AFM set-up and the determination of the work function is based on the measurement of the electrostatic forces between the small AFM tip and the sample. 
The conducting tip and the sample are characterized by (in general) different work functions, which represent the difference between the Fermi level and the vacuum level for each material. If both elements were brought in contact, a net electric current would flow between them until the Fermi levels were aligned. The difference between the work functions is called the contact potential difference and is denoted generally with VCPD. An electrostatic force exists between tip and sample, because of the electric field between them. For the measurement a voltage is applied between tip and sample, consisting of a DC-bias VDC and an AC-voltage VAC sin(ωt) of frequency ω. V = ( V D C − V C P D ) + V A C ⋅ sin ⁡ ( ω t ) {\displaystyle V=(V_{DC}-V_{CPD})+V_{AC}\cdot \sin(\omega t)} Tuning the AC-frequency to the resonant frequency of the AFM cantilever results in an improved sensitivity. The electrostatic force in a capacitor may be found by differentiating the energy function with respect to the separation of the elements and can be written as F = 1 2 d C d z V 2 {\displaystyle F={\frac {1}{2}}{\frac {dC}{dz}}V^{2}} where C is the capacitance, z is the separation, and V is the voltage, each between tip and surface. Substituting the previous formula for voltage (V) shows that the electrostatic force can be split up into three contributions, as the total electrostatic force F acting on the tip then has spectral components at the frequencies ω and 2ω. F = F D C + F ω + F 2 ω {\displaystyle F=F_{DC}+F_{\omega }+F_{2\omega }} The DC component, FDC, contributes to the topographical signal, the term Fω at the characteristic frequency ω is used to measure the contact potential and the contribution F2ω can be used for capacitance microscopy. F D C = d C d z [ 1 2 ( V D C − V C P D ) 2 + 1 4 V A C 2 ] {\displaystyle F_{DC}={\frac {dC}{dz}}\left[{\frac {1}{2}}(V_{DC}-V_{CPD})^{2}+{\frac {1}{4}}V_{AC}^{2}\right]} F ω = d C d z [ V D C − V C P D ] V A C sin ⁡ ( ω t ) {\displaystyle F_{\omega }={\frac {dC}{dz}}[V_{DC}-V_{CPD}]V_{AC}\sin(\omega t)} F 2 ω = − 1 4 d C d z V A C 2 cos ⁡ ( 2 ω t ) {\displaystyle F_{2\omega }=-{\frac {1}{4}}{\frac {dC}{dz}}V_{AC}^{2}\cos(2\omega t)} == Contact potential measurements == For contact potential measurements a lock-in amplifier is used to detect the cantilever oscillation at ω. During the scan VDC will be adjusted so that the electrostatic forces between the tip and the sample become zero and thus the response at the frequency ω becomes zero. Since the electrostatic force at ω depends on VDC − VCPD, the value of VDC that minimizes the ω-term corresponds to the contact potential. Absolute values of the sample work function can be obtained if the tip is first calibrated against a reference sample of known work function. Apart from this, one can use the normal topographic scan methods at the resonance frequency ω independently of the above. Thus, in one scan, the topography and the contact potential of the sample are determined simultaneously. This can be done in (at least) two different ways: 1) The topography is captured in AC mode which means that the cantilever is driven by a piezo at its resonant frequency. Simultaneously the AC voltage for the KPFM measurement is applied at a frequency slightly lower than the resonant frequency of the cantilever. In this measurement mode the topography and the contact potential difference are captured at the same time and this mode is often called single-pass. 
2) One line of the topography is captured either in contact or AC mode and is stored internally. Then, this line is scanned again, while the cantilever remains on a defined distance to the sample without a mechanically driven oscillation but the AC voltage of the KPFM measurement is applied and the contact potential is captured as explained above. It is important to note that the cantilever tip must not be too close to the sample in order to allow good oscillation with applied AC voltage. Therefore, KPFM can be performed simultaneously during AC topography measurements but not during contact topography measurements. == Applications == The Volta potential measured by SKP is directly proportional to the corrosion potential of a material, as such SKP has found widespread use in the study of the fields of corrosion and coatings. In the field of coatings for example, a scratched region of a self-healing shape memory polymer coating containing a heat generating agent on aluminium alloys was measured by SKP. Initially after the scratch was made the Volta potential was noticeably higher and wider over the scratch than over the rest of the sample, implying this region is more likely to corrode. The Volta potential decreased over subsequent measurements, and eventually the peak over the scratch completely disappeared implying the coating has healed. Because SKP can be used to investigate coatings in a non-destructive way it has also been used to determine coating failure. In a study of polyurethane coatings, it was seen that the work function increases with increasing exposure to high temperature and humidity. This increase in work function is related to decomposition of the coating likely from hydrolysis of bonds within the coating. Using SKP the corrosion of industrially important alloys has been measured. In particular with SKP it is possible to investigate the effects of environmental stimulus on corrosion. For example, the microbially induced corrosion of stainless steel and titanium has been examined. SKP is useful to study this sort of corrosion because it usually occurs locally, therefore global techniques are poorly suited. Surface potential changes related to increased localized corrosion were shown by SKP measurements. Furthermore, it was possible to compare the resulting corrosion from different microbial species. In another example SKP was used to investigate biomedical alloy materials, which can be corroded within the human body. In studies on Ti-15Mo under inflammatory conditions, SKP measurements showed a lower corrosion resistance at the bottom of a corrosion pit than at the oxide protected surface of the alloy. SKP has also been used to investigate the effects of atmospheric corrosion, for example to investigate copper alloys in marine environment. In this study Kelvin potentials became more positive, indicating a more positive corrosion potential, with increased exposure time, due to an increase in thickness of corrosion products. As a final example SKP was used to investigate stainless steel under simulated conditions of gas pipeline. These measurements showed an increase in difference in corrosion potential of cathodic and anodic regions with increased corrosion time, indicating a higher likelihood of corrosion. Furthermore, these SKP measurements provided information about local corrosion, not possible with other techniques. 
SKP has been used to investigate the surface potential of materials used in solar cells, with the advantage that it is a non-contact, and therefore a non-destructive technique. A more recent advancement in KPFM is High-Definition Kelvin Force microscopy (HD-KFM), which enables improved spatial resolution and reduced measurement artifacts through a single-pass scanning approach and optimized tip–sample distance control. HD-KFM has been applied to map surface potential variations in nanocomposites and electronic materials with high precision. For instance, it has been used to examine the dispersion of carbon nanomaterials in electrically conductive epoxy adhesives, revealing correlations between surface potential distribution and the formation of conductive networks. It can be used to determine the electron affinity of different materials, thereby enabling analysis of energy level alignment in conduction bands of composite systems. This alignment is closely related to surface photovoltage response and device efficiency. As a non-contact, non-destructive technique SKP has been used to investigate latent fingerprints on materials of interest for forensic studies. When fingerprints are left on a metallic surface they leave behind salts which can cause the localized corrosion of the material of interest. This leads to a change in Volta potential of the sample, which is detectable by SKP. SKP is particularly useful for these analyses because it can detect this change in Volta potential even after heating, or coating by, for example, oils. SKP has been used to analyze the corrosion mechanisms of schreibersite-containing meteorites. The aim of these studies has been to investigate the role in such meteorites in releasing species utilized in prebiotic chemistry. In the field of biology SKP has been used to investigate the electric fields associated with wounding, and acupuncture points. In the field of electronics, KPFM is used to investigate the charge trapping in High-k gate oxides/interfaces of electronic devices. == See also == Scanning probe microscopy Surface photovoltage == References == == External links == Masaki Takihara (9 December 2008). "Kelvin probe force microscopy". Takahashi Lab., Institute of Industrial Science, University of Tokyo. Archived from the original on 29 October 2012. Retrieved 29 February 2012. – Full description of the principles with good illustrations to aid comprehension Transport measurements by Scanning Probe Microscopy Introduction to Kelvin Probe Force Microscopy (KPFM) Dynamic Kelvin Probe Force Microscopy Kelvin Probe Force Microscopy of Lateral Devices Kelvin Probe Force Microscopy in Liquids Current-voltage Measurements in Scanning Probe Microscopy Dynamic IV measurements in SPM
Wikipedia/Kelvin_probe_force_microscope
Electron energy loss spectroscopy (EELS) is a form of electron microscopy in which a material is exposed to a beam of electrons with a known, narrow range of kinetic energies. Some of the electrons will undergo inelastic scattering, which means that they lose energy and have their paths slightly and randomly deflected. The amount of energy loss can be measured via an electron spectrometer and interpreted in terms of what caused the energy loss. Inelastic interactions include phonon excitations, inter- and intra-band transitions, plasmon excitations, inner shell ionizations, and Cherenkov radiation. The inner-shell ionizations are particularly useful for detecting the elemental components of a material. For example, one might find that a larger-than-expected number of electrons comes through the material with 285 eV less energy than they had when they entered the material. This is approximately the amount of energy needed to remove an inner-shell electron from a carbon atom, which can be taken as evidence that there is a significant amount of carbon present in the sample. With some care, and looking at a wide range of energy losses, one can determine the types of atoms, and the numbers of atoms of each type, being struck by the beam. The scattering angle (that is, the amount that the electron's path is deflected) can also be measured, giving information about the dispersion relation of whatever material excitation caused the inelastic scattering. == History == The technique was developed by James Hillier and RF Baker in the mid-1940s but was not widely used over the next 50 years, only becoming more widespread in research in the 1990s due to advances in microscope instrumentation and vacuum technology. With modern instrumentation becoming widely available in laboratories worldwide, the technical and scientific developments from the mid-1990s have been rapid. The technique is able to take advantage of modern aberration-corrected probe forming systems to attain spatial resolutions down to ~0.1 nm, while with a monochromated electron source and/or careful deconvolution the energy resolution can reach units of meV. This has enabled detailed measurements of the atomic and electronic properties of single columns of atoms, and in a few cases, of single atoms. == Comparison with EDX == EELS is spoken of as being complementary to energy-dispersive x-ray spectroscopy (variously called EDX, EDS, XEDS, etc.), which is another common spectroscopy technique available on many electron microscopes. EDX excels at identifying the atomic composition of a material, is quite easy to use, and is particularly sensitive to heavier elements. EELS has historically been a more difficult technique but is in principle capable of measuring atomic composition, chemical bonding, valence and conduction band electronic properties, surface properties, and element-specific pair distance distribution functions. EELS tends to work best at relatively low atomic numbers, where the excitation edges tend to be sharp, well-defined, and at experimentally accessible energy losses (the signal being very weak beyond about 3 keV energy loss). EELS is perhaps best developed for the elements ranging from carbon through the 3d transition metals (from scandium to zinc). For carbon, an experienced spectroscopist can tell at a glance the differences between diamond, graphite, amorphous carbon, and "mineral" carbon (such as the carbon appearing in carbonates). 
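The elemental identification described above, for example an energy loss near 285 eV pointing to carbon, amounts to matching measured core-loss edge onsets against tabulated edge energies. A minimal sketch of that lookup is shown below; the edge energies are rounded, commonly quoted K-edge values, and the matching tolerance is an arbitrary choice made for illustration.

```python
# Approximate K-edge onset energies in eV (rounded reference values)
K_EDGES_EV = {"B": 188.0, "C": 284.0, "N": 401.0, "O": 532.0, "F": 685.0}

def identify_edges(onsets_ev, tolerance_ev=5.0):
    """Match measured core-loss edge onsets to tabulated K-edge energies."""
    matches = {}
    for onset in onsets_ev:
        candidates = [(abs(onset - e), elem) for elem, e in K_EDGES_EV.items()
                      if abs(onset - e) <= tolerance_ev]
        matches[onset] = min(candidates)[1] if candidates else None
    return matches

# Example: the ~285 eV loss discussed above plus a second, hypothetical edge
print(identify_edges([285.0, 533.5]))   # -> {285.0: 'C', 533.5: 'O'}
```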
The spectra of 3d transition metals can be analyzed to identify the oxidation states of the atoms. Cu(I), for instance, has a different so-called "white-line" intensity ratio than Cu(II) does. This ability to "fingerprint" different forms of the same element is a strong advantage of EELS over EDX. The difference is mainly due to the difference in energy resolution between the two techniques (~1 eV or better for EELS, perhaps a few tens of eV for EDX). == Variants == There are several basic flavors of EELS, primarily classified by the geometry and by the kinetic energy of the incident electrons (typically measured in kiloelectron-volts, or keV). Probably the most common today is transmission EELS, in which the kinetic energies are typically 100 to 300 keV and the incident electrons pass entirely through the material sample. Usually this occurs in a transmission electron microscope (TEM), although some dedicated systems exist which enable extreme resolution in terms of energy and momentum transfer at the expense of spatial resolution. Other flavors include reflection EELS (including reflection high-energy electron energy-loss spectroscopy (RHEELS)), typically at 10 to 30 keV, and aloof EELS (sometimes called near-field EELS), in which the electron beam does not in fact strike the sample but instead interacts with it via the long-ranged Coulomb interaction. Aloof EELS is particularly sensitive to surface properties but is limited to very small energy losses such as those associated with surface plasmons or direct interband transitions. Within transmission EELS, the technique is further subdivided into valence EELS (which measures plasmons and interband transitions) and inner-shell ionization EELS (which provides much the same information as x-ray absorption spectroscopy, but from much smaller volumes of material). The dividing line between the two, while somewhat ill-defined, is in the vicinity of 50 eV energy loss. Instrumental developments have opened up the ultra-low energy loss part of the EELS spectrum, enabling vibrational spectroscopy in the TEM. Both IR-active and non-IR-active vibrational modes are present in EELS. == EEL spectrum == The electron energy loss (EEL) spectrum (sometimes written as the EELS spectrum) can be roughly split into two different regions: the low-loss spectrum (up to about 50 eV in energy loss) and the high-loss spectrum. The low-loss spectrum contains the zero-loss peak (the signal from all the electrons which did not lose a measurable amount of energy) as well as the phonon and plasmon peaks, and carries information about the band structure and dielectric properties of the sample. It is also possible to resolve the energy spectrum in momentum to directly measure the band structure. The high-loss spectrum contains the ionisation edges that arise due to inner-shell ionisations in the sample. These are characteristic of the species present in the sample, and as such can be used to obtain accurate information about the chemistry of a sample. Typically, EEL spectra are susceptible to noise, especially for measurements of beam-sensitive materials, such as polymers or biological specimens, which require limited acquisition times. The two major noise contributions are Poisson noise arising from the quantized nature of the beam electrons and Gaussian-distributed detector noise. 
As EEL spectra are usually measured on CCD or direct electron detectors, where multiple pixels of a pixel column are summed to create a spectrum out of a 2D pixel array, the noise statistics of such spectra are altered compared to regular 2D images. Due to the image formation process, especially on scintillation-based CCD detectors, the Poisson noise is also heavily correlated by the detector. == Thickness measurements == EELS allows quick and reliable measurement of local thickness in transmission electron microscopy. The most efficient procedure is the following: Measure the energy loss spectrum in the energy range of about −5 to 200 eV (wider is better). Such a measurement is quick (milliseconds) and thus can be applied to materials normally unstable under electron beams. Analyse the spectrum: (i) extract the zero-loss peak (ZLP) using standard routines; (ii) calculate the integrals under the ZLP (I0) and under the whole spectrum (I). The thickness t is then calculated as mfp*ln(I/I0), where mfp is the mean free path of electron inelastic scattering, which has been tabulated for most elemental solids and oxides (a short illustrative script for this estimate appears at the end of this article). The spatial resolution of this procedure is limited by the plasmon localization and is about 1 nm, meaning that spatial thickness maps can be measured in scanning transmission electron microscopy with ~1 nm resolution. == Pressure measurements == The intensity and position of low-energy EELS peaks are affected by pressure. This fact allows mapping local pressure with ~1 nm spatial resolution. The peak-shift method is reliable and straightforward. The peak position is calibrated by independent (usually optical) measurement using a diamond anvil cell. However, the spectral resolution of most EEL spectrometers (0.3–2 eV, typically 1 eV) is often too crude for the small pressure-induced shifts. Therefore, the sensitivity and accuracy of this method are relatively poor. Nevertheless, pressures as small as 0.2 GPa inside helium bubbles in aluminum have been measured. The peak-intensity method relies on the pressure-induced change in the intensity of dipole-forbidden transitions. Because this intensity is zero at zero pressure, the method is relatively sensitive and accurate. However, it requires the existence of allowed and forbidden transitions of similar energies and thus is only applicable to specific systems, e.g., Xe bubbles in aluminum. == Use in confocal geometry == Scanning confocal electron energy loss microscopy (SCEELM) is a new analytical microscopy tool that enables a double-corrected transmission electron microscope to achieve sub-10 nm depth resolution in depth-sectioning imaging of nanomaterials. It was previously termed energy-filtered scanning confocal electron microscopy due to the lack of full-spectrum acquisition capability (only a small energy window on the order of 5 eV can be used at a time). SCEELM takes advantage of the newly developed chromatic aberration corrector, which allows electrons with more than 100 eV of energy spread to be focused to roughly the same focal plane. Simultaneous acquisition of the zero-loss, low-loss, and core-loss signals up to 400 eV in the confocal geometry with depth discrimination capability has been demonstrated. == See also == Energy filtered transmission electron microscopy Magic angle (EELS) Transmission electron microscopy Scanning transmission electron microscopy == References == == Further reading == Egerton, R. F. (1996). Electron Energy Loss Spectroscopy in the Electron Microscope (2nd ed.). New York: Plenum. ISBN 978-0-306-45223-9. Spence, J. C. H. 
(2006). "Absorption spectroscopy with sub-angstrom beams: ELS in STEM". Rep. Prog. Phys. 69 (3): 725–758. Bibcode:2006RPPh...69..725S. doi:10.1088/0034-4885/69/3/R04. S2CID 122148401. Gergely, G. (2002). "Elastic backscattering of electrons: determination of physical parameters of electron transport processes by elastic peak electron spectroscopy". Progress in Surface Science. 71 (1): 31–88. Bibcode:2002PrSS...71...31G. doi:10.1016/S0079-6816(02)00019-9. Brydson, Rik (2001). Electron energy loss spectroscopy. Garland/BIOS Scientific Publishers. ISBN 978-1-85996-134-6. == External links == A Database of EELS fine structure fingerprints at Cornell A database of EELS and X-Ray excitation spectra Cornell Spectrum Imager, an EELS Analysis open-source plugin for ImageJ HyperSpy, a hyperspectral data analysis Python toolbox especially well suited for EELS data analysis EELSMODEL, software to quantify Electron Energy Loss (EELS) spectra by using model fitting Archived 2017-04-12 at the Wayback Machine
Wikipedia/Electron_energy_loss_spectroscopy
Low-energy electron diffraction (LEED) is a technique for the determination of the surface structure of single-crystalline materials by bombardment with a collimated beam of low-energy electrons (30–200 eV) and observation of diffracted electrons as spots on a fluorescent screen. LEED may be used in one of two ways: Qualitatively, where the diffraction pattern is recorded and analysis of the spot positions gives information on the symmetry of the surface structure. In the presence of an adsorbate the qualitative analysis may reveal information about the size and rotational alignment of the adsorbate unit cell with respect to the substrate unit cell. Quantitatively, where the intensities of diffracted beams are recorded as a function of incident electron beam energy to generate the so-called I–V curves. By comparison with theoretical curves, these may provide accurate information on atomic positions on the surface at hand. == Historical perspective == An electron-diffraction experiment similar to modern LEED was the first to observe the wavelike properties of electrons, but LEED was established as a ubiquitous tool in surface science only with the advances in vacuum generation and electron detection techniques. === Davisson and Germer's discovery of electron diffraction === The theoretical possibility of the occurrence of electron diffraction first emerged in 1924, when Louis de Broglie introduced wave mechanics and proposed the wavelike nature of all particles. In his Nobel Prize-winning work de Broglie postulated that the wavelength of a particle with linear momentum p is given by h/p, where h is the Planck constant. The de Broglie hypothesis was confirmed experimentally at Bell Labs in 1927, when Clinton Davisson and Lester Germer fired low-energy electrons at a crystalline nickel target and observed that the angular dependence of the intensity of backscattered electrons showed diffraction patterns. These observations were consistent with the diffraction theory for X-rays developed earlier by Bragg and Laue. Before the acceptance of the de Broglie hypothesis, diffraction was believed to be an exclusive property of waves. Davisson and Germer published accounts of their electron-diffraction experiments in Nature and in Physical Review in 1927. One month after Davisson and Germer's work appeared, Thomson and Reid published their electron-diffraction work with electrons of much higher kinetic energy (a thousand times higher than the energy used by Davisson and Germer) in the same journal. Those experiments revealed the wave property of electrons and opened up an era of electron-diffraction study. === Development of LEED as a tool in surface science === Though discovered in 1927, low-energy electron diffraction did not become a popular tool for surface analysis until the early 1960s. The main reasons were that monitoring the directions and intensities of diffracted beams was a difficult experimental process owing to inadequate vacuum techniques and slow detection methods such as a Faraday cup. Also, since LEED is a surface-sensitive method, it required well-ordered surface structures; techniques for the preparation of clean metal surfaces first became available much later. Nonetheless, H. E. Farnsworth and coworkers at Brown University pioneered the use of LEED as a method for characterizing the adsorption of gases onto clean metal surfaces and the associated regular adsorption phases, starting shortly after the Davisson and Germer discovery and continuing into the 1970s. 
In the early 1960s LEED experienced a renaissance, as ultra-high vacuum became widely available, and the post acceleration detection method was introduced by Germer and his coworkers at Bell Labs using a flat phosphor screen. Using this technique, diffracted electrons were accelerated to high energies to produce clear and visible diffraction patterns on the screen. Ironically the post-acceleration method had already been proposed by Ehrenberg in 1934. In 1962 Lander and colleagues introduced the modern hemispherical screen with associated hemispherical grids. In the mid-1960s, modern LEED systems became commercially available as part of the ultra-high-vacuum instrumentation suite by Varian Associates and triggered an enormous boost of activities in surface science. Notably, future Nobel prize winner Gerhard Ertl started his studies of surface chemistry and catalysis on such a Varian system. It soon became clear that the kinematic (single-scattering) theory, which had been successfully used to explain X-ray diffraction experiments, was inadequate for the quantitative interpretation of experimental data obtained from LEED. At this stage a detailed determination of surface structures, including adsorption sites, bond angles and bond lengths was not possible. A dynamical electron-diffraction theory, which took into account the possibility of multiple scattering, was established in the late 1960s. With this theory, it later became possible to reproduce experimental data with high precision. == Experimental setup == In order to keep the studied sample clean and free from unwanted adsorbates, LEED experiments are performed in an ultra-high vacuum environment (residual gas pressure <10−7 Pa). === LEED optics === The main components of a LEED instrument are: An electron gun from which monochromatic electrons are emitted by a cathode filament that is at a negative potential, typically 10–600 V, with respect to the sample. The electrons are accelerated and focused into a beam, typically about 0.1 to 0.5 mm wide, by a series of electrodes serving as electron lenses. Some of the electrons incident on the sample surface are backscattered elastically, and diffraction can be detected if sufficient order exists on the surface. This typically requires a region of single crystal surface as wide as the electron beam, although sometimes polycrystalline surfaces such as highly oriented pyrolytic graphite (HOPG) are sufficient. A high-pass filter for scattered electrons in the form of a retarding field analyzer, which blocks all but elastically scattered electrons. It usually contains three or four hemispherical concentric grids. Because only radial fields around the sampled point would be allowed and the geometry of the sample and the surrounding area is not spherical, the space between the sample and the analyzer has to be field-free. The first grid, therefore, separates the space above the sample from the retarding field. The next grid is at a negative potential to block low energy electrons, and is called the suppressor or the gate. To make the retarding field homogeneous and mechanically more stable another grid at the same potential is added behind the second grid. The fourth grid is only necessary when the LEED is used like a tetrode and the current at the screen is measured, when it serves as screen between the gate and the anode. A hemispherical positively-biased fluorescent screen on which the diffraction pattern can be directly observed, or a position-sensitive electron detector. 
Most new LEED systems use a reverse view scheme, which has a minimized electron gun, and the pattern is viewed from behind through a transmission screen and a viewport. Recently, a new digitized position sensitive detector called a delay-line detector with better dynamic range and resolution has been developed. === Sample === The sample of the desired surface crystallographic orientation is initially cut and prepared outside the vacuum chamber. The correct alignment of the crystal can be achieved with the help of X-ray diffraction methods such as Laue diffraction. After being mounted in the UHV chamber the sample is cleaned and flattened. Unwanted surface contaminants are removed by ion sputtering or by chemical processes such as oxidation and reduction cycles. The surface is flattened by annealing at high temperatures. Once a clean and well-defined surface is prepared, monolayers can be adsorbed on the surface by exposing it to a gas consisting of the desired adsorbate atoms or molecules. Often the annealing process will let bulk impurities diffuse to the surface and therefore give rise to a re-contamination after each cleaning cycle. The problem is that impurities that adsorb without changing the basic symmetry of the surface, cannot easily be identified in the diffraction pattern. Therefore, in many LEED experiments Auger electron spectroscopy is used to accurately determine the purity of the sample. === Using the detector for Auger electron spectroscopy === LEED optics is in some instruments also used for Auger electron spectroscopy. To improve the measured signal, the gate voltage is scanned in a linear ramp. An RC circuit serves to derive the second derivative, which is then amplified and digitized. To reduce the noise, multiple passes are summed up. The first derivative is very large due to the residual capacitive coupling between gate and the anode and may degrade the performance of the circuit. By applying a negative ramp to the screen this can be compensated. It is also possible to add a small sine to the gate. A high-Q RLC circuit is tuned to the second harmonic to detect the second derivative. === Data acquisition === A modern data acquisition system usually contains a CCD/CMOS camera pointed to the screen for diffraction pattern visualization and a computer for data recording and further analysis. More expensive instruments have in-vacuum position sensitive electron detectors that measure the current directly, which helps in the quantitative I–V analysis of the diffraction spots. == Theory == === Surface sensitivity === The basic reason for the high surface sensitivity of LEED is that for low-energy electrons the interaction between the solid and electrons is especially strong. Upon penetrating the crystal, primary electrons will lose kinetic energy due to inelastic scattering processes such as plasmon and phonon excitations, as well as electron–electron interactions. In cases where the detailed nature of the inelastic processes is unimportant, they are commonly treated by assuming an exponential decay of the primary electron-beam intensity I0 in the direction of propagation: I ( d ) = I 0 e − d / Λ ( E ) . {\displaystyle I(d)=I_{0}\,e^{-d/\Lambda (E)}.} Here d is the penetration depth, and Λ ( E ) {\displaystyle \Lambda (E)} denotes the inelastic mean free path, defined as the distance an electron can travel before its intensity has decreased by the factor 1/e. 
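The attenuation law above makes the surface sensitivity easy to quantify layer by layer. In the sketch below the interlayer spacing and the inelastic mean free path are assumed, illustrative values (the mean free path at LEED energies is only a few ångströms, as discussed next); the factor of two in the exponent is a further simplifying assumption that roughly accounts for the electron travelling into and back out of the crystal.

```python
import numpy as np

mfp = 7.0        # inelastic mean free path Lambda, angstrom (assumed, few-angstrom range)
d_layer = 2.0    # interlayer spacing, angstrom (assumed)
n_layers = 8

depths = np.arange(n_layers) * d_layer
# Relative weight of each layer in the elastically backscattered signal; the
# factor 2 (in + out path) is a simplifying assumption, not part of the formula above.
weights = np.exp(-2 * depths / mfp)
weights /= weights.sum()

for n, w in enumerate(weights, start=1):
    print(f"layer {n}: {w:5.1%} of the diffracted signal")
print(f"top 3 layers together: {weights[:3].sum():.1%}")
```

With these numbers the topmost three layers already account for the large majority of the diffracted intensity, which is the quantitative content of the statement that only a few atomic layers are sampled.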
While the inelastic scattering processes and consequently the electronic mean free path depend on the energy, it is relatively independent of the material. The mean free path turns out to be minimal (5–10 Å) in the energy range of low-energy electrons (20–200 eV). This effective attenuation means that only a few atomic layers are sampled by the electron beam, and, as a consequence, the contribution of deeper atoms to the diffraction progressively decreases. === Kinematic theory: single scattering === Kinematic diffraction is defined as the situation where electrons impinging on a well-ordered crystal surface are elastically scattered only once by that surface. In the theory the electron beam is represented by a plane wave with a wavelength given by the de Broglie hypothesis: λ = h 2 m e E , λ [nm] ≈ 1.5 E [eV] . {\displaystyle \lambda ={\frac {h}{\sqrt {2m_{\text{e}}E}}},\quad \lambda {\text{ [nm]}}\approx {\sqrt {\frac {1.5}{E{\text{ [eV]}}}}}.} The interaction between the scatterers present in the surface and the incident electrons is most conveniently described in reciprocal space. In three dimensions the primitive reciprocal lattice vectors are related to the real space lattice {a, b, c} in the following way: a ∗ = 2 π b × c a ⋅ ( b × c ) , b ∗ = 2 π c × a b ⋅ ( c × a ) , c ∗ = 2 π a × b c ⋅ ( a × b ) . {\displaystyle \mathbf {a} ^{*}=2\pi {\frac {\mathbf {b} \times \mathbf {c} }{\mathbf {a} \cdot (\mathbf {b} \times \mathbf {c} )}},\quad \mathbf {b} ^{*}=2\pi {\frac {\mathbf {c} \times \mathbf {a} }{\mathbf {b} \cdot (\mathbf {c} \times \mathbf {a} )}},\quad \mathbf {c} ^{*}=2\pi {\frac {\mathbf {a} \times \mathbf {b} }{\mathbf {c} \cdot (\mathbf {a} \times \mathbf {b} )}}.} For an incident electron with wave vector k i = 2 π / λ i {\displaystyle \mathbf {k} _{i}=2\pi /\lambda _{i}} and scattered wave vector k f = 2 π / λ f {\displaystyle \mathbf {k} _{f}=2\pi /\lambda _{f}} , the condition for constructive interference and hence diffraction of scattered electron waves is given by the Laue condition: k f − k i = G h k l , {\displaystyle \mathbf {k} _{f}-\mathbf {k} _{i}=\mathbf {G} _{hkl},} where (h, k, l) is a set of integers, and G h k l = h a ∗ + k b ∗ + l c ∗ {\displaystyle {\textbf {G}}_{hkl}=h\mathbf {a} ^{*}+k\mathbf {b} ^{*}+l\mathbf {c} ^{*}} is a vector of the reciprocal lattice. Note that these vectors specify the Fourier components of charge density in the reciprocal (momentum) space, and that the incoming electrons scatter at these density modulations within the crystal lattice. The magnitudes of the wave vectors are unchanged, i.e. | k f | = | k i | {\displaystyle |\mathbf {k} _{f}|=|\mathbf {k} _{i}|} , because only elastic scattering is considered. Since the mean free path of low-energy electrons in a crystal is only a few angstroms, only the first few atomic layers contribute to the diffraction. This means that there are no diffraction conditions in the direction perpendicular to the sample surface. As a consequence, the reciprocal lattice of a surface is a 2D lattice with rods extending perpendicular from each lattice point. The rods can be pictured as regions where the reciprocal lattice points are infinitely dense. 
Therefore, in the case of diffraction from a surface the Laue condition reduces to the 2D form: k f ∥ − k i ∥ = G h k = h a ∗ + k b ∗ , {\displaystyle \mathbf {k} _{f}^{\parallel }-\mathbf {k} _{i}^{\parallel }=\mathbf {G} _{hk}=h\mathbf {a} ^{*}+k\mathbf {b} ^{*},} where a ∗ {\displaystyle \mathbf {a} ^{*}} and b ∗ {\displaystyle \mathbf {b} ^{*}} are the primitive translation vectors of the 2D reciprocal lattice of the surface and k f ∥ {\displaystyle {\textbf {k}}_{f}^{\parallel }} , k i ∥ {\displaystyle {\textbf {k}}_{i}^{\parallel }} denote the component of respectively the reflected and incident wave vector parallel to the sample surface. a ∗ {\displaystyle {\textbf {a}}^{*}} and b ∗ {\displaystyle {\textbf {b}}^{*}} are related to the real space surface lattice, with n ^ {\displaystyle {\hat {\mathbf {n} }}} as the surface normal, in the following way: a ∗ = 2 π b × n ^ | a × b | , b ∗ = 2 π n ^ × a | a × b | . {\displaystyle {\begin{aligned}\mathbf {a} ^{*}&=2\pi {\frac {\mathbf {b} \times {\hat {\mathbf {n} }}}{|\mathbf {a} \times \mathbf {b} |}},\\\mathbf {b} ^{*}&=2\pi {\frac {{\hat {\mathbf {n} }}\times \mathbf {a} }{|\mathbf {a} \times \mathbf {b} |}}.\end{aligned}}} The Laue-condition equation can readily be visualized using the Ewald's sphere construction. Figures 3 and 4 show a simple illustration of this principle: The wave vector k i {\displaystyle \mathbf {k} _{i}} of the incident electron beam is drawn such that it terminates at a reciprocal lattice point. The Ewald's sphere is then the sphere with radius | k i | {\displaystyle |\mathbf {k} _{i}|} and origin at the center of the incident wave vector. By construction, every wave vector centered at the origin and terminating at an intersection between a rod and the sphere will then satisfy the 2D Laue condition and thus represent an allowed diffracted beam. === Interpretation of LEED patterns === Figure 4 shows the Ewald's sphere for the case of normal incidence of the primary electron beam, as would be the case in an actual LEED setup. It is apparent that the pattern observed on the fluorescent screen is a direct picture of the reciprocal lattice of the surface. The spots are indexed according to the values of h and k. The size of the Ewald's sphere and hence the number of diffraction spots on the screen is controlled by the incident electron energy. From the knowledge of the reciprocal lattice models for the real space lattice can be constructed and the surface can be characterized at least qualitatively in terms of the surface periodicity and the point group. Figure 7 shows a model of an unreconstructed (100) face of a simple cubic crystal and the expected LEED pattern. Since these patterns can be inferred from the crystal structure of the bulk crystal, known from other more quantitative diffraction techniques, LEED is more interesting in the cases where the surface layers of a material reconstruct, or where surface adsorbates form their own superstructures. ==== Superstructures ==== Overlaying superstructures on a substrate surface may introduce additional spots in the known (1×1) arrangement. These are known as extra spots or super spots. Figure 6 shows many such spots appearing after a simple hexagonal surface of a metal has been covered with a layer of graphene. Figure 7 shows a schematic of real and reciprocal space lattices for a simple (1×2) superstructure on a square lattice. 
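The relations collected above, namely the electron wavelength λ[nm] ≈ √(1.5/E[eV]), the surface reciprocal-lattice vectors, and the 2D Laue condition visualized with the Ewald sphere, can be combined into a small script that counts the beams able to emerge at normal incidence, i.e. the spots that can appear on the screen. The square surface net and its lattice constant are assumed example values, and the finite acceptance angle of a real screen is ignored.

```python
import numpy as np

a_len = 0.25                       # surface lattice constant, nm (assumed)
a = np.array([a_len, 0.0, 0.0])    # square surface net
b = np.array([0.0, a_len, 0.0])
n_hat = np.array([0.0, 0.0, 1.0])  # surface normal

# Surface reciprocal vectors: a* = 2*pi (b x n)/|a x b|, b* = 2*pi (n x a)/|a x b|
area = np.linalg.norm(np.cross(a, b))
a_star = 2 * np.pi * np.cross(b, n_hat) / area
b_star = 2 * np.pi * np.cross(n_hat, a) / area

def wavelength_nm(energy_ev):
    """Non-relativistic de Broglie wavelength, lambda[nm] ~ sqrt(1.5 / E[eV])."""
    return np.sqrt(1.5 / energy_ev)

def emerging_beams(energy_ev, hk_max=10):
    """(h, k) beams with |G_hk| <= |k_i|: the 2D Laue/Ewald condition at normal incidence."""
    k_mag = 2 * np.pi / wavelength_nm(energy_ev)
    return [(h, k) for h in range(-hk_max, hk_max + 1)
                   for k in range(-hk_max, hk_max + 1)
                   if np.linalg.norm(h * a_star + k * b_star) <= k_mag]

for e_ev in (50, 150, 300):
    print(f"E = {e_ev:3d} eV: lambda = {wavelength_nm(e_ev):.3f} nm, "
          f"{len(emerging_beams(e_ev))} allowed spots")
```

Consistent with the Ewald-sphere picture above, the number of allowed spots grows as the incident energy, and hence the sphere radius, increases.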
For a commensurate superstructure the symmetry and the rotational alignment with respect to adsorbent surface can be determined from the LEED pattern. This is easiest shown by using a matrix notation, where the primitive translation vectors of the superlattice {as, bs} are linked to the primitive translation vectors of the underlying (1×1) lattice {a, b} in the following way a s = G 11 a + G 12 b , b s = G 21 a + G 22 b . {\displaystyle {\begin{aligned}{\textbf {a}}_{s}&=G_{11}{\textbf {a}}+G_{12}{\textbf {b}},\\{\textbf {b}}_{s}&=G_{21}{\textbf {a}}+G_{22}{\textbf {b}}.\end{aligned}}} The matrix for the superstructure then is G = ( G 11 G 12 G 21 G 22 ) . {\displaystyle {\begin{aligned}G=\left({\begin{array}{cc}G_{11}&G_{12}\\G_{21}&G_{22}\end{array}}\right).\end{aligned}}} Similarly, the primitive translation vectors of the lattice describing the extra spots {a∗s, b∗s} are linked to the primitive translation vectors of the reciprocal lattice {a∗, b∗} a s ∗ = G 11 ∗ a ∗ + G 12 ∗ b ∗ , b s ∗ = G 21 ∗ a ∗ + G 22 ∗ b ∗ . {\displaystyle {\begin{aligned}{\textbf {a}}_{s}^{*}&=G_{11}^{*}{\textbf {a}}^{*}+G_{12}^{*}{\textbf {b}}^{*},\\{\textbf {b}}_{s}^{*}&=G_{21}^{*}{\textbf {a}}^{*}+G_{22}^{*}{\textbf {b}}^{*}.\end{aligned}}} G∗ is related to G in the following way G ∗ = ( G − 1 ) T = 1 det ( G ) ( G 22 − G 21 − G 12 G 11 ) . {\displaystyle {\begin{aligned}G^{*}&=(G^{-1})^{\text{T}}\\&={\frac {1}{\det(G)}}\left({\begin{array}{cc}G_{22}&-G_{21}\\-G_{12}&G_{11}\end{array}}\right).\end{aligned}}} ==== Domains ==== An essential problem when considering LEED patterns is the existence of symmetrically equivalent domains. Domains may lead to diffraction patterns that have higher symmetry than the actual surface at hand. The reason is that usually the cross sectional area of the primary electron beam (~1 mm2) is large compared to the average domain size on the surface and hence the LEED pattern might be a superposition of diffraction beams from domains oriented along different axes of the substrate lattice. However, since the average domain size is generally larger than the coherence length of the probing electrons, interference between electrons scattered from different domains can be neglected. Therefore, the total LEED pattern emerges as the incoherent sum of the diffraction patterns associated with the individual domains. Figure 8 shows the superposition of the diffraction patterns for the two orthogonal domains (2×1) and (1×2) on a square lattice, i.e. for the case where one structure is just rotated by 90° with respect to the other. The (1×2) structure and the respective LEED pattern are shown in Figure 7. It is apparent that the local symmetry of the surface structure is twofold while the LEED pattern exhibits a fourfold symmetry. Figure 1 shows a real diffraction pattern of the same situation for the case of a Si(100) surface. However, here the (2×1) structure is formed due to surface reconstruction. === Dynamical theory: multiple scattering === The inspection of the LEED pattern gives a qualitative picture of the surface periodicity i.e. the size of the surface unit cell and to a certain degree of surface symmetries. However it will give no information about the atomic arrangement within a surface unit cell or the sites of adsorbed atoms. For instance, when the whole superstructure in Figure 7 is shifted such that the atoms adsorb in bridge sites instead of on-top sites the LEED pattern stays the same, although the individual spot intensities may somewhat differ. 
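Returning to the matrix notation for superstructures introduced above, the relation G* = (G⁻¹)ᵀ takes one line to evaluate. The sketch below applies it to the (1×2) superstructure of Figure 7 and, as a second common textbook case not taken from the article's figures, to a c(2×2) overlayer on a square lattice.

```python
import numpy as np

def reciprocal_superstructure_matrix(G):
    """G* = (G^-1)^T, linking the extra-spot vectors to the substrate's a*, b*."""
    return np.linalg.inv(np.asarray(G, dtype=float)).T

# (1x2) superstructure (as in Figure 7): a_s = a, b_s = 2b
G_1x2 = [[1, 0],
         [0, 2]]

# c(2x2) superstructure (standard textbook example): a_s = a + b, b_s = -a + b
G_c2x2 = [[ 1, 1],
          [-1, 1]]

for name, G in (("(1x2)", G_1x2), ("c(2x2)", G_c2x2)):
    print(name)
    print(reciprocal_superstructure_matrix(G))
# The (1x2) case gives half-order spots along b*, i.e. extra spots at (h, k/2);
# the c(2x2) case gives extra spots at the (1/2, 1/2)-type positions.
```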
A more quantitative analysis of LEED experimental data can be achieved by analysis of so-called I–V curves, which are measurements of the intensity versus incident electron energy. The I–V curves can be recorded by using a camera connected to computer controlled data handling or by direct measurement with a movable Faraday cup. The experimental curves are then compared to computer calculations based on the assumption of a particular model system. The model is changed in an iterative process until a satisfactory agreement between experimental and theoretical curves is achieved. A quantitative measure for this agreement is the so-called reliability- or R-factor. A commonly used reliability factor is the one proposed by Pendry. It is expressed in terms of the logarithmic derivative of the intensity: L ( E ) = I ′ / I . {\displaystyle {\begin{aligned}L(E)&=I'/I.\end{aligned}}} The R-factor is then given by: R = ∑ g ∫ ( Y gth ( E ) − Y gexpt ( E ) ) 2 d E / ∑ g ∫ ( Y gth 2 ( E ) + Y gexpt 2 ( E ) ) d E , {\displaystyle {\begin{aligned}R&=\sum _{g}\int (Y_{\textrm {gth}}(E)-Y_{\textrm {gexpt}}(E))^{2}dE/\sum _{g}\int (Y_{\textrm {gth}}^{2}(E)+Y_{\textrm {gexpt}}^{2}(E))dE,\end{aligned}}} where Y ( E ) = L − 1 / ( L − 2 + V o i 2 ) {\displaystyle Y(E)=L^{-1}/(L^{-2}+V_{oi}^{2})} and V o i {\displaystyle V_{oi}} is the imaginary part of the electron self-energy. In general, R p ≤ 0.2 {\displaystyle R_{p}\leq 0.2} is considered as a good agreement, R p ≃ 0.3 {\displaystyle R_{p}\simeq 0.3} is considered mediocre and R p ≃ 0.5 {\displaystyle R_{p}\simeq 0.5} is considered a bad agreement. Figure 9 shows examples of the comparison between experimental I–V spectra and theoretical calculations. === Dynamical LEED calculations === The term dynamical stems from the studies of X-ray diffraction and describes the situation where the response of the crystal to an incident wave is included self-consistently and multiple scattering can occur. The aim of any dynamical LEED theory is to calculate the intensities of diffraction of an electron beam impinging on a surface as accurately as possible. A common method to achieve this is the self-consistent multiple scattering approach. One essential point in this approach is the assumption that the scattering properties of the surface, i.e. of the individual atoms, are known in detail. The main task then reduces to the determination of the effective wave field incident on the individual scatters present in the surface, where the effective field is the sum of the primary field and the field emitted from all the other atoms. This must be done in a self-consistent way, since the emitted field of an atom depends on the incident effective field upon it. Once the effective field incident on each atom is determined, the total field emitted from all atoms can be found and its asymptotic value far from the crystal then gives the desired intensities. A common approach in LEED calculations is to describe the scattering potential of the crystal by a "muffin tin" model, where the crystal potential can be imagined being divided up by non-overlapping spheres centered at each atom such that the potential has a spherically symmetric form inside the spheres and is constant everywhere else. The choice of this potential reduces the problem to scattering from spherical potentials, which can be dealt with effectively. The task is then to solve the Schrödinger equation for an incident electron wave in that "muffin tin" potential. 
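As a concrete illustration of the Pendry R-factor defined above, the sketch below evaluates R for a single beam g from two I–V curves sampled on a common energy grid, using Y = L/(1 + L²V₀ᵢ²) with L = I′/I, which is equivalent to the form Y = L⁻¹/(L⁻² + V₀ᵢ²) given in the text. The synthetic curves and the value V₀ᵢ = 4 eV are assumptions made only for the example; in practice the sums run over all measured beams.

```python
import numpy as np

def pendry_r_factor(E, I_expt, I_th, V0i=4.0):
    """Pendry R-factor for one beam; E in eV, both I(E) curves on the same grid."""
    def Y(I):
        L = np.gradient(I, E) / I            # logarithmic derivative L = I'/I
        return L / (1.0 + (L*V0i)**2)        # equivalent to L^-1/(L^-2 + V0i^2)
    Ye, Yt = Y(I_expt), Y(I_th)
    # on a uniform grid the ratio of integrals reduces to a ratio of sums
    return np.sum((Yt - Ye)**2) / np.sum(Yt**2 + Ye**2)

# Synthetic single-beam curves: a small energy shift already raises R noticeably
E = np.linspace(50.0, 300.0, 501)
I_expt = 1.0 + 0.8*np.sin(E/15.0)**2
print(pendry_r_factor(E, I_expt, I_expt))                                # 0.0
print(pendry_r_factor(E, I_expt, 1.0 + 0.8*np.sin((E - 5.0)/15.0)**2))   # > 0
```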
== Related techniques == === Tensor LEED === In LEED the exact atomic configuration of a surface is determined by a trial and error process where measured I–V curves are compared to computer-calculated spectra under the assumption of a model structure. From an initial reference structure a set of trial structures is created by varying the model parameters. The parameters are changed until an optimal agreement between theory and experiment is achieved. However, for each trial structure a full LEED calculation with multiple scattering corrections must be conducted. For systems with a large parameter space the need for computational time might become significant. This is the case for complex surfaces structures or when considering large molecules as adsorbates. Tensor LEED is an attempt to reduce the computational effort needed by avoiding full LEED calculations for each trial structure. The scheme is as follows: One first defines a reference surface structure for which the I–V spectrum is calculated. Next a trial structure is created by displacing some of the atoms. If the displacements are small the trial structure can be considered as a small perturbation of the reference structure and first-order perturbation theory can be used to determine the I–V curves of a large set of trial structures. === Spot profile analysis low-energy electron diffraction (SPA-LEED) === A real surface is not perfectly periodic but has many imperfections in the form of dislocations, atomic steps, terraces and the presence of unwanted adsorbed atoms. This departure from a perfect surface leads to a broadening of the diffraction spots and adds to the background intensity in the LEED pattern. SPA-LEED is a technique where the profile and shape of the intensity of diffraction beam spots is measured. The spots are sensitive to the irregularities in the surface structure and their examination therefore permits more-detailed conclusions about some surface characteristics. Using SPA-LEED may for instance permit a quantitative determination of the surface roughness, terrace sizes, dislocation arrays, surface steps and adsorbates. Although some degree of spot profile analysis can be performed in regular LEED and even LEEM setups, dedicated SPA-LEED setups, which scan the profile of the diffraction spot over a dedicated channeltron detector allow for much higher dynamic range and profile resolution. === Other === Spin-polarized low energy electron diffraction Inelastic low energy electron diffraction Very low-energy electron diffraction (VLEED) Reflection high-energy electron diffraction (RHEED) Ultrafast low-energy electron diffraction (ULEED) == See also == List of surface analysis methods == External links == LEED program packages LEED pattern analyzer (LEEDpat) == References ==
Wikipedia/Low_energy_electron_diffraction
Cyclopædia: or, an Universal Dictionary of Arts and Sciences is a British encyclopedia prepared by Ephraim Chambers and first published in 1728. Six more editions appeared between 1728 and 1751, and there was a Supplement in 1753. The Cyclopædia was one of the first general encyclopedias produced in English. == Noteworthy features == The title page of the first edition summarizes the author’s aims: Cyclopædia: or, An Universal Dictionary of Arts and Sciences; containing the Definitions of the Terms, and Accounts of the Things ſignify'd thereby, in the several Arts, both Liberal and Mechanical, and the ſeveral Sciences, Human and Divine: the Figures, Kinds, Properties, Productions, Preparations, and Uſes, of Things Natural and Artificial; the Riſe, Progreſs, and State of Things Ecclesiastical, Civil, Military, and Commercial: with the ſeveral Syſtems, Sects, Opinions, &c. among Philoſophers, Divines, Mathematicians, Phyſicians, Antiquaries, Criticks, &c. The Whole intended as a Course of Antient and Modern Learning. The first edition included numerous cross-references meant to connect articles scattered by the use of alphabetical order, a dedication to the king, George II, and a philosophical preface at the beginning of Volume 1. Among other things, the preface gives an analysis of forty-seven divisions of knowledge, with classed lists of the articles belonging to each, intended to serve as a table of contents and also as a directory indicating the order in which the articles should be read. == Printing history == A second edition appeared in 1738 in two volumes in folio, with 2,466 pages. This edition was supposedly retouched and amended in a thousand places, with a few added articles and some enlarged articles. Chambers was prevented from doing more because booksellers were alarmed by a bill in Parliament containing a clause to oblige publishers of all improved editions of books to print their improvements separately. The bill, after passing the House of Commons, was unexpectedly thrown out by the House of Lords; but fearing that it might be revived, the booksellers thought it best to retreat, though more than twenty sheets had been printed. Five other editions were published in London from 1739 to 1751–1752. An edition was also published in Dublin in 1742; this and the London editions were all two volumes in folio. An Italian translation appearing in Venice, 1748–1749, 4to, nine volumes, was the first complete Italian encyclopaedia. When Chambers was in France in 1739, he rejected very favorable proposals to publish an edition there dedicated to Louis XV. Chambers' work was carefully done and popular. However, it had defects and omissions, as he was well aware; by his death on 15 May 1740, he had collected and arranged materials for seven new volumes. George Lewis Scott was employed by the booksellers to select articles for the press and to supply others, but he left before the job was finished. The job was then given to John Hill. The Supplement was published in London in 1753 in two folio volumes with 3307 pages and 12 plates. Hill was a botanist, and the botanical part, which had been weak in the Cyclopaedia, was the best. Abraham Rees, a nonconformist minister, published a revised and enlarged edition in 1778–1788, with the supplement and improvements incorporated. It was published in London as a folio of five volumes, 5,010 pages ( not paginated), and 159 plates. It was published in 418 numbers at 6d. each. Rees claimed to have added more than 4,400 new articles. 
At the end, he gave an index of articles, classed under 100 heads, numbering about 57,000 and filling 80 pages. The heads, with 39 cross references, were arranged alphabetically. == Precursors == Among the precursors of Chambers's Cyclopaedia was John Harris's Lexicon Technicum of 1704 (later editions from 1708 to 1744). By its title and content, it was "An Universal English Dictionary of Arts and Sciences: Explaining not only the Terms of Art, but the Arts Themselves." While Harris's work is often classified as a technical dictionary, it also took material from Newton and Halley, among others. == Successors == Chambers's Cyclopaedia in turn became the inspiration for the landmark Encyclopédie of Denis Diderot and Jean le Rond d'Alembert, which owed its inception to a proposed French translation of Chambers's work begun in 1744 by John Mills, assisted by Gottfried Sellius. The later Chambers's Encyclopaedia (1860–1868) had no connection to Ephraim Chambers's work but was the product of Robert Chambers and his brother William. == References == == Further reading == == External links == Chambers' Cyclopaedia, 1728, 2 volumes, with the 1753 supplement, 2 volumes; digitized by the University of Wisconsin Digital Collections Center. Chambers' Cyclopaedia, 1728, 2 volumes, articles are categorized. Searchable 4th edition (1741), digitized at the University of Chicago Library as part of The ARTFL Project. Cyclopaedia, or, An Universal Dictionary of Arts and Sciences: Containing an Explication of the Terms, and an Account of the Things Signified Thereby, in the Several Arts, Both Liberal and Mechanical, and the Several Sciences, Human and Divine sixth edition, 2 volumes; London: Printed for W. Innys et al., 1750
Wikipedia/Cyclopædia,_or_an_Universal_Dictionary_of_Arts_and_Sciences
In fluid mechanics, the force density is the negative gradient of pressure. It has the physical dimensions of force per unit volume. Force density is a vector field representing the flux density of the hydrostatic force within the bulk of a fluid. Force density is represented by the symbol f, and given by the following equation, where p is the pressure: f = − ∇ p {\displaystyle \mathbf {f} =-\nabla p} . The net force on a differential volume element dV of the fluid is: d F = f d V {\displaystyle d\mathbf {F} =\mathbf {f} dV} The force density depends on the boundary conditions imposed at the fluid–solid interface; both stick (no-slip) and mixed stick-slip boundary conditions have been considered. For a sphere placed in an arbitrary non-stationary flow of a viscous incompressible fluid with stick boundary conditions, calculation of the force density leads to a generalisation of Faxén's theorem to force multipole moments of arbitrary order. For a sphere moving in an incompressible fluid in a non-stationary flow with mixed stick-slip boundary conditions, the force density yields expressions of the Faxén type not only for the total force, but also for the total torque and the symmetric force-dipole moment. The force density at a point in a fluid, divided by the mass density, gives the acceleration of the fluid at that point. The force density f is defined as the force per unit volume, so that the net force can be calculated by: F = ∫ f ( r ) d 3 r {\displaystyle \mathbf {F} =\int f(\mathbf {r} )d^{3}\mathbf {r} } . The force density in an electromagnetic field is given in CGS by: f = ρ E + J c × B {\displaystyle \mathbf {f} =\rho \mathbf {E} +{\frac {\mathbf {J} }{c}}\times \mathbf {B} } , where ρ {\displaystyle \rho } is the charge density, E is the electric field, J is the current density, c is the speed of light, and B is the magnetic field. == See also == Body force Pressure gradient Gradient == References ==
Wikipedia/Force_density
In physics, magnetic tension is a restoring force with units of force density that acts to straighten bent magnetic field lines. In SI units, the force density f T {\displaystyle \mathbf {f} _{T}} exerted perpendicular to a magnetic field B {\displaystyle \mathbf {B} } can be expressed as f T = ( B ⋅ ∇ ) B μ 0 {\displaystyle \mathbf {f} _{T}={\frac {\left(\mathbf {B} \cdot \nabla \right)\mathbf {B} }{\mu _{0}}}} where μ 0 {\displaystyle \mu _{0}} is the vacuum permeability. Magnetic tension forces also rely on vector current densities and their interaction with the magnetic field. Plotting magnetic tension along adjacent field lines can give a picture as to their divergence and convergence with respect to each other as well as current densities. Magnetic tension is analogous to the restoring force of rubber bands. == Mathematical statement == In ideal magnetohydrodynamics (MHD) the magnetic tension force in an electrically conducting fluid with a bulk plasma velocity field v {\displaystyle \mathbf {v} } , current density J {\displaystyle \mathbf {J} } , mass density ρ {\displaystyle \rho } , magnetic field B {\displaystyle \mathbf {B} } , and plasma pressure p {\displaystyle p} can be derived from the Cauchy momentum equation: ρ ( ∂ ∂ t + v ⋅ ∇ ) v = J × B − ∇ p , {\displaystyle \rho \left({\frac {\partial }{\partial t}}+\mathbf {v} \cdot \nabla \right)\mathbf {v} =\mathbf {J} \times \mathbf {B} -\nabla p,} where the first term on the right hand side represents the Lorentz force and the second term represents pressure gradient forces. The Lorentz force can be expanded using Ampère's law, μ 0 J = ∇ × B {\displaystyle \mu _{0}\mathbf {J} =\nabla \times \mathbf {B} } , and the vector identity 1 2 ∇ ( B ⋅ B ) = ( B ⋅ ∇ ) B + B × ( ∇ × B ) {\displaystyle {\tfrac {1}{2}}\nabla (\mathbf {B} \cdot \mathbf {B} )=(\mathbf {B} \cdot \nabla )\mathbf {B} +\mathbf {B} \times (\nabla \times \mathbf {B} )} to give J × B = ( B ⋅ ∇ ) B μ 0 − ∇ ( B 2 2 μ 0 ) , {\displaystyle \mathbf {J} \times \mathbf {B} ={(\mathbf {B} \cdot \nabla )\mathbf {B} \over \mu _{0}}-\nabla \left({\frac {B^{2}}{2\mu _{0}}}\right),} where the first term on the right hand side is the magnetic tension and the second term is the magnetic pressure force. The force due to changes in the magnitude of B {\displaystyle \mathbf {B} } and its direction can be separated by writing B = B b {\displaystyle \mathbf {B} =B\mathbf {b} } with B = | B | {\displaystyle B=|\mathbf {B} |} and b {\displaystyle \mathbf {b} } a unit vector: ( B ⋅ ∇ ) B μ 0 = B 2 μ 0 ( b ⋅ ∇ ) b = B 2 μ 0 κ {\displaystyle {(\mathbf {B} \cdot \nabla )\mathbf {B} \over \mu _{0}}={\frac {B^{2}}{\mu _{0}}}(\mathbf {b} \cdot \nabla )\mathbf {b} ={\frac {B^{2}}{\mu _{0}}}{\boldsymbol {\kappa }}} where the spatial constancy of the magnitude has been assumed ∇ B = 0 {\displaystyle \nabla B=0} and κ = ( b ⋅ ∇ ) b {\displaystyle {\boldsymbol {\kappa }}=(\mathbf {b} \cdot \nabla )\mathbf {b} } has magnitude equal to the curvature, or the reciprocal of the radius of curvature, and is directed from a point on a magnetic field line to the center of curvature. Therefore, as the curvature of the magnetic field line increases, so too does the magnetic tension force resisting this curvature. Magnetic tension and pressure are both implicitly included in the Maxwell stress tensor. Terms representing these two forces are present along the main diagonal where they act on differential area elements normal to the corresponding axis. 
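To make the tension term concrete, the following numpy sketch evaluates f_T = (B·∇)B/μ₀ by finite differences for a purely azimuthal field B = B₀φ̂, whose circular field lines have curvature 1/r; the computed force points toward the axis with magnitude close to B₀²/(μ₀r), as the curvature form above predicts. The grid, the field strength, and the sample point are arbitrary choices made only for the illustration.

```python
import numpy as np

mu0 = 4e-7*np.pi
N, B0 = 256, 1.0e-3                       # grid size, field strength in tesla

x = np.linspace(-1.0, 1.0, N)
X, Y = np.meshgrid(x, x, indexing="ij")
r = np.hypot(X, Y) + 1e-12                # avoid division by zero on the axis
Bx, By = -B0*Y/r, B0*X/r                  # azimuthal field B = B0 * phi_hat

dx = x[1] - x[0]
ddx = lambda f: np.gradient(f, dx, axis=0)
ddy = lambda f: np.gradient(f, dx, axis=1)

# Magnetic tension force density f_T = (B . grad) B / mu0
fTx = (Bx*ddx(Bx) + By*ddy(Bx)) / mu0
fTy = (Bx*ddx(By) + By*ddy(By)) / mu0

i, j = N//2, 3*N//4                       # a point near the +y axis, r ~ 0.5
print(fTy[i, j], -B0**2/(mu0*r[i, j]))    # both ~ -1.6 N/m^3: force points inward
```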
== Plasma physics == Magnetic tension is particularly important in plasma physics and MHD, where it controls the dynamics of some systems and the shape of magnetic structures. For example, in a homogeneous magnetic field and in the absence of gravity, magnetic tension provides the sole restoring force for linear Alfvén waves. == See also == Magnetic pinch Magnetosonic wave == References ==
Wikipedia/Magnetic_tension_force
The Grad–Shafranov equation (H. Grad and H. Rubin (1958); Vitalii Dmitrievich Shafranov (1966)) is the equilibrium equation in ideal magnetohydrodynamics (MHD) for a two dimensional plasma, for example the axisymmetric toroidal plasma in a tokamak. This equation takes the same form as the Hicks equation from fluid dynamics. This equation is a two-dimensional, nonlinear, elliptic partial differential equation obtained from the reduction of the ideal MHD equations to two dimensions, often for the case of toroidal axisymmetry (the case relevant in a tokamak). Taking ( r , θ , z ) {\displaystyle (r,\theta ,z)} as the cylindrical coordinates, the flux function ψ {\displaystyle \psi } is governed by the equation,where μ 0 {\displaystyle \mu _{0}} is the magnetic permeability, p ( ψ ) {\displaystyle p(\psi )} is the pressure, F ( ψ ) = r B θ {\displaystyle F(\psi )=rB_{\theta }} and the magnetic field and current are, respectively, given by B = 1 r ∇ ψ × e ^ θ + F r e ^ θ , μ 0 J = 1 r d F d ψ ∇ ψ × e ^ θ − [ ∂ ∂ r ( 1 r ∂ ψ ∂ r ) + 1 r ∂ 2 ψ ∂ z 2 ] e ^ θ . {\displaystyle {\begin{aligned}\mathbf {B} &={\frac {1}{r}}\nabla \psi \times {\hat {\mathbf {e} }}_{\theta }+{\frac {F}{r}}{\hat {\mathbf {e} }}_{\theta },\\\mu _{0}\mathbf {J} &={\frac {1}{r}}{\frac {dF}{d\psi }}\nabla \psi \times {\hat {\mathbf {e} }}_{\theta }-\left[{\frac {\partial }{\partial r}}\left({\frac {1}{r}}{\frac {\partial \psi }{\partial r}}\right)+{\frac {1}{r}}{\frac {\partial ^{2}\psi }{\partial z^{2}}}\right]{\hat {\mathbf {e} }}_{\theta }.\end{aligned}}} The nature of the equilibrium, whether it be a tokamak, reversed field pinch, etc. is largely determined by the choices of the two functions F ( ψ ) {\displaystyle F(\psi )} and p ( ψ ) {\displaystyle p(\psi )} as well as the boundary conditions. == Derivation (in Cartesian coordinates) == In the following, it is assumed that the system is 2-dimensional with z {\displaystyle z} as the invariant axis, i.e. ∂ ∂ z {\textstyle {\frac {\partial }{\partial z}}} produces 0 for any quantity. Then the magnetic field can be written in cartesian coordinates as B = ( ∂ A ∂ y , − ∂ A ∂ x , B z ( x , y ) ) , {\displaystyle \mathbf {B} =\left({\frac {\partial A}{\partial y}},-{\frac {\partial A}{\partial x}},B_{z}(x,y)\right),} or more compactly, B = ∇ A × z ^ + B z z ^ , {\displaystyle \mathbf {B} =\nabla A\times {\hat {\mathbf {z} }}+B_{z}{\hat {\mathbf {z} }},} where A ( x , y ) z ^ {\displaystyle A(x,y){\hat {\mathbf {z} }}} is the vector potential for the in-plane (x and y components) magnetic field. Note that based on this form for B we can see that A is constant along any given magnetic field line, since ∇ A {\displaystyle \nabla A} is everywhere perpendicular to B. (Also note that -A is the flux function ψ {\displaystyle \psi } mentioned above.) Two dimensional, stationary, magnetic structures are described by the balance of pressure forces and magnetic forces, i.e.: ∇ p = j × B , {\displaystyle \nabla p=\mathbf {j} \times \mathbf {B} ,} where p is the plasma pressure and j is the electric current. It is known that p is a constant along any field line, (again since ∇ p {\displaystyle \nabla p} is everywhere perpendicular to B). Additionally, the two-dimensional assumption ( ∂ ∂ z = 0 {\textstyle {\frac {\partial }{\partial z}}=0} ) means that the z- component of the left hand side must be zero, so the z-component of the magnetic force on the right hand side must also be zero. This means that j ⊥ × B ⊥ = 0 {\displaystyle \mathbf {j} _{\perp }\times \mathbf {B} _{\perp }=0} , i.e. 
j ⊥ {\displaystyle \mathbf {j} _{\perp }} is parallel to B ⊥ {\displaystyle \mathbf {B} _{\perp }} . The right hand side of the previous equation can be considered in two parts: j × B = j z ( z ^ × B ⊥ ) + j ⊥ × z ^ B z , {\displaystyle \mathbf {j} \times \mathbf {B} =j_{z}({\hat {\mathbf {z} }}\times \mathbf {B_{\perp }} )+\mathbf {j_{\perp }} \times {\hat {\mathbf {z} }}B_{z},} where the ⊥ {\displaystyle \perp } subscript denotes the component in the plane perpendicular to the z {\displaystyle z} -axis. The z {\displaystyle z} component of the current in the above equation can be written in terms of the one-dimensional vector potential as j z = − 1 μ 0 ∇ 2 A . {\displaystyle j_{z}=-{\frac {1}{\mu _{0}}}\nabla ^{2}A.} The in plane field is B ⊥ = ∇ A × z ^ , {\displaystyle \mathbf {B} _{\perp }=\nabla A\times {\hat {\mathbf {z} }},} and using Maxwell–Ampère's equation, the in plane current is given by j ⊥ = 1 μ 0 ∇ B z × z ^ . {\displaystyle \mathbf {j} _{\perp }={\frac {1}{\mu _{0}}}\nabla B_{z}\times {\hat {\mathbf {z} }}.} In order for this vector to be parallel to B ⊥ {\displaystyle \mathbf {B} _{\perp }} as required, the vector ∇ B z {\displaystyle \nabla B_{z}} must be perpendicular to B ⊥ {\displaystyle \mathbf {B} _{\perp }} , and B z {\displaystyle B_{z}} must therefore, like p {\displaystyle p} , be a field-line invariant. Rearranging the cross products above leads to z ^ × B ⊥ = ∇ A − ( z ^ ⋅ ∇ A ) z ^ = ∇ A , {\displaystyle {\hat {\mathbf {z} }}\times \mathbf {B} _{\perp }=\nabla A-(\mathbf {\hat {z}} \cdot \nabla A)\mathbf {\hat {z}} =\nabla A,} and j ⊥ × B z z ^ = B z μ 0 ( z ^ ⋅ ∇ B z ) z ^ − 1 μ 0 B z ∇ B z = − 1 μ 0 B z ∇ B z . {\displaystyle \mathbf {j} _{\perp }\times B_{z}\mathbf {\hat {z}} ={\frac {B_{z}}{\mu _{0}}}(\mathbf {\hat {z}} \cdot \nabla B_{z})\mathbf {\hat {z}} -{\frac {1}{\mu _{0}}}B_{z}\nabla B_{z}=-{\frac {1}{\mu _{0}}}B_{z}\nabla B_{z}.} These results can be substituted into the expression for ∇ p {\displaystyle \nabla p} to yield: ∇ p = − [ 1 μ 0 ∇ 2 A ] ∇ A − 1 μ 0 B z ∇ B z . {\displaystyle \nabla p=-\left[{\frac {1}{\mu _{0}}}\nabla ^{2}A\right]\nabla A-{\frac {1}{\mu _{0}}}B_{z}\nabla B_{z}.} Since p {\displaystyle p} and B z {\displaystyle B_{z}} are constants along a field line, and functions only of A {\displaystyle A} , hence ∇ p = d p d A ∇ A {\displaystyle \nabla p={\frac {dp}{dA}}\nabla A} and ∇ B z = d B z d A ∇ A {\displaystyle \nabla B_{z}={\frac {dB_{z}}{dA}}\nabla A} . Thus, factoring out ∇ A {\displaystyle \nabla A} and rearranging terms yields the Grad–Shafranov equation: ∇ 2 A = − μ 0 d d A ( p + B z 2 2 μ 0 ) . {\displaystyle \nabla ^{2}A=-\mu _{0}{\frac {d}{dA}}\left(p+{\frac {B_{z}^{2}}{2\mu _{0}}}\right).} == Derivation in contravariant representation == This derivation is only used for Tokamaks, but it can be enlightening. 
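As a quick consistency check of the Cartesian Grad–Shafranov equation obtained above, the following sympy sketch verifies a simple periodic equilibrium symbolically. The flux function A = A₀cos(kₓx)cos(k_y y), the constant axial field B_z, and the pressure profile quadratic in A are illustrative assumptions, not taken from the article, but together they satisfy ∇²A = −μ₀ d/dA(p + B_z²/2μ₀) exactly.

```python
import sympy as sp

x, y, kx, ky, A0, mu0, p0, Bz0 = sp.symbols("x y k_x k_y A_0 mu_0 p_0 B_z0", positive=True)
a = sp.Symbol("a")                                # stand-in for A when differentiating profiles

A = A0*sp.cos(kx*x)*sp.cos(ky*y)                  # assumed flux function
p_of_A = p0 + (kx**2 + ky**2)*a**2/(2*mu0)        # assumed pressure profile p(A)
Bz_of_A = Bz0                                     # constant axial field B_z(A)

lhs = sp.diff(A, x, 2) + sp.diff(A, y, 2)         # nabla^2 A
rhs = -mu0*sp.diff(p_of_A + Bz_of_A**2/(2*mu0), a).subs(a, A)
print(sp.simplify(lhs - rhs))                     # prints 0: the equation is satisfied
```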
Using the definition of 'The Theory of Toroidally Confined Plasmas 1:3'(Roscoe White), Writing B → {\displaystyle {\vec {B}}} by contravariant basis ( ∇ Ψ , ∇ ϕ , ∇ ζ ) {\displaystyle (\nabla \Psi ,\nabla \phi ,\nabla \zeta )} : B → = ∇ Ψ × ∇ ϕ + F ¯ ∇ ϕ , {\displaystyle {\vec {B}}=\nabla \Psi \times \nabla \phi +{\bar {F}}\nabla \phi ,} we have j → {\displaystyle {\vec {j}}} : μ 0 j → = ∇ × B → = − Δ ∗ Ψ ∇ ϕ + ∇ F ¯ × ∇ ϕ , where Δ ∗ = r ∂ r ( r − 1 ∂ r ) + ∂ ϕ 2 ; {\displaystyle \mu _{0}{\vec {j}}=\nabla \times {\vec {B}}=-\Delta ^{*}\Psi \nabla \phi +\nabla {\bar {F}}\times \nabla \phi \quad {\text{, where}}\ \Delta ^{*}=r\partial _{r}(r^{-1}\partial _{r})+\partial _{\phi }^{2}{\text{;}}} then force balance equation: μ 0 j → × B → = μ 0 ∇ p . {\displaystyle \mu _{0}{\vec {j}}\times {\vec {B}}=\mu _{0}\nabla p{\text{.}}} Working out, we have: − Δ ∗ Ψ = F ¯ d F ¯ d Ψ + μ 0 R 2 d p d Ψ . {\displaystyle -\Delta ^{*}\Psi ={\bar {F}}{\frac {d{\bar {F}}}{d\Psi }}+\mu _{0}R^{2}{\frac {dp}{d\Psi }}{\text{.}}} == References == == Further reading == Grad, H., and Rubin, H. (1958) Hydromagnetic Equilibria and Force-Free Fields Archived 2023-06-21 at the Wayback Machine. Proceedings of the 2nd UN Conf. on the Peaceful Uses of Atomic Energy, Vol. 31, Geneva: IAEA p. 190. Shafranov, V.D. (1966) Plasma equilibrium in a magnetic field, Reviews of Plasma Physics, Vol. 2, New York: Consultants Bureau, p. 103. Woods, Leslie C. (2004) Physics of plasmas, Weinheim: WILEY-VCH Verlag GmbH & Co. KGaA, chapter 2.5.4 Haverkort, J.W. (2009) Axisymmetric Ideal MHD Tokamak Equilibria. Notes about the Grad–Shafranov equation, selected aspects of the equation and its analytical solutions. Haverkort, J.W. (2009) Axisymmetric Ideal MHD equilibria with Toroidal Flow. Incorporation of toroidal flow, relation to kinetic and two-fluid models, and discussion of specific analytical solutions.
Wikipedia/Grad–Shafranov_equation
Magnetohydrodynamics is a peer-reviewed physics journal published by the Institute of Physics of the University of Latvia, covering fundamental and applied problems of magnetohydrodynamics in incompressible media, including magnetic fluids. This involves both classical and emerging areas in the physics, thermodynamics, hydrodynamics, and electrodynamics of magnetic fluids. As of 2010, the editor-in-chief is Andrejs Cēbers of the Institute of Physics of the University of Latvia. Since 2001 the journal has been published solely in English. The English and online edition were published by Kluwer Academic Publishers (now part of Springer-Verlag) through volume 36, number 4 (2001). Now the entire content is available by subscription directly from the journal's website. == Abstracting and indexing == The journal is abstracted and indexed in the Science Citation Index Expanded, as well as the Journal Citation Reports, and Inspec. == References == == External links == Official website
Wikipedia/Magnetohydrodynamics_(journal)
Electrohydrodynamics (EHD), also known as electro-fluid-dynamics (EFD) or electrokinetics, is the study of the dynamics of electrically charged fluids. Electrohydrodynamics (EHD) is a joint domain of electrodynamics and fluid dynamics mainly focused on the fluid motion induced by electric fields. EHD, in its simplest form, involves the application of an electric field to a fluid medium, resulting in fluid flow, form, or properties manipulation. These mechanisms arise from the interaction between the electric fields and charged particles or polarization effects within the fluid. The generation and movement of charge carriers (ions) in a fluid subjected to an electric field are the underlying physics of all EHD-based technologies. The electric forces acting on particles consist of electrostatic (Coulomb) and electrophoresis force (first term in the following equation)., dielectrophoretic force (second term in the following equation), and electrostrictive force (third term in the following equation): F e = ρ e E → − 1 2 ε 0 E → 2 ▽ ε r + 1 2 ε 0 ▽ ( E → 2 ρ f ( ∂ ε r ∂ ρ f ) ) {\displaystyle F_{e}=\rho _{e}{\overrightarrow {E}}-{1 \over 2}\varepsilon _{0}{\overrightarrow {E}}^{2}\triangledown \varepsilon _{r}+{1 \over 2}\varepsilon _{0}\triangledown {\Bigl (}{\overrightarrow {E}}^{2}\rho _{f}\left({\frac {\partial \varepsilon _{r}}{\partial \rho _{f}}}\right){\Bigr )}} This electrical force is then inserted in Navier-Stokes equation, as a body (volumetric) force.EHD covers the following types of particle and fluid transport mechanisms: electrophoresis, electrokinesis, dielectrophoresis, electro-osmosis, and electrorotation. In general, the phenomena relate to the direct conversion of electrical energy into kinetic energy, and vice versa. In the first instance, shaped electrostatic fields (ESF's) create hydrostatic pressure (HSP, or motion) in dielectric media. When such media are fluids, a flow is produced. If the dielectric is a vacuum or a solid, no flow is produced. Such flow can be directed against the electrodes, generally to move the electrodes. In such case, the moving structure acts as an electric motor. Practical fields of interest of EHD are the common air ioniser, electrohydrodynamic thrusters and EHD cooling systems. In the second instance, the converse takes place. A powered flow of medium within a shaped electrostatic field adds energy to the system which is picked up as a potential difference by electrodes. In such case, the structure acts as an electrical generator. == Electrokinesis == Electrokinesis is the particle or fluid transport produced by an electric field acting on a fluid having a net mobile charge. (See -kinesis for explanation and further uses of the -kinesis suffix.) Electrokinesis was first observed by Ferdinand Frederic Reuss during 1808, in the electrophoresis of clay particles The effect was also noticed and publicized in the 1920s by Thomas Townsend Brown which he called the Biefeld–Brown effect, although he seems to have misidentified it as an electric field acting on gravity. The flow rate in such a mechanism is linear in the electric field. Electrokinesis is of considerable practical importance in microfluidics, because it offers a way to manipulate and convey fluids in microsystems using only electric fields, with no moving parts. 
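Referring back to the electric body-force density given in the introduction, the sketch below assembles its three contributions, the Coulomb, dielectrophoretic and electrostrictive terms, on a one-dimensional grid using finite differences, before such a force would be inserted into the Navier–Stokes equations. All profiles (field strength, space-charge distribution, permittivity gradient, fluid density, and ∂εᵣ/∂ρ_f) are made-up numbers chosen only to show how the terms are put together.

```python
import numpy as np

eps0 = 8.854e-12   # vacuum permittivity, F/m

def ehd_body_force(x, rho_e, E, eps_r, rho_f, deps_drho):
    """Electric body-force density along x (N/m^3):
    Coulomb + dielectrophoretic + electrostrictive terms."""
    coulomb       = rho_e*E
    dielectrophor = -0.5*eps0*E**2*np.gradient(eps_r, x)
    electrostrict =  0.5*eps0*np.gradient(E**2*rho_f*deps_drho, x)
    return coulomb + dielectrophor + electrostrict

x     = np.linspace(0.0, 1e-3, 200)          # 1 mm electrode gap
E     = 1e6*np.ones_like(x)                  # 1 MV/m applied field
rho_e = 1e-3*np.exp(-x/2e-4)                 # injected space charge, C/m^3
eps_r = 2.0 + 0.5*x/x[-1]                    # weak permittivity gradient
rho_f = 800.0*np.ones_like(x)                # fluid mass density, kg/m^3

f = ehd_body_force(x, rho_e, E, eps_r, rho_f, deps_drho=1e-3)
print(f.min(), f.max())                      # net body force entering Navier-Stokes
```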
The force acting on the fluid, is given by the equation F = I d k {\displaystyle F={\frac {Id}{k}}} where, F {\displaystyle F} is the resulting force, measured in newtons, I {\displaystyle I} is the current, measured in amperes, d {\displaystyle d} is the distance between electrodes, measured in metres, and k {\displaystyle k} is the ion mobility coefficient of the dielectric fluid, measured in m2/(V·s). If the electrodes are free to move within the fluid, while keeping their distance fixed from each other, then such a force will actually propel the electrodes with respect to the fluid. Electrokinesis has also been observed in biology, where it was found to cause physical damage to neurons by inciting movement in their membranes. It is discussed in R. J. Elul's "Fixed charge in the cell membrane" (1967). == Water electrokinetics == In October 2003, Dr. Daniel Kwok, Dr. Larry Kostiuk and two graduate students from the University of Alberta discussed a method to convert hydrodynamic to electrical energy by exploiting the natural electrokinetic properties of a liquid such as ordinary tap water, by pumping fluid through tiny micro-channels with a pressure difference. This technology could lead to a practical and clean energy storage device, replacing batteries for devices such as mobile phones or calculators which would be charged up by simply compressing water to high pressure. Pressure would then be released on demand, for the fluid to flow through micro-channels. When water travels, or streams over a surface, the ions in the water "rub" against the solid, leaving the surface slightly charged. Kinetic energy from the moving ions would thus be converted to electrical energy. Although the power generated from a single channel is extremely small, millions of parallel micro-channels can be used to increase the power output. This streaming potential, water-flow phenomenon was discovered in 1859 by German physicist Georg Hermann Quincke. == Electrokinetic instabilities == The fluid flows in microfluidic and nanofluidic devices are often stable and strongly damped by viscous forces (with Reynolds numbers of order unity or smaller). However, heterogeneous ionic conductivity fields in the presence of applied electric fields can, under certain conditions, generate an unstable flow field owing to electrokinetic instabilities (EKI). Conductivity gradients are prevalent in on-chip electrokinetic processes such as preconcentration methods (e.g. field amplified sample stacking and isoelectric focusing), multidimensional assays, and systems with poorly specified sample chemistry. The dynamics and periodic morphology of electrokinetic instabilities are similar to other systems with Rayleigh–Taylor instabilities. The particular case of a flat plane geometry with homogeneous ions injection in the bottom side leads to a mathematical frame identical to the Rayleigh–Bénard convection. EKI's can be leveraged for rapid mixing or can cause undesirable dispersion in sample injection, separation and stacking. These instabilities are caused by a coupling of electric fields and ionic conductivity gradients that results in an electric body force. This coupling results in an electric body force in the bulk liquid, outside the electric double layer, that can generate temporal, convective, and absolute flow instabilities. Electrokinetic flows with conductivity gradients become unstable when the electroviscous stretching and folding of conductivity interfaces grows faster than the dissipative effect of molecular diffusion. 
Since these flows are characterized by low velocities and small length scales, the Reynolds number is below 0.01 and the flow is laminar. The onset of instability in these flows is best described as an electric "Rayleigh number". == Misc == Liquids can be printed at nanoscale by pyro-EHD. == See also == Magnetohydrodynamic drive Magnetohydrodynamics Electrodynamic droplet deformation Electrospray Electrokinetic phenomena Optoelectrofluidics Electrostatic precipitator List of textbooks in electromagnetism == References == == External links == Dr. Larry Kostiuk's website. Science-daily article about the discovery. BBC article with graphics.
Wikipedia/Electrohydrodynamics
Lorentz force velocimetry (LFV) is a noncontact electromagnetic flow measurement technique. LFV is particularly suited for the measurement of velocities in liquid metals like steel or aluminium and is currently under development for metallurgical applications. The measurement of flow velocities in hot and aggressive liquids such as liquid aluminium and molten glass constitutes one of the grand challenges of industrial fluid mechanics. Apart from liquids, LFV can also be used to measure the velocity of solid materials as well as for detection of micro-defects in their structures. A Lorentz force velocimetry system is called Lorentz force flowmeter (LFF). A LFF measures the integrated or bulk Lorentz force resulting from the interaction between a liquid metal in motion and an applied magnetic field. In this case the characteristic length of the magnetic field is of the same order of magnitude as the dimensions of the channel. It must be addressed that in the case where localized magnetic fields are used, it is possible to perform local velocity measurements and thus the term Lorentz force velocimeter is used. == Introduction == The use of magnetic fields in flow measurement date back to the 19th century, when in 1832 Michael Faraday attempted to determine the velocity of the River Thames. Faraday applied a method in which a flow (the river flow) is exposed to a magnetic field (earth magnetic field) and the induced voltage is measured using two electrodes across the same flow. This method is the basis of one of the most successful commercial applications in flow metering known as the inductive flowmeter. The theory of such devices has been developed and comprehensively summarized by Prof. J. A. Shercliff in the early 1950s. While inductive flowmeters are widely used for flow measurement in fluids at room temperatures such as beverages, chemicals and waste water, they are not suited for flow measurement of media such as hot, aggressive or for local measurements where surrounding obstacles limit access to the channel or pipe. Since they require electrodes to be inserted into the fluid, their use is limited to applications at temperatures far below the melting points of practically relevant metals. The Lorentz force velocimetry was invented by the A. Shercliff. However, it did not find practical application in these early years up until recent technical advances; in manufacturing of rare earth and non rare-earth strong permanent magnets, accurate force measurement techniques, multiphysical process simulation software for magnetohydrodynamic (MHD) problems that this principle could be turned into a feasible working flow measurement technique. LFV is currently being developed for applications in metallurgy as well as in other areas. Based on theory introduced by Shercliff there have been several attempts to develop flow measurement methods which do not require any mechanical contact with the fluid,. Among them is the eddy current flowmeter which measures flow-induced changes in the electric impedance of coils interacting with the flow. More recently, a non-contact method was proposed in which a magnetic field is applied to the flow and the velocity is determined from measurements of flow-induced deformations of the applied magnetic field,. == Principle and physical interpretation == The principle of Lorentz force velocimetry is based on measurements of the Lorentz force that occurs due to the flow of a conductive fluid under the influence of a variable magnetic field. 
According to Faraday's law, when a metal or conductive fluid moves through a magnetic field, eddy currents generate there by electromotive force in zones of maximal magnetic field gradient (in the present case in the inlet and outlet zones). Eddy current in its turn creates induced magnetic field according to Ampère's law. The interaction between eddy currents and total magnetic field gives rise to Lorentz force that breaks the flow. By virtue of Newton's third law "actio=reactio" a force with the same magnitude but opposite direction acts upon its source - permanent magnet. Direct measurement of the magnet's reaction force allows to determine fluid's velocity, since this force is proportional to flow rate. The Lorentz force used in LFV has nothing to do with magnetic attraction or repulsion. It is only due to the eddy currents whose strength depends on the electrical conductivity, the relative velocity between the liquid and the permanent magnet as well as the magnitude of the magnetic field. So, when a liquid metal moves across magnetic field lines, the interaction of the magnetic field (which are either produced by a current-carrying coil or by a permanent magnet) with the induced eddy currents leads to a Lorentz force (with density f → = j → × B → {\displaystyle {\vec {f}}={\vec {j}}\times {\vec {B}}} ) which brakes the flow. The Lorentz force density is roughly where σ {\displaystyle \sigma } is the electrical conductivity of the fluid, v {\displaystyle v} its velocity, and B {\displaystyle B} the magnitude of the magnetic field. This fact is well known and has found a variety of applications. This force is proportional to the velocity and conductivity of the fluid, and its measurement is the key idea of LFV. With the recent advent of powerful rare earth permanent magnets (like NdFeB, SmCo and other kind of magnets) and tools for designing sophisticated systems by permanent magnet the practical realization of this principle has now become possible. The primary magnetic field B → ( r → ) {\displaystyle {\vec {B}}\left({\vec {r}}\right)} can be produced by a permanent magnet or a primary current J → ( r → ) {\displaystyle {\vec {J}}\left({\vec {r}}\right)} (see Fig. 1). The motion of the fluid under the action of the primary field induces eddy currents which are sketched in figure 3. They will be denoted by j → ( r → ) {\displaystyle {\vec {j}}\left({\vec {r}}\right)} and are called secondary currents. The interaction of the secondary current with the primary magnetic field is responsible for the Lorentz force within the fluid which breaks the flow. The secondary currents create a magnetic field b → ( r → ) {\displaystyle {\vec {b}}\left({\vec {r}}\right)} , the secondary magnetic field. The interaction of the primary electric current with the secondary magnetic field gives rise to the Lorentz force on the magnet system The reciprocity principle for the Lorentz force velocimetry states that the electromagnetic forces on the fluid and on the magnet system have the same magnitude and act in opposite direction, namely The general scaling law that relates the measured force to the unknown velocity can be derived with reference to the simplified situation shown in Fig. 2. Here a small permanent magnet with dipole moment m {\displaystyle m} is located at a distance L {\displaystyle L} above a semi-infinite fluid moving with uniform velocity v {\displaystyle v} parallel to its free surface. 
The analysis that leads to the scaling relation can be made quantitative by assuming that the magnet is a point dipole with dipole moment m → = m e ^ z {\displaystyle {\vec {m}}=m{\hat {e}}_{z}} whose magnetic field is given by where R → = r → − L e ^ z {\displaystyle {\vec {R}}={\vec {r}}-L{\hat {e}}_{z}} and R =∣ R → ∣ {\displaystyle R=\mid {\vec {R}}\mid } . Assuming a velocity field v → = v e ^ x {\displaystyle {\vec {v}}=v{\hat {e}}_{x}} for z < 0 {\displaystyle z<0} , the eddy currents can be computed from Ohm's law for a moving electrically conducting fluid subject to the boundary conditions J z = 0 {\displaystyle J_{z}=0} at z = 0 {\displaystyle z=0} and J z → 0 {\displaystyle J_{z}\to 0} as z → 1 {\displaystyle z\to 1} . First, the scalar electric potential is obtained as from which the electric current density is readily calculated. They are indeed horizontal. Once they are known, the Biot–Savart law can be used to compute the secondary magnetic field b → ( r → ) {\displaystyle {\vec {b}}\left({\vec {r}}\right)} . Finally, the force is given by where the gradient of b → {\displaystyle {\vec {b}}} has to be evaluated at the location of the dipole. For the problem at hand all these steps can be carried out analytically without any approximation leading to the result This provides us with the estimate == Conceptual setups == Lorentz force flowmeters are usually classified in several main conceptual setups. Some of them designed as static flowmeters where the magnet system is at rest and one measures the force acting on it. Alternatively, they can be designed as rotary flowmeters where the magnets are arranged on a rotating wheel and the spinning velocity is a measure of the flow velocity. Obviously, the force acting on a Lorentz force flowmeter depends both on the velocity distribution and on the shape of the magnet system. This classification depends on the relative direction of the magnetic field that is being applied respect to the direction of the flow. In Figure 3 one can distinguish diagrams of the longitudinal and the transverse Lorentz force flowmeters. It is important to mention that even that in figures only a coil or a magnet are sketched, the principle holds for both. Rotary LFF consists of a freely rotating permanent magnet (or an array of magnets mounted on a flywheel as shown in figure 4), which is magnetized perpendicularly to the axle it is mounted on. When such a system is placed close to a duct carrying an electrically conducting fluid flow, it rotates so that the driving torque due to the eddy currents induced by the flow is balanced by the braking torque induced by the rotation itself. The equilibrium rotation rate varies directly with the flow velocity and inversely with the distance between the magnet and the duct. In this case it is possible to measure either the torque on the magnet system or the angular velocity at which the wheel spins. == Practical applications == LFV is sought to be extended to all fluid or solid materials, providing that they are electrical conductors. As shown before, the Lorentz force generated by the flow depend linearly on the conductivity of the fluid. Typically, the electrical conductivity of molten metals is of the order of 10 6 S / m {\displaystyle 10^{6}~S/m} so the Lorentz force is in the range of some mN. However, equally important liquids as glass melts and electrolytic solutions have a conductivity of ∼ 1 S / m {\displaystyle \sim ~1~S/m} giving rise to a Lorentz force of the order of micronewtons or even smaller. 
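The orders of magnitude quoted above (millinewtons for liquid metals, micronewtons or less for weakly conducting liquids) can be reproduced with a back-of-the-envelope estimate: integrating a force density of order σvB² over an interaction region of size L gives F ∼ σvB²L³. In the sketch below the values of v, B and L are arbitrary but representative assumptions, and all order-one geometric factors are dropped.

```python
def lorentz_force_estimate(sigma, v, B, L):
    """Order-of-magnitude force on the magnet system, F ~ sigma * v * B^2 * L^3.
    sigma: conductivity (S/m), v: velocity (m/s), B: field (T), L: size (m)."""
    return sigma * v * B**2 * L**3

for name, sigma in [("liquid metal", 1e6), ("electrolyte", 1.0)]:
    F = lorentz_force_estimate(sigma, v=0.1, B=0.1, L=0.03)
    print(f"{name:12s}  F ~ {F:.1e} N")
# liquid metal   F ~ 2.7e-02 N   (tens of millinewtons)
# electrolyte    F ~ 2.7e-08 N   (well below a micronewton)
```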
=== High Conducting media: liquid or solid metals === Among different possibilities to measure the effect on the magnet system, it has been successfully applied those based on the measurement of the deflection of a parallel spring under an applied force. Firstly using a strain gauge and then recording the deflection of a quartz spring with an interferometer, in whose case the deformation is detected to within 0.1 nm. === Low Conducting media: Electrolytic solution or glass melts === Recent advance in LFV made it possible for metering flow velocity of media which has very low electroconductivity, particularly by varying parameters as well as using some state-of-art force measurement devices enable to measure flow velocity of electrolyte solutions with conductivity that is 106 times smaller than that for the liquid metals. There are variety of industrial and scientific applications where noncontact flow measurement through opaque walls or in opaque liquids is desirable. Such applications include flow metering of chemicals, food, beverages, blood, aqueous solutions in the pharmaceutical industry, molten salts in solar thermal power plants, and high temperature reactors as well as glass melts for high-precision optics. A noncontact flowmeter is a device that is neither in mechanical contact with the liquid nor with the wall of the pipe in which the liquid flows. Noncontact flowmeters are equally useful when walls are contaminated like in the processing of radioactive materials, when pipes are strongly vibrating or in cases when portable flowmeters are to be developed. If the liquid and the wall of the pipe are transparent and the liquid contains tracer particles, optical measurement techniques, are effective enough tool to perform noncontact measurements. However, if either the wall or the liquid are opaque as is often the case in food production, chemical engineering, glass making, and metallurgy, very few possibilities for noncontact flow measurement exist. The force measurement system is an important part of the Lorentz force velocimetry. With high resolution force measurement system makes the measurement of even lower conductivity possible. Up to date has the force measurement system continually being developed. At first the pendulum-like setups was used (Figure 5). One of the experimental facilities consists of two high power (410 mT) magnets made of NdFeB suspended by thin wires on both side of channel thereby creating magnetic field perpendicular to the fluid flow, here deflection is measured by interferometer system,. The second setup consists of state-of-art weighting balance system (Figure 6) from which is being hanged optimized magnets on the base of Halbach array system. While the total mass of both magnet systems are equal (1 kg), this system induces 3 times higher system response due to arrangement of individual elements in the array and its interaction with predefined fluid profile. Here use of very sensitive force measuring devices is desirable, since flow velocity is being converted from the very tiny detected Lorentz Force. This force in combination with unavoidable dead weight F G {\displaystyle F_{G}} of the magnet ( F G = m ⋅ g {\displaystyle F_{G}=m\cdot g} ) is around F / F G = 10 − 7 {\displaystyle F/F_{G}=10^{-7}} . After that, the method of differential force measurement was developed. With this method two balance were used, one with magnet and the other is with same-weight-dummy. In this way the influence of environment would be reduced. 
Recently, it have been reported that the flow measurements by this method is possible for saltwater flows whose electrical conductivity is as small as 0.06 S/m (range of electrical conductivity of the regular water from tap). === Lorentz force sigmometry === Lorentz force sigmometry (LOFOS) is a contactless method for measuring the thermophysical properties of materials, no matter whether it is a fluid or a solid body. The precise measurements of electrical value, density, viscosity, thermal conductivity and surface tension of molten metals are in great importance in industry applications. One of the major problems in the experimental measurements of the thermophysical properties at high temperature (>1000 K) in the liquid state is the problem of chemical reaction between the hot fluid and the electrical probes. The basic equation for calculating the electrical conductivity is derived from the equation that links the mass flow rate m ˙ {\displaystyle {\dot {m}}} and Lorentz force F {\displaystyle F} generated by magnetic field in flow: where Σ = σ ρ {\displaystyle \Sigma ={\frac {\sigma }{\rho }}} is the specific electrical conductivity equals to the ratio of the electrical conductivity σ {\displaystyle \sigma } and the mass density of fluid ρ {\displaystyle \rho } . K {\displaystyle K} is a calibration factor that depends on the geometry of the LOFOS system. From equation above the cumulative mass during operating time is determined as where F ~ {\displaystyle {\tilde {F}}} is the integral of Lorentz force within the time process. From this equation and considering the specific electrical conductivity formula, one can derive the final equation to compute the electrical conductivity for the fluid, in the form === Time-of-flight Lorentz force velocimetry === Time-of-flight Lorentz force velocimetry, is intended for contactless determination of flow rate in conductive fluids. It can be successfully used even in case when such material properties as electrical conductivity or density are not precisely known under specific outer conditions. The last reason makes time-of-flight LFV especially important for industry application. According to time-of-flight LFV (Fig. 9) two coherent measurement systems are mounted on a channel one by one. The measurement is based on getting of cross-correlating function of signals, which are registered by two magnetic measurement's system. Every system consists of permanent magnet and force sensor, so inducing of Lorentz force and measurement of the reaction force are made simultaneously. Any cross-correlation function is useful only in case of qualitative difference between signals and for creating the difference in this case turbulent fluctuations are used. Before reaching of measurement zone of a channel liquid passes artificial vortex generator that induces strong disturbances in it. And when such fluctuation-vortex reaches magnetic field of measurement system we can observe a peak on its force-time characteristic while second system still measures stable flow. Then according to the time between peaks and the distance between measurement system observer can estimate mean velocity and, hence, flow rate of the liquid by equation: where D {\displaystyle D} is the distance between magnet system, τ {\displaystyle \tau } the time delay between recorded peaks, and k {\displaystyle k} is obtained experimentally for every specific liquid, as shown in figure 9. 
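The time-of-flight idea above can be illustrated with a small signal-processing sketch: two synthetic force signals, the second a delayed copy of the first, are cross-correlated, the lag of the correlation peak gives τ, and the mean velocity then follows as roughly D/τ (the empirical calibration factor k mentioned above is simply set to one here). The sample rate, magnet spacing, noise level, and the Gaussian "vortex" burst are all invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
fs, D, v_true = 1000.0, 0.2, 0.5            # sample rate (Hz), magnet spacing (m), m/s
t = np.arange(0.0, 5.0, 1.0/fs)
tau_true = D/v_true                         # 0.4 s true delay between the two systems

# A turbulent burst ("vortex") passing the first and then the second magnet system
burst = lambda t0: np.exp(-((t - t0)/0.05)**2)
s1 = burst(1.0)            + 0.02*rng.standard_normal(t.size)
s2 = burst(1.0 + tau_true) + 0.02*rng.standard_normal(t.size)

# Time delay from the peak of the cross-correlation function of the two signals
lags = np.arange(-t.size + 1, t.size)
tau = lags[np.argmax(np.correlate(s2 - s2.mean(), s1 - s1.mean(), "full"))]/fs
print(v_true, D/tau)                        # recovered mean velocity (k set to 1)
```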
== Lorentz force eddy current testing == A different, albeit physically closely related challenge is the detection of deeply lying flaws and inhomogeneities in electrically conducting solid materials. In the traditional version of eddy current testing an alternating (AC) magnetic field is used to induce eddy currents inside the material to be investigated. If the material contains a crack or flaw which make the spatial distribution of the electrical conductivity nonuniform, the path of the eddy currents is perturbed and the impedance of the coil which generates the AC magnetic field is modified. By measuring the impedance of this coil, a crack can hence be detected. Since the eddy currents are generated by an AC magnetic field, their penetration into the subsurface region of the material is limited by the skin effect. The applicability of the traditional version of eddy current testing is therefore limited to the analysis of the immediate vicinity of the surface of a material, usually of the order of one millimeter. Attempts to overcome this fundamental limitation using low frequency coils and superconducting magnetic field sensors have not led to widespread applications. A recent technique, referred to as Lorentz force eddy current testing (LET), exploits the advantages of applying DC magnetic fields and relative motion providing deep and relatively fast testing of electrically conducting materials. In principle, LET represents a modification of the traditional eddy current testing from which it differs in two aspects, namely (i) how eddy currents are induced and (ii) how their perturbation is detected. In LET eddy currents are generated by providing the relative motion between the conductor under test and a permanent magnet (see figure 10). If the magnet is passing by a defect, the Lorentz force acting on it shows a distortion whose detection is the key for the LET working principle. If the object is free of defects, the resulting Lorentz force remains constant. == Advantages & Limitations == The advantages of LFV are LFV is a non-contact techniques of flow rate measurement. LFV can be successfully applied for aggressive and high-temperature fluids like liquid metals. Mean flow rate or mean velocity of fluid can be obtained without depending on flow's inhomogeneities and zones of turbulence. The limitations of the LFV are Necessity of temperature control of measurement system because of strong dependence of magnet's magnetic field on temperature. High temperature could cause irretrievable loss of the magnetic properties of permanent magnet (Curie temperature). Restriction of measurement zone by permanent magnet's dimensions. Necessity of liquid level's control in case of work with open channel. Rapid decay of the magnetic fields give rise to tiny forces on the magnet system. == See also == Magnetohydrodynamics Lorentz force == References == == External links == Official web page of Lorentz Force Velocimetry and Lorentz Force Eddy Current Testing Group
Wikipedia/Lorentz_force_velocimetry
The Cauchy momentum equation is a vector partial differential equation put forth by Augustin-Louis Cauchy that describes the non-relativistic momentum transport in any continuum. == Main equation == In convective (or Lagrangian) form the Cauchy momentum equation is written as: D u D t = 1 ρ ∇ ⋅ σ + f {\displaystyle {\frac {D\mathbf {u} }{Dt}}={\frac {1}{\rho }}\nabla \cdot {\boldsymbol {\sigma }}+\mathbf {f} } where u {\displaystyle \mathbf {u} } is the flow velocity vector field, which depends on time and space, (unit: m / s {\displaystyle \mathrm {m/s} } ) t {\displaystyle t} is time, (unit: s {\displaystyle \mathrm {s} } ) D u D t {\displaystyle {\frac {D\mathbf {u} }{Dt}}} is the material derivative of u {\displaystyle \mathbf {u} } , equal to ∂ t u + u ⋅ ∇ u {\displaystyle \partial _{t}\mathbf {u} +\mathbf {u} \cdot \nabla \mathbf {u} } , (unit: m / s 2 {\displaystyle \mathrm {m/s^{2}} } ) ρ {\displaystyle \rho } is the density at a given point of the continuum (for which the continuity equation holds), (unit: k g / m 3 {\displaystyle \mathrm {kg/m^{3}} } ) σ {\displaystyle {\boldsymbol {\sigma }}} is the stress tensor, (unit: P a = N / m 2 = k g ⋅ m − 1 ⋅ s − 2 {\displaystyle \mathrm {Pa=N/m^{2}=kg\cdot m^{-1}\cdot s^{-2}} } ) f = [ f x f y f z ] {\displaystyle \mathbf {f} ={\begin{bmatrix}f_{x}\\f_{y}\\f_{z}\end{bmatrix}}} is a vector containing all of the accelerations caused by body forces (sometimes simply gravitational acceleration), (unit: m / s 2 {\displaystyle \mathrm {m/s^{2}} } ) ∇ ⋅ σ = [ ∂ σ x x ∂ x + ∂ σ y x ∂ y + ∂ σ z x ∂ z ∂ σ x y ∂ x + ∂ σ y y ∂ y + ∂ σ z y ∂ z ∂ σ x z ∂ x + ∂ σ y z ∂ y + ∂ σ z z ∂ z ] {\displaystyle \nabla \cdot {\boldsymbol {\sigma }}={\begin{bmatrix}{\dfrac {\partial \sigma _{xx}}{\partial x}}+{\dfrac {\partial \sigma _{yx}}{\partial y}}+{\dfrac {\partial \sigma _{zx}}{\partial z}}\\{\dfrac {\partial \sigma _{xy}}{\partial x}}+{\dfrac {\partial \sigma _{yy}}{\partial y}}+{\dfrac {\partial \sigma _{zy}}{\partial z}}\\{\dfrac {\partial \sigma _{xz}}{\partial x}}+{\dfrac {\partial \sigma _{yz}}{\partial y}}+{\dfrac {\partial \sigma _{zz}}{\partial z}}\\\end{bmatrix}}} is the divergence of stress tensor. (unit: P a / m = k g ⋅ m − 2 ⋅ s − 2 {\displaystyle \mathrm {Pa/m=kg\cdot m^{-2}\cdot s^{-2}} } ) Commonly used SI units are given in parentheses although the equations are general in nature and other units can be entered into them or units can be removed at all by nondimensionalization. Note that only we use column vectors (in the Cartesian coordinate system) above for clarity, but the equation is written using physical components (which are neither covariants ("column") nor contravariants ("row") ). However, if we chose a non-orthogonal curvilinear coordinate system, then we should calculate and write equations in covariant ("row vectors") or contravariant ("column vectors") form. After an appropriate change of variables, it can also be written in conservation form: ∂ j ∂ t + ∇ ⋅ F = s {\displaystyle {\frac {\partial \mathbf {j} }{\partial t}}+\nabla \cdot \mathbf {F} =\mathbf {s} } where j is the momentum density at a given space-time point, F is the flux associated to the momentum density, and s contains all of the body forces per unit volume. == Differential derivation == Let us start with the generalized momentum conservation principle which can be written as follows: "The change in system momentum is proportional to the resulting force acting on this system". 
It is expressed by the formula: p ( t + Δ t ) − p ( t ) = Δ t F ¯ {\displaystyle \mathbf {p} (t+\Delta t)-\mathbf {p} (t)=\Delta t{\bar {\mathbf {F} }}} where p ( t ) {\displaystyle \mathbf {p} (t)} is momentum at time t, and F ¯ {\displaystyle {\bar {\mathbf {F} }}} is force averaged over Δ t {\displaystyle \Delta t} . After dividing by Δ t {\displaystyle \Delta t} and passing to the limit Δ t → 0 {\displaystyle \Delta t\to 0} we get (derivative): d p d t = F {\displaystyle {\frac {d\mathbf {p} }{dt}}=\mathbf {F} } Let us analyse each side of the equation above. === Right side === We split the forces into body forces F m {\displaystyle \mathbf {F} _{m}} and surface forces F p {\displaystyle \mathbf {F} _{p}} F = F p + F m {\displaystyle \mathbf {F} =\mathbf {F} _{p}+\mathbf {F} _{m}} Surface forces act on walls of the cubic fluid element. For each wall, the X component of these forces was marked in the figure with a cubic element (in the form of a product of stress and surface area e.g. − σ x x d y d z {\displaystyle -\sigma _{xx}\,dy\,dz} with units P a ⋅ m ⋅ m = N m 2 ⋅ m 2 = N {\textstyle \mathrm {Pa\cdot m\cdot m={\frac {N}{m^{2}}}\cdot m^{2}=N} } ). Adding forces (their X components) acting on each of the cube walls, we get: F p x = ( σ x x + ∂ σ x x ∂ x d x ) d y d z − σ x x d y d z + ( σ y x + ∂ σ y x ∂ y d y ) d x d z − σ y x d x d z + ( σ z x + ∂ σ z x ∂ z d z ) d x d y − σ z x d x d y {\displaystyle F_{p}^{x}=\left(\sigma _{xx}+{\frac {\partial \sigma _{xx}}{\partial x}}dx\right)dy\,dz-\sigma _{xx}dy\,dz+\left(\sigma _{yx}+{\frac {\partial \sigma _{yx}}{\partial y}}dy\right)dx\,dz-\sigma _{yx}dx\,dz+\left(\sigma _{zx}+{\frac {\partial \sigma _{zx}}{\partial z}}dz\right)dx\,dy-\sigma _{zx}dx\,dy} After ordering F p x {\displaystyle F_{p}^{x}} and performing similar reasoning for components F p y , F p z {\displaystyle F_{p}^{y},F_{p}^{z}} (they have not been shown in the figure, but these would be vectors parallel to the Y and Z axes, respectively) we get: F p x = ∂ σ x x ∂ x d x d y d z + ∂ σ y x ∂ y d y d x d z + ∂ σ z x ∂ z d z d x d y F p y = ∂ σ x y ∂ x d x d y d z + ∂ σ y y ∂ y d y d x d z + ∂ σ z y ∂ z d z d x d y F p z = ∂ σ x z ∂ x d x d y d z + ∂ σ y z ∂ y d y d x d z + ∂ σ z z ∂ z d z d x d y {\displaystyle {\begin{aligned}F_{p}^{x}&={\frac {\partial \sigma _{xx}}{\partial x}}\,dx\,dy\,dz+{\frac {\partial \sigma _{yx}}{\partial y}}\,dy\,dx\,dz+{\frac {\partial \sigma _{zx}}{\partial z}}\,dz\,dx\,dy\\[6pt]F_{p}^{y}&={\frac {\partial \sigma _{xy}}{\partial x}}\,dx\,dy\,dz+{\frac {\partial \sigma _{yy}}{\partial y}}\,dy\,dx\,dz+{\frac {\partial \sigma _{zy}}{\partial z}}\,dz\,dx\,dy\\[6pt]F_{p}^{z}&={\frac {\partial \sigma _{xz}}{\partial x}}\,dx\,dy\,dz+{\frac {\partial \sigma _{yz}}{\partial y}}\,dy\,dx\,dz+{\frac {\partial \sigma _{zz}}{\partial z}}\,dz\,dx\,dy{\vphantom {\begin{matrix}\\\\\end{matrix}}}\end{aligned}}} We can then write it in the symbolic operational form: F p = ( ∇ ⋅ σ ) d x d y d z {\displaystyle \mathbf {F} _{p}=(\nabla \cdot {\boldsymbol {\sigma }})\,dx\,dy\,dz} There are mass forces acting on the inside of the control volume. We can write them using the acceleration field f {\displaystyle \mathbf {f} } (e.g. 
gravitational acceleration): F m = f ρ d x d y d z {\displaystyle \mathbf {F} _{m}=\mathbf {f} \rho \,dx\,dy\,dz} === Left side === Let us calculate the momentum of the cube: p = u m = u ρ d x d y d z {\displaystyle \mathbf {p} =\mathbf {u} m=\mathbf {u} \rho \,dx\,dy\,dz} Because we assume that the tested mass (the cube) m = ρ d x d y d z {\displaystyle m=\rho \,dx\,dy\,dz} is constant in time, we have d p d t = d u d t ρ d x d y d z {\displaystyle {\frac {d\mathbf {p} }{dt}}={\frac {d\mathbf {u} }{dt}}\rho \,dx\,dy\,dz} === Left and right side comparison === We have d p d t = F {\displaystyle {\frac {d\mathbf {p} }{dt}}=\mathbf {F} } then d p d t = F p + F m {\displaystyle {\frac {d\mathbf {p} }{dt}}=\mathbf {F} _{p}+\mathbf {F} _{m}} then d u d t ρ d x d y d z = ( ∇ ⋅ σ ) d x d y d z + f ρ d x d y d z {\displaystyle {\frac {d\mathbf {u} }{dt}}\rho \,dx\,dy\,dz=(\nabla \cdot {\boldsymbol {\sigma }})dx\,dy\,dz+\mathbf {f} \rho \,dx\,dy\,dz} Divide both sides by ρ d x d y d z {\displaystyle \rho \,dx\,dy\,dz} , and because d u d t = D u D t {\textstyle {\frac {d\mathbf {u} }{dt}}={\frac {D\mathbf {u} }{Dt}}} we get: D u D t = 1 ρ ∇ ⋅ σ + f {\displaystyle {\frac {D\mathbf {u} }{Dt}}={\frac {1}{\rho }}\nabla \cdot {\boldsymbol {\sigma }}+\mathbf {f} } which finishes the derivation. == Integral derivation == Applying Newton's second law (ith component) to a control volume in the continuum being modeled gives: m a i = F i {\displaystyle ma_{i}=F_{i}} Then, based on the Reynolds transport theorem and using material derivative notation, one can write ∫ Ω ρ D u i D t d V = ∫ Ω ∇ j σ i j d V + ∫ Ω ρ f i d V ∫ Ω ( ρ D u i D t − ∇ j σ i j − ρ f i ) d V = 0 ρ D u i D t − ∇ j σ i j − ρ f i = 0 D u i D t − ∇ j σ i j ρ − f i = 0 {\displaystyle {\begin{aligned}\int _{\Omega }\rho {\frac {Du_{i}}{Dt}}\,dV&=\int _{\Omega }\nabla _{j}\sigma _{i}^{j}\,dV+\int _{\Omega }\rho f_{i}\,dV\\\int _{\Omega }\left(\rho {\frac {Du_{i}}{Dt}}-\nabla _{j}\sigma _{i}^{j}-\rho f_{i}\right)\,dV&=0\\\rho {\frac {Du_{i}}{Dt}}-\nabla _{j}\sigma _{i}^{j}-\rho f_{i}&=0\\{\frac {Du_{i}}{Dt}}-{\frac {\nabla _{j}\sigma _{i}^{j}}{\rho }}-f_{i}&=0\end{aligned}}} where Ω represents the control volume. Since this equation must hold for any control volume, it must be true that the integrand is zero; from this, the Cauchy momentum equation follows. The main step (not done above) in deriving this equation is establishing that the derivative of the stress tensor is one of the forces that constitute Fi. == Conservation form == The Cauchy momentum equation can also be put in the following form: ∂ j ∂ t + ∇ ⋅ F = s {\displaystyle {\frac {\partial \mathbf {j} }{\partial t}}+\nabla \cdot \mathbf {F} =\mathbf {s} } simply by defining: j = ρ u F = ρ u ⊗ u − σ s = ρ f {\displaystyle {\begin{aligned}{\mathbf {j} }&=\rho \mathbf {u} \\{\mathbf {F} }&=\rho \mathbf {u} \otimes \mathbf {u} -{\boldsymbol {\sigma }}\\{\mathbf {s} }&=\rho \mathbf {f} \end{aligned}}} where j is the momentum density at the point considered in the continuum (for which the continuity equation holds), F is the flux associated with the momentum density, and s contains all of the body forces per unit volume. u ⊗ u is the outer product of the velocity with itself. Here j and s have the same number of dimensions N as the flow speed and the body acceleration, while F, being a tensor, has N². In the Eulerian forms it is apparent that the assumption of no deviatoric stress brings the Cauchy equations to the Euler equations. == Convective acceleration == A significant feature of the Navier–Stokes equations is the presence of convective acceleration: the effect of time-independent acceleration of a flow with respect to space. 
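As a minimal symbolic sketch of this term (Python with SymPy; the velocity field is an arbitrary illustrative choice, not one from the text), a steady flow that speeds up along x has a nonzero, nonlinear convective acceleration (u ⋅ ∇)u even though ∂u/∂t = 0:
import sympy as sp

x, y, z, U0 = sp.symbols('x y z U0')
u = sp.Matrix([U0 * (1 + x), 0, 0])   # steady flow that accelerates along x

J = u.jacobian([x, y, z])             # J[i, j] = d u_i / d x_j
convective = J * u                    # component i: u_j d u_i / d x_j = [(u . grad) u]_i
print(convective)                     # Matrix([[U0**2*(x + 1)], [0], [0]])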
While individual continuum particles indeed experience time dependent acceleration, the convective acceleration of the flow field is a spatial effect, one example being fluid speeding up in a nozzle. Regardless of what kind of continuum is being dealt with, convective acceleration is a nonlinear effect. Convective acceleration is present in most flows (exceptions include one-dimensional incompressible flow), but its dynamic effect is disregarded in creeping flow (also called Stokes flow). Convective acceleration is represented by the nonlinear quantity u ⋅ ∇u, which may be interpreted either as (u ⋅ ∇)u or as u ⋅ (∇u), with ∇u the tensor derivative of the velocity vector u. Both interpretations give the same result. === Advection operator vs tensor derivative === The convective acceleration (u ⋅ ∇)u can be thought of as the advection operator u ⋅ ∇ acting on the velocity field u. This contrasts with the expression in terms of tensor derivative ∇u, which is the component-wise derivative of the velocity vector defined by [∇u]mi = ∂m vi, so that [ u ⋅ ( ∇ u ) ] i = ∑ m v m ∂ m v i = [ ( u ⋅ ∇ ) u ] i . {\displaystyle \left[\mathbf {u} \cdot \left(\nabla \mathbf {u} \right)\right]_{i}=\sum _{m}v_{m}\partial _{m}v_{i}=\left[(\mathbf {u} \cdot \nabla )\mathbf {u} \right]_{i}\,.} === Lamb form === The vector calculus identity of the cross product of a curl holds: v × ( ∇ × a ) = ∇ a ( v ⋅ a ) − v ⋅ ∇ a {\displaystyle \mathbf {v} \times \left(\nabla \times \mathbf {a} \right)=\nabla _{a}\left(\mathbf {v} \cdot \mathbf {a} \right)-\mathbf {v} \cdot \nabla \mathbf {a} } where the Feynman subscript notation ∇a is used, which means the subscripted gradient operates only on the factor a. Lamb in his famous classical book Hydrodynamics (1895), used this identity to change the convective term of the flow velocity in rotational form, i.e. without a tensor derivative: u ⋅ ∇ u = ∇ ( ‖ u ‖ 2 2 ) + ( ∇ × u ) × u {\displaystyle \mathbf {u} \cdot \nabla \mathbf {u} =\nabla \left({\frac {\|\mathbf {u} \|^{2}}{2}}\right)+\left(\nabla \times \mathbf {u} \right)\times \mathbf {u} } where the vector l = ( ∇ × u ) × u {\displaystyle \mathbf {l} =\left(\nabla \times \mathbf {u} \right)\times \mathbf {u} } is called the Lamb vector. 
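The Lamb form can be checked mechanically; the following sketch (Python with SymPy; the polynomial test field is an arbitrary choice) verifies that u ⋅ ∇u − ∇(‖u‖²/2) − (∇ × u) × u vanishes identically:
from sympy import simplify
from sympy.vector import CoordSys3D, gradient, curl

N = CoordSys3D('N')
x, y, z = N.x, N.y, N.z
u = x*y*N.i + y*z**2*N.j + (x + z)*N.k       # arbitrary smooth test field

def conv(u, f):                               # (u . grad) f for a scalar f
    return u.dot(gradient(f))

lhs = conv(u, u.dot(N.i))*N.i + conv(u, u.dot(N.j))*N.j + conv(u, u.dot(N.k))*N.k
rhs = gradient(u.dot(u) / 2) + curl(u).cross(u)

for e in (N.i, N.j, N.k):
    print(simplify((lhs - rhs).dot(e)))       # prints 0 three times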
The Cauchy momentum equation becomes: ∂ u ∂ t + 1 2 ∇ ( u 2 ) + ( ∇ × u ) × u = 1 ρ ∇ ⋅ σ + f {\displaystyle {\frac {\partial \mathbf {u} }{\partial t}}+{\frac {1}{2}}\nabla \left(u^{2}\right)+(\nabla \times \mathbf {u} )\times \mathbf {u} ={\frac {1}{\rho }}\nabla \cdot {\boldsymbol {\sigma }}+\mathbf {f} } Using the identity: ∇ ⋅ ( σ ρ ) = 1 ρ ∇ ⋅ σ − 1 ρ 2 σ ⋅ ∇ ρ {\displaystyle \nabla \cdot \left({\frac {\boldsymbol {\sigma }}{\rho }}\right)={\frac {1}{\rho }}\nabla \cdot {\boldsymbol {\sigma }}-{\frac {1}{\rho ^{2}}}{\boldsymbol {\sigma }}\cdot \nabla \rho } the Cauchy equation becomes: ∇ ⋅ ( 1 2 u 2 − σ ρ ) − f = 1 ρ 2 σ ⋅ ∇ ρ + u × ( ∇ × u ) − ∂ u ∂ t {\displaystyle \nabla \cdot \left({\frac {1}{2}}u^{2}-{\frac {\boldsymbol {\sigma }}{\rho }}\right)-\mathbf {f} ={\frac {1}{\rho ^{2}}}{\boldsymbol {\sigma }}\cdot \nabla \rho +\mathbf {u} \times (\nabla \times \mathbf {u} )-{\frac {\partial \mathbf {u} }{\partial t}}} In fact, in case of an external conservative field, by defining its potential φ: ∇ ⋅ ( 1 2 u 2 + ϕ − σ ρ ) = 1 ρ 2 σ ⋅ ∇ ρ + u × ( ∇ × u ) − ∂ u ∂ t {\displaystyle \nabla \cdot \left({\frac {1}{2}}u^{2}+\phi -{\frac {\boldsymbol {\sigma }}{\rho }}\right)={\frac {1}{\rho ^{2}}}{\boldsymbol {\sigma }}\cdot \nabla \rho +\mathbf {u} \times (\nabla \times \mathbf {u} )-{\frac {\partial \mathbf {u} }{\partial t}}} In case of a steady flow the time derivative of the flow velocity disappears, so the momentum equation becomes: ∇ ⋅ ( 1 2 u 2 + ϕ − σ ρ ) = 1 ρ 2 σ ⋅ ∇ ρ + u × ( ∇ × u ) {\displaystyle \nabla \cdot \left({\frac {1}{2}}u^{2}+\phi -{\frac {\boldsymbol {\sigma }}{\rho }}\right)={\frac {1}{\rho ^{2}}}{\boldsymbol {\sigma }}\cdot \nabla \rho +\mathbf {u} \times (\nabla \times \mathbf {u} )} And by projecting the momentum equation on the flow direction, i.e. along a streamline, the cross product disappears due to a vector calculus identity of the triple scalar product: u ⋅ ∇ ⋅ ( 1 2 u 2 + ϕ − σ ρ ) = 1 ρ 2 u ⋅ ( σ ⋅ ∇ ρ ) {\displaystyle \mathbf {u} \cdot \nabla \cdot \left({\frac {1}{2}}u^{2}+\phi -{\frac {\boldsymbol {\sigma }}{\rho }}\right)={\frac {1}{\rho ^{2}}}\mathbf {u} \cdot ({\boldsymbol {\sigma }}\cdot \nabla \rho )} If the stress tensor is isotropic, then only the pressure enters: σ = − p I {\displaystyle {\boldsymbol {\sigma }}=-p\mathbf {I} } (where I is the identity tensor), and the Euler momentum equation in the steady incompressible case becomes: u ⋅ ∇ ( 1 2 u 2 + ϕ + p ρ ) + p ρ 2 u ⋅ ∇ ρ = 0 {\displaystyle \mathbf {u} \cdot \nabla \left({\frac {1}{2}}u^{2}+\phi +{\frac {p}{\rho }}\right)+{\frac {p}{\rho ^{2}}}\mathbf {u} \cdot \nabla \rho =0} In the steady incompressible case the mass equation is simply: u ⋅ ∇ ρ = 0 , {\displaystyle \mathbf {u} \cdot \nabla \rho =0\,,} that is, the mass conservation for a steady incompressible flow states that the density along a streamline is constant. 
This leads to a considerable simplification of the Euler momentum equation: u ⋅ ∇ ( 1 2 u 2 + ϕ + p ρ ) = 0 {\displaystyle \mathbf {u} \cdot \nabla \left({\frac {1}{2}}u^{2}+\phi +{\frac {p}{\rho }}\right)=0} The convenience of defining the total head for an inviscid liquid flow is now apparent: b l ≡ 1 2 u 2 + ϕ + p ρ , {\displaystyle b_{l}\equiv {\frac {1}{2}}u^{2}+\phi +{\frac {p}{\rho }}\,,} in fact, the above equation can be simply written as: u ⋅ ∇ b l = 0 {\displaystyle \mathbf {u} \cdot \nabla b_{l}=0} That is, the momentum balance for a steady inviscid and incompressible flow in an external conservative field states that the total head along a streamline is constant. === Irrotational flows === The Lamb form is also useful in irrotational flow, where the curl of the velocity (called vorticity) ω = ∇ × u is equal to zero. In that case, the convection term in D u / D t {\displaystyle D\mathbf {u} /Dt} reduces to u ⋅ ∇ u = ∇ ( ‖ u ‖ 2 2 ) . {\displaystyle \mathbf {u} \cdot \nabla \mathbf {u} =\nabla \left({\frac {\|\mathbf {u} \|^{2}}{2}}\right).} == Stresses == The effect of stress in the continuum flow is represented by the ∇p and ∇ ⋅ τ terms; these are gradients of surface forces, analogous to stresses in a solid. Here ∇p is the pressure gradient and arises from the isotropic part of the Cauchy stress tensor. This part is given by the normal stresses that occur in almost all situations. The anisotropic part of the stress tensor gives rise to ∇ ⋅ τ, which usually describes viscous forces; for incompressible flow, this is only a shear effect. Thus, τ is the deviatoric stress tensor, and the stress tensor is equal to: σ = − p I + τ {\displaystyle {\boldsymbol {\sigma }}=-p\mathbf {I} +{\boldsymbol {\tau }}} where I is the identity matrix in the space considered and τ the shear tensor. All non-relativistic momentum conservation equations, such as the Navier–Stokes equation, can be derived by beginning with the Cauchy momentum equation and specifying the stress tensor through a constitutive relation. By expressing the shear tensor in terms of viscosity and fluid velocity, and assuming constant density and viscosity, the Cauchy momentum equation will lead to the Navier–Stokes equations. By assuming inviscid flow, the Navier–Stokes equations can further simplify to the Euler equations. The divergence of the stress tensor can be written as ∇ ⋅ σ = − ∇ p + ∇ ⋅ τ . {\displaystyle \nabla \cdot {\boldsymbol {\sigma }}=-\nabla p+\nabla \cdot {\boldsymbol {\tau }}.} The effect of the pressure gradient on the flow is to accelerate the flow in the direction from high pressure to low pressure. As written in the Cauchy momentum equation, the stress terms p and τ are yet unknown, so this equation alone cannot be used to solve problems. Besides the equations of motion—Newton's second law—a force model is needed relating the stresses to the flow motion. For this reason, assumptions based on natural observations are often applied to specify the stresses in terms of the other flow variables, such as velocity and density. == External forces == The vector field f represents body forces per unit mass. Typically, these consist of only gravity acceleration, but may include others, such as electromagnetic forces. In non-inertial coordinate frames, other "inertial accelerations" associated with rotating coordinates may arise. Often, these forces may be represented as the gradient of some scalar quantity χ, with f = ∇χ in which case they are called conservative forces. 
Gravity in the z direction, for example, is the gradient of −ρgz. Because pressure from such gravitation arises only as a gradient, we may include it in the pressure term as a body force h = p − χ. The pressure and force terms on the right-hand side of the Navier–Stokes equation become − ∇ p + f = − ∇ p + ∇ χ = − ∇ ( p − χ ) = − ∇ h . {\displaystyle -\nabla p+\mathbf {f} =-\nabla p+\nabla \chi =-\nabla \left(p-\chi \right)=-\nabla h.} It is also possible to include external influences into the stress term σ {\displaystyle {\boldsymbol {\sigma }}} rather than the body force term. This may even include antisymmetric stresses (inputs of angular momentum), in contrast to the usually symmetrical internal contributions to the stress tensor. == Nondimensionalisation == In order to make the equations dimensionless, a characteristic length r0 and a characteristic velocity u0 need to be defined. These should be chosen such that the dimensionless variables are all of order one. The following dimensionless variables are thus obtained: ρ ∗ ≡ ρ ρ 0 u ∗ ≡ u u 0 r ∗ ≡ r r 0 t ∗ ≡ u 0 r 0 t ∇ ∗ ≡ r 0 ∇ f ∗ ≡ f f 0 p ∗ ≡ p p 0 τ ∗ ≡ τ τ 0 {\displaystyle {\begin{aligned}\rho ^{*}&\equiv {\frac {\rho }{\rho _{0}}}&u^{*}&\equiv {\frac {u}{u_{0}}}&r^{*}&\equiv {\frac {r}{r_{0}}}&t^{*}&\equiv {\frac {u_{0}}{r_{0}}}t\\[6pt]\nabla ^{*}&\equiv r_{0}\nabla &\mathbf {f} ^{*}&\equiv {\frac {\mathbf {f} }{f_{0}}}&p^{*}&\equiv {\frac {p}{p_{0}}}&{\boldsymbol {\tau }}^{*}&\equiv {\frac {\boldsymbol {\tau }}{\tau _{0}}}\end{aligned}}} Substitution of these inverted relations in the Cauchy momentum equation yields: ρ 0 u 0 2 r 0 ∂ ρ ∗ u ∗ ∂ t ∗ + ∇ ∗ r 0 ⋅ ( ρ 0 u 0 2 ρ ∗ u ∗ ⊗ u ∗ + p 0 p ∗ ) = − τ 0 r 0 ∇ ∗ ⋅ τ ∗ + f 0 f ∗ {\displaystyle {\frac {\rho _{0}u_{0}^{2}}{r_{0}}}{\frac {\partial \rho ^{*}\mathbf {u} ^{*}}{\partial t^{*}}}+{\frac {\nabla ^{*}}{r_{0}}}\cdot \left(\rho _{0}u_{0}^{2}\rho ^{*}\mathbf {u} ^{*}\otimes \mathbf {u} ^{*}+p_{0}p^{*}\right)=-{\frac {\tau _{0}}{r_{0}}}\nabla ^{*}\cdot {\boldsymbol {\tau }}^{*}+f_{0}\mathbf {f} ^{*}} and by dividing by the first coefficient: ∂ ρ ∗ u ∗ ∂ t ∗ + ∇ ∗ ⋅ ( ρ ∗ u ∗ ⊗ u ∗ + p 0 ρ 0 u 0 2 p ∗ ) = − τ 0 ρ 0 u 0 2 ∇ ∗ ⋅ τ ∗ + f 0 r 0 u 0 2 f ∗ {\displaystyle {\frac {\partial \mathbf {\rho } ^{*}u^{*}}{\partial t^{*}}}+\nabla ^{*}\cdot \left(\rho ^{*}\mathbf {u} ^{*}\otimes \mathbf {u} ^{*}+{\frac {p_{0}}{\rho _{0}u_{0}^{2}}}p^{*}\right)=-{\frac {\tau _{0}}{\rho _{0}u_{0}^{2}}}\nabla ^{*}\cdot {\boldsymbol {\tau }}^{*}+{\frac {f_{0}r_{0}}{u_{0}^{2}}}\mathbf {f} ^{*}} Now defining the Froude number: F r = u 0 2 f 0 r 0 , {\displaystyle \mathrm {Fr} ={\frac {u_{0}^{2}}{f_{0}r_{0}}},} the Euler number: E u = p 0 ρ 0 u 0 2 , {\displaystyle \mathrm {Eu} ={\frac {p_{0}}{\rho _{0}u_{0}^{2}}},} and the coefficient of skin-friction, usually referred to as the 'drag coefficient' in the field of aerodynamics: C f = 2 τ 0 ρ 0 u 0 2 , {\displaystyle C_{\mathrm {f} }={\frac {2\tau _{0}}{\rho _{0}u_{0}^{2}}},} and by passing to the conservative variables, i.e. the momentum density and the force density: j = ρ u g = ρ f {\displaystyle {\begin{aligned}\mathbf {j} &=\rho \mathbf {u} \\\mathbf {g} &=\rho \mathbf {f} \end{aligned}}} the equations are finally expressed (now omitting the indices). Cauchy equations in the Froude limit Fr → ∞ (corresponding to a negligible external field) are named free Cauchy equations and can eventually be written as conservation equations. 
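As a small numerical illustration of these dimensionless groups (Python; the reference values are arbitrary assumptions for a water-like flow, not values from the text):
rho0, u0, r0 = 1000.0, 1.0, 0.1      # assumed reference density, velocity, length (SI)
p0, tau0, f0 = 101325.0, 1.0, 9.81   # assumed reference pressure, shear stress, body force

Fr = u0**2 / (f0 * r0)               # Froude number
Eu = p0 / (rho0 * u0**2)             # Euler number
Cf = 2 * tau0 / (rho0 * u0**2)       # skin-friction (drag) coefficient

print(Fr, Eu, Cf)                    # roughly 1.02, 101.3, 0.002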
The limit of high Froude numbers (low external field) is thus notable for such equations and is studied with perturbation theory. Finally in convective form the equations are: == 3D explicit convective forms == === Cartesian 3D coordinates === For asymmetric stress tensors, equations in general take the following forms: x : ∂ u x ∂ t + u x ∂ u x ∂ x + u y ∂ u x ∂ y + u z ∂ u x ∂ z = 1 ρ ( ∂ σ x x ∂ x + ∂ σ y x ∂ y + ∂ σ z x ∂ z ) + f x y : ∂ u y ∂ t + u x ∂ u y ∂ x + u y ∂ u y ∂ y + u z ∂ u y ∂ z = 1 ρ ( ∂ σ x y ∂ x + ∂ σ y y ∂ y + ∂ σ z y ∂ z ) + f y z : ∂ u z ∂ t + u x ∂ u z ∂ x + u y ∂ u z ∂ y + u z ∂ u z ∂ z = 1 ρ ( ∂ σ x z ∂ x + ∂ σ y z ∂ y + ∂ σ z z ∂ z ) + f z {\displaystyle {\begin{aligned}x&:&{\frac {\partial u_{x}}{\partial t}}+u_{x}{\frac {\partial u_{x}}{\partial x}}+u_{y}{\frac {\partial u_{x}}{\partial y}}+u_{z}{\frac {\partial u_{x}}{\partial z}}&={\frac {1}{\rho }}\left({\frac {\partial \sigma _{xx}}{\partial x}}+{\frac {\partial \sigma _{yx}}{\partial y}}+{\frac {\partial \sigma _{zx}}{\partial z}}\right)+f_{x}\\[8pt]y&:&{\frac {\partial u_{y}}{\partial t}}+u_{x}{\frac {\partial u_{y}}{\partial x}}+u_{y}{\frac {\partial u_{y}}{\partial y}}+u_{z}{\frac {\partial u_{y}}{\partial z}}&={\frac {1}{\rho }}\left({\frac {\partial \sigma _{xy}}{\partial x}}+{\frac {\partial \sigma _{yy}}{\partial y}}+{\frac {\partial \sigma _{zy}}{\partial z}}\right)+f_{y}\\[8pt]z&:&{\frac {\partial u_{z}}{\partial t}}+u_{x}{\frac {\partial u_{z}}{\partial x}}+u_{y}{\frac {\partial u_{z}}{\partial y}}+u_{z}{\frac {\partial u_{z}}{\partial z}}&={\frac {1}{\rho }}\left({\frac {\partial \sigma _{xz}}{\partial x}}+{\frac {\partial \sigma _{yz}}{\partial y}}+{\frac {\partial \sigma _{zz}}{\partial z}}\right)+f_{z}\end{aligned}}} === Cylindrical 3D coordinates === Below, we write the main equation in pressure-tau form assuming that the stress tensor is symmetrical ( σ i j = σ j i ⟹ τ i j = τ j i {\displaystyle \sigma _{ij}=\sigma _{ji}\Longrightarrow \tau _{ij}=\tau _{ji}} ): r : ∂ u r ∂ t + u r ∂ u r ∂ r + u ϕ r ∂ u r ∂ ϕ + u z ∂ u r ∂ z − u ϕ 2 r = − 1 ρ ∂ P ∂ r + 1 r ρ ∂ ( r τ r r ) ∂ r + 1 r ρ ∂ τ ϕ r ∂ ϕ + 1 ρ ∂ τ z r ∂ z − τ ϕ ϕ r ρ + f r ϕ : ∂ u ϕ ∂ t + u r ∂ u ϕ ∂ r + u ϕ r ∂ u ϕ ∂ ϕ + u z ∂ u ϕ ∂ z + u r u ϕ r = − 1 r ρ ∂ P ∂ ϕ + 1 r ρ ∂ τ ϕ ϕ ∂ ϕ + 1 r 2 ρ ∂ ( r 2 τ r ϕ ) ∂ r + 1 ρ ∂ τ z ϕ ∂ z + f ϕ z : ∂ u z ∂ t + u r ∂ u z ∂ r + u ϕ r ∂ u z ∂ ϕ + u z ∂ u z ∂ z = − 1 ρ ∂ P ∂ z + 1 ρ ∂ τ z z ∂ z + 1 r ρ ∂ τ ϕ z ∂ ϕ + 1 r ρ ∂ ( r τ r z ) ∂ r + f z {\displaystyle {\begin{aligned}r&:&{\frac {\partial u_{r}}{\partial t}}+u_{r}{\frac {\partial u_{r}}{\partial r}}+{\frac {u_{\phi }}{r}}{\frac {\partial u_{r}}{\partial \phi }}+u_{z}{\frac {\partial u_{r}}{\partial z}}-{\frac {u_{\phi }^{2}}{r}}&=-{\frac {1}{\rho }}{\frac {\partial P}{\partial r}}+{\frac {1}{r\rho }}{\frac {\partial \left(r\tau _{rr}\right)}{\partial r}}+{\frac {1}{r\rho }}{\frac {\partial \tau _{\phi r}}{\partial \phi }}+{\frac {1}{\rho }}{\frac {\partial \tau _{zr}}{\partial z}}-{\frac {\tau _{\phi \phi }}{r\rho }}+f_{r}\\[8pt]\phi &:&{\frac {\partial u_{\phi }}{\partial t}}+u_{r}{\frac {\partial u_{\phi }}{\partial r}}+{\frac {u_{\phi }}{r}}{\frac {\partial u_{\phi }}{\partial \phi }}+u_{z}{\frac {\partial u_{\phi }}{\partial z}}+{\frac {u_{r}u_{\phi }}{r}}&=-{\frac {1}{r\rho }}{\frac {\partial P}{\partial \phi }}+{\frac {1}{r\rho }}{\frac {\partial \tau _{\phi \phi }}{\partial \phi }}+{\frac {1}{r^{2}\rho }}{\frac {\partial \left(r^{2}\tau _{r\phi }\right)}{\partial r}}+{\frac {1}{\rho }}{\frac {\partial \tau _{z\phi }}{\partial 
z}}+f_{\phi }\\[8pt]z&:&{\frac {\partial u_{z}}{\partial t}}+u_{r}{\frac {\partial u_{z}}{\partial r}}+{\frac {u_{\phi }}{r}}{\frac {\partial u_{z}}{\partial \phi }}+u_{z}{\frac {\partial u_{z}}{\partial z}}&=-{\frac {1}{\rho }}{\frac {\partial P}{\partial z}}+{\frac {1}{\rho }}{\frac {\partial \tau _{zz}}{\partial z}}+{\frac {1}{r\rho }}{\frac {\partial \tau _{\phi z}}{\partial \phi }}+{\frac {1}{r\rho }}{\frac {\partial \left(r\tau _{rz}\right)}{\partial r}}+f_{z}\end{aligned}}} == See also == Euler equations (fluid dynamics) Navier–Stokes equations Burnett equations Chapman–Enskog expansion == Notes == == References ==
Wikipedia/Cauchy_momentum_equation
In magnetohydrodynamics, the induction equation is a partial differential equation that relates the magnetic field and velocity of an electrically conductive fluid such as a plasma. It can be derived from Maxwell's equations and Ohm's law, and plays a major role in plasma physics and astrophysics, especially in dynamo theory. == Mathematical statement == Maxwell's equations describing Faraday's and Ampère's laws read: ∇ × E = − ∂ B ∂ t , {\displaystyle \nabla \times \mathbf {E} =-{\partial \mathbf {B} \over \partial t},} and ∇ × B = μ 0 J , {\displaystyle \nabla \times \mathbf {B} =\mu _{0}\mathbf {J} ,} where: E {\displaystyle \mathbf {E} } is the electric field. B {\displaystyle \mathbf {B} } is the magnetic field. μ 0 {\displaystyle \mu _{0}} is the vacuum permeability. J {\displaystyle \mathbf {J} } is the electric current density. The displacement current can be neglected in a plasma as it is negligible compared to the current carried by the free charges. The only exception to this is for exceptionally high frequency phenomena: for example, for a plasma with a typical electrical conductivity of 10⁷ mho/m, the displacement current is smaller than the free current by a factor of 10³ for frequencies below 2×10¹⁴ Hz. The electric field can be related to the current density using Ohm's law: E + v × B = J / σ {\displaystyle \mathbf {E} +\mathbf {v} \times \mathbf {B} =\mathbf {J} /\sigma } where v {\displaystyle \mathbf {v} } is the velocity field. σ {\displaystyle \sigma } is the electric conductivity of the fluid. Combining these three equations, eliminating E {\displaystyle \mathbf {E} } and J {\displaystyle \mathbf {J} } , yields the induction equation for an electrically resistive fluid: ∂ B ∂ t = η ∇ 2 B + ∇ × ( v × B ) . {\displaystyle {\partial \mathbf {B} \over \partial t}=\eta \nabla ^{2}\mathbf {B} +\nabla \times (\mathbf {v} \times \mathbf {B} ).} Here η = 1 / μ 0 σ {\displaystyle \eta =1/\mu _{0}\sigma } is the magnetic diffusivity (in the literature, the electrical resistivity, defined as 1 / σ {\displaystyle 1/\sigma } , is often identified with the magnetic diffusivity). If the fluid moves with a typical speed V {\displaystyle V} and a typical length scale L {\displaystyle L} , then η ∇ 2 B ∼ η B L 2 , ∇ × ( v × B ) ∼ V B L . {\displaystyle \eta \nabla ^{2}\mathbf {B} \sim {\eta B \over L^{2}},\nabla \times (\mathbf {v} \times \mathbf {B} )\sim {VB \over L}.} The ratio of these quantities, which is a dimensionless parameter, is called the magnetic Reynolds number: R m = L V η . {\displaystyle R_{m}={LV \over \eta }.} == Perfectly-conducting limit == For a fluid with infinite electric conductivity, η → 0 {\displaystyle \eta \to 0} , the first term in the induction equation vanishes. This is equivalent to a very large magnetic Reynolds number. For example, it can be of order 10⁹ in a typical star. In this case, the fluid can be called a perfect or ideal fluid. So, the induction equation for an ideally conducting fluid such as most astrophysical plasmas is ∂ B ∂ t = ∇ × ( v × B ) . {\displaystyle {\partial \mathbf {B} \over \partial t}=\nabla \times (\mathbf {v} \times \mathbf {B} ).} This is taken to be a good approximation in dynamo theory, used to explain the magnetic field evolution in astrophysical environments such as stars, galaxies and accretion discs. 
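A short numerical sketch (Python; the length scale and flow speed are illustrative assumptions) using the conductivity of 10⁷ mho/m quoted above shows how large the magnetic Reynolds number easily becomes:
import math

mu0 = 4e-7 * math.pi          # vacuum permeability [H/m]
sigma = 1e7                   # electrical conductivity [S/m] (value quoted above)
eta = 1 / (mu0 * sigma)       # magnetic diffusivity [m^2/s], about 0.08

L, V = 1e7, 10.0              # assumed length scale [m] and flow speed [m/s]
Rm = L * V / eta
print(eta, Rm)                # Rm is of order 1e9, so the diffusive term is negligible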
== Convective limit == More generally, the equation for the perfectly-conducting limit applies in regions of large spatial scale rather than only in the limit of infinite electric conductivity (i.e., η → 0 {\displaystyle \eta \to 0} ), since a large length scale also makes the magnetic Reynolds number very large, such that the diffusion term can be neglected. This limit is called "ideal-MHD" and its most important theorem is Alfvén's theorem (also called the frozen-in flux theorem). == Diffusive limit == For very small magnetic Reynolds numbers, the diffusive term overcomes the convective term. For example, in an electrically resistive fluid with large values of η {\displaystyle \eta } , the magnetic field is diffused away very fast, and Alfvén's theorem cannot be applied. This means that magnetic energy is dissipated into heat and other forms of energy. The induction equation then reads ∂ B ∂ t = η ∇ 2 B . {\displaystyle {\partial \mathbf {B} \over \partial t}=\eta \nabla ^{2}\mathbf {B} .} It is common to define a dissipation time scale τ d = L 2 / η {\displaystyle \tau _{d}=L^{2}/\eta } , which is the time scale for the dissipation of magnetic energy over a length scale L {\displaystyle L} . == See also == Alfvén's Theorem Magnetohydrodynamics Maxwell's equations == References ==
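The dissipation time scale can be estimated in the same spirit (Python; the conductivity is the value quoted earlier and the length scales are illustrative assumptions):
import math

eta = 1 / (4e-7 * math.pi * 1e7)     # magnetic diffusivity for sigma = 1e7 S/m [m^2/s]

for L in (0.1, 1.0, 10.0):           # assumed length scales [m]
    tau_d = L**2 / eta               # dissipation time scale
    print(f"L = {L:5.1f} m  ->  tau_d = {tau_d:9.1f} s")
# small-scale fields decay within seconds to minutes without fluid motion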
Wikipedia/Induction_equation
The following are important identities involving derivatives and integrals in vector calculus. == Operator notation == === Gradient === For a function f ( x , y , z ) {\displaystyle f(x,y,z)} in three-dimensional Cartesian coordinate variables, the gradient is the vector field: grad ⁡ ( f ) = ∇ f = ( ∂ ∂ x , ∂ ∂ y , ∂ ∂ z ) f = ∂ f ∂ x i + ∂ f ∂ y j + ∂ f ∂ z k {\displaystyle \operatorname {grad} (f)=\nabla f={\begin{pmatrix}\displaystyle {\frac {\partial }{\partial x}},\ {\frac {\partial }{\partial y}},\ {\frac {\partial }{\partial z}}\end{pmatrix}}f={\frac {\partial f}{\partial x}}\mathbf {i} +{\frac {\partial f}{\partial y}}\mathbf {j} +{\frac {\partial f}{\partial z}}\mathbf {k} } where i, j, k are the standard unit vectors for the x, y, z-axes. More generally, for a function of n variables ψ ( x 1 , … , x n ) {\displaystyle \psi (x_{1},\ldots ,x_{n})} , also called a scalar field, the gradient is the vector field: ∇ ψ = ( ∂ ∂ x 1 , … , ∂ ∂ x n ) ψ = ∂ ψ ∂ x 1 e 1 + ⋯ + ∂ ψ ∂ x n e n {\displaystyle \nabla \psi ={\begin{pmatrix}\displaystyle {\frac {\partial }{\partial x_{1}}},\ldots ,{\frac {\partial }{\partial x_{n}}}\end{pmatrix}}\psi ={\frac {\partial \psi }{\partial x_{1}}}\mathbf {e} _{1}+\dots +{\frac {\partial \psi }{\partial x_{n}}}\mathbf {e} _{n}} where e i ( i = 1 , 2 , . . . , n ) {\displaystyle \mathbf {e} _{i}\,(i=1,2,...,n)} are mutually orthogonal unit vectors. As the name implies, the gradient is proportional to, and points in the direction of, the function's most rapid (positive) change. For a vector field A = ( A 1 , … , A n ) {\displaystyle \mathbf {A} =\left(A_{1},\ldots ,A_{n}\right)} , also called a tensor field of order 1, the gradient or total derivative is the n × n Jacobian matrix: J A = d A = ( ∇ A ) T = ( ∂ A i ∂ x j ) i j . {\displaystyle \mathbf {J} _{\mathbf {A} }=d\mathbf {A} =(\nabla \!\mathbf {A} )^{\textsf {T}}=\left({\frac {\partial A_{i}}{\partial x_{j}}}\right)_{\!ij}.} For a tensor field T {\displaystyle \mathbf {T} } of any order k, the gradient grad ⁡ ( T ) = d T = ( ∇ T ) T {\displaystyle \operatorname {grad} (\mathbf {T} )=d\mathbf {T} =(\nabla \mathbf {T} )^{\textsf {T}}} is a tensor field of order k + 1. For a tensor field T {\displaystyle \mathbf {T} } of order k > 0, the tensor field ∇ T {\displaystyle \nabla \mathbf {T} } of order k + 1 is defined by the recursive relation ( ∇ T ) ⋅ C = ∇ ( T ⋅ C ) {\displaystyle (\nabla \mathbf {T} )\cdot \mathbf {C} =\nabla (\mathbf {T} \cdot \mathbf {C} )} where C {\displaystyle \mathbf {C} } is an arbitrary constant vector. === Divergence === In Cartesian coordinates, the divergence of a continuously differentiable vector field F = F x i + F y j + F z k {\displaystyle \mathbf {F} =F_{x}\mathbf {i} +F_{y}\mathbf {j} +F_{z}\mathbf {k} } is the scalar-valued function: div ⁡ F = ∇ ⋅ F = ( ∂ ∂ x , ∂ ∂ y , ∂ ∂ z ) ⋅ ( F x , F y , F z ) = ∂ F x ∂ x + ∂ F y ∂ y + ∂ F z ∂ z . {\displaystyle \operatorname {div} \mathbf {F} =\nabla \cdot \mathbf {F} ={\begin{pmatrix}\displaystyle {\frac {\partial }{\partial x}},\ {\frac {\partial }{\partial y}},\ {\frac {\partial }{\partial z}}\end{pmatrix}}\cdot {\begin{pmatrix}F_{x},\ F_{y},\ F_{z}\end{pmatrix}}={\frac {\partial F_{x}}{\partial x}}+{\frac {\partial F_{y}}{\partial y}}+{\frac {\partial F_{z}}{\partial z}}.} As the name implies, the divergence is a (local) measure of the degree to which vectors in the field diverge. 
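These definitions translate directly into a computer algebra system; a minimal sketch (Python with SymPy; the fields are arbitrary examples) computes the gradient of a scalar field and the divergence of a vector field:
from sympy.vector import CoordSys3D, gradient, divergence

N = CoordSys3D('N')
x, y, z = N.x, N.y, N.z

f = x**2 * y + z                     # example scalar field
F = x*y*N.i + y**2*N.j + z*x*N.k     # example vector field

print(gradient(f))                   # 2xy along i, x**2 along j, 1 along k
print(divergence(F))                 # y + 2*y + x = x + 3*y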
The divergence of a tensor field T {\displaystyle \mathbf {T} } of non-zero order k is written as div ⁡ ( T ) = ∇ ⋅ T {\displaystyle \operatorname {div} (\mathbf {T} )=\nabla \cdot \mathbf {T} } , a contraction of a tensor field of order k − 1. Specifically, the divergence of a vector is a scalar. The divergence of a higher-order tensor field may be found by decomposing the tensor field into a sum of outer products and using the identity, ∇ ⋅ ( A ⊗ T ) = T ( ∇ ⋅ A ) + ( A ⋅ ∇ ) T {\displaystyle \nabla \cdot \left(\mathbf {A} \otimes \mathbf {T} \right)=\mathbf {T} (\nabla \cdot \mathbf {A} )+(\mathbf {A} \cdot \nabla )\mathbf {T} } where A ⋅ ∇ {\displaystyle \mathbf {A} \cdot \nabla } is the directional derivative in the direction of A {\displaystyle \mathbf {A} } multiplied by its magnitude. Specifically, for the outer product of two vectors, ∇ ⋅ ( A B T ) = B ( ∇ ⋅ A ) + ( A ⋅ ∇ ) B . {\displaystyle \nabla \cdot \left(\mathbf {A} \mathbf {B} ^{\textsf {T}}\right)=\mathbf {B} (\nabla \cdot \mathbf {A} )+(\mathbf {A} \cdot \nabla )\mathbf {B} .} For a tensor field T {\displaystyle \mathbf {T} } of order k > 1, the tensor field ∇ ⋅ T {\displaystyle \nabla \cdot \mathbf {T} } of order k − 1 is defined by the recursive relation ( ∇ ⋅ T ) ⋅ C = ∇ ⋅ ( T ⋅ C ) {\displaystyle (\nabla \cdot \mathbf {T} )\cdot \mathbf {C} =\nabla \cdot (\mathbf {T} \cdot \mathbf {C} )} where C {\displaystyle \mathbf {C} } is an arbitrary constant vector. === Curl === In Cartesian coordinates, for F = F x i + F y j + F z k {\displaystyle \mathbf {F} =F_{x}\mathbf {i} +F_{y}\mathbf {j} +F_{z}\mathbf {k} } the curl is the vector field: curl ⁡ F = ∇ × F = ( ∂ ∂ x , ∂ ∂ y , ∂ ∂ z ) × ( F x , F y , F z ) = | i j k ∂ ∂ x ∂ ∂ y ∂ ∂ z F x F y F z | = ( ∂ F z ∂ y − ∂ F y ∂ z ) i + ( ∂ F x ∂ z − ∂ F z ∂ x ) j + ( ∂ F y ∂ x − ∂ F x ∂ y ) k {\displaystyle {\begin{aligned}\operatorname {curl} \mathbf {F} &=\nabla \times \mathbf {F} ={\begin{pmatrix}\displaystyle {\frac {\partial }{\partial x}},\ {\frac {\partial }{\partial y}},\ {\frac {\partial }{\partial z}}\end{pmatrix}}\times {\begin{pmatrix}F_{x},\ F_{y},\ F_{z}\end{pmatrix}}={\begin{vmatrix}\mathbf {i} &\mathbf {j} &\mathbf {k} \\{\frac {\partial }{\partial x}}&{\frac {\partial }{\partial y}}&{\frac {\partial }{\partial z}}\\F_{x}&F_{y}&F_{z}\end{vmatrix}}\\[1em]&=\left({\frac {\partial F_{z}}{\partial y}}-{\frac {\partial F_{y}}{\partial z}}\right)\mathbf {i} +\left({\frac {\partial F_{x}}{\partial z}}-{\frac {\partial F_{z}}{\partial x}}\right)\mathbf {j} +\left({\frac {\partial F_{y}}{\partial x}}-{\frac {\partial F_{x}}{\partial y}}\right)\mathbf {k} \end{aligned}}} where i, j, and k are the unit vectors for the x-, y-, and z-axes, respectively. As the name implies the curl is a measure of how much nearby vectors tend in a circular direction. In Einstein notation, the vector field F = ( F 1 , F 2 , F 3 ) {\displaystyle \mathbf {F} ={\begin{pmatrix}F_{1},\ F_{2},\ F_{3}\end{pmatrix}}} has curl given by: ∇ × F = ε i j k e i ∂ F k ∂ x j {\displaystyle \nabla \times \mathbf {F} =\varepsilon ^{ijk}\mathbf {e} _{i}{\frac {\partial F_{k}}{\partial x_{j}}}} where ε {\displaystyle \varepsilon } = ±1 or 0 is the Levi-Civita parity symbol. 
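The Einstein-notation formula for the curl translates directly into array code; the following sketch (Python with NumPy; the grid, resolution and test field are arbitrary choices) contracts the Levi-Civita symbol with finite-difference partial derivatives and recovers ∇ × F = (0, 0, 2) for F = (−y, x, 0):
import numpy as np

eps = np.zeros((3, 3, 3))                        # Levi-Civita symbol
eps[0, 1, 2] = eps[1, 2, 0] = eps[2, 0, 1] = 1
eps[0, 2, 1] = eps[2, 1, 0] = eps[1, 0, 2] = -1

n, h = 33, 2.0 / 32                              # grid resolution and spacing
ax = np.linspace(-1, 1, n)
x, y, z = np.meshgrid(ax, ax, ax, indexing='ij')
F = np.stack([-y, x, np.zeros_like(x)])          # test field with curl (0, 0, 2)

# dF[j, k] = d F_k / d x_j, by centred finite differences
dF = np.stack([np.stack(np.gradient(F[k], h)) for k in range(3)], axis=1)

curlF = np.einsum('ijk,jk...->i...', eps, dF)    # (curl F)_i = eps_ijk d_j F_k
print(curlF[:, n // 2, n // 2, n // 2])          # ~ [0, 0, 2]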
For a tensor field T {\displaystyle \mathbf {T} } of order k > 1, the tensor field ∇ × T {\displaystyle \nabla \times \mathbf {T} } of order k is defined by the recursive relation ( ∇ × T ) ⋅ C = ∇ × ( T ⋅ C ) {\displaystyle (\nabla \times \mathbf {T} )\cdot \mathbf {C} =\nabla \times (\mathbf {T} \cdot \mathbf {C} )} where C {\displaystyle \mathbf {C} } is an arbitrary constant vector. A tensor field of order greater than one may be decomposed into a sum of outer products, and then the following identity may be used: ∇ × ( A ⊗ T ) = ( ∇ × A ) ⊗ T − A × ( ∇ T ) . {\displaystyle \nabla \times \left(\mathbf {A} \otimes \mathbf {T} \right)=(\nabla \times \mathbf {A} )\otimes \mathbf {T} -\mathbf {A} \times (\nabla \mathbf {T} ).} Specifically, for the outer product of two vectors, ∇ × ( A B T ) = ( ∇ × A ) B T − A × ( ∇ B ) . {\displaystyle \nabla \times \left(\mathbf {A} \mathbf {B} ^{\textsf {T}}\right)=(\nabla \times \mathbf {A} )\mathbf {B} ^{\textsf {T}}-\mathbf {A} \times (\nabla \mathbf {B} ).} === Laplacian === In Cartesian coordinates, the Laplacian of a function f ( x , y , z ) {\displaystyle f(x,y,z)} is Δ f = ∇ 2 f = ( ∇ ⋅ ∇ ) f = ∂ 2 f ∂ x 2 + ∂ 2 f ∂ y 2 + ∂ 2 f ∂ z 2 . {\displaystyle \Delta f=\nabla ^{2}\!f=(\nabla \cdot \nabla )f={\frac {\partial ^{2}\!f}{\partial x^{2}}}+{\frac {\partial ^{2}\!f}{\partial y^{2}}}+{\frac {\partial ^{2}\!f}{\partial z^{2}}}.} The Laplacian is a measure of how much a function is changing over a small sphere centered at the point. When the Laplacian is equal to 0, the function is called a harmonic function. That is, Δ f = 0. {\displaystyle \Delta f=0.} For a tensor field, T {\displaystyle \mathbf {T} } , the Laplacian is generally written as: Δ T = ∇ 2 T = ( ∇ ⋅ ∇ ) T {\displaystyle \Delta \mathbf {T} =\nabla ^{2}\mathbf {T} =(\nabla \cdot \nabla )\mathbf {T} } and is a tensor field of the same order. For a tensor field T {\displaystyle \mathbf {T} } of order k > 0, the tensor field ∇ 2 T {\displaystyle \nabla ^{2}\mathbf {T} } of order k is defined by the recursive relation ( ∇ 2 T ) ⋅ C = ∇ 2 ( T ⋅ C ) {\displaystyle \left(\nabla ^{2}\mathbf {T} \right)\cdot \mathbf {C} =\nabla ^{2}(\mathbf {T} \cdot \mathbf {C} )} where C {\displaystyle \mathbf {C} } is an arbitrary constant vector. === Special notations === In Feynman subscript notation, ∇ B ( A ⋅ B ) = A × ( ∇ × B ) + ( A ⋅ ∇ ) B {\displaystyle \nabla _{\mathbf {B} }\!\left(\mathbf {A{\cdot }B} \right)=\mathbf {A} {\times }\!\left(\nabla {\times }\mathbf {B} \right)+\left(\mathbf {A} {\cdot }\nabla \right)\mathbf {B} } where the notation ∇B means the subscripted gradient operates on only the factor B. More general but similar is the Hestenes overdot notation in geometric algebra. The above identity is then expressed as: ∇ ˙ ( A ⋅ B ˙ ) = A × ( ∇ × B ) + ( A ⋅ ∇ ) B {\displaystyle {\dot {\nabla }}\left(\mathbf {A} {\cdot }{\dot {\mathbf {B} }}\right)=\mathbf {A} {\times }\!\left(\nabla {\times }\mathbf {B} \right)+\left(\mathbf {A} {\cdot }\nabla \right)\mathbf {B} } where overdots define the scope of the vector derivative. The dotted vector, in this case B, is differentiated, while the (undotted) A is held constant. 
The utility of the Feynman subscript notation lies in its use in the derivation of vector and tensor derivative identities, as in the following example which uses the algebraic identity C⋅(A×B) = (C×A)⋅B: ∇ ⋅ ( A × B ) = ∇ A ⋅ ( A × B ) + ∇ B ⋅ ( A × B ) = ( ∇ A × A ) ⋅ B + ( ∇ B × A ) ⋅ B = ( ∇ A × A ) ⋅ B − ( A × ∇ B ) ⋅ B = ( ∇ A × A ) ⋅ B − A ⋅ ( ∇ B × B ) = ( ∇ × A ) ⋅ B − A ⋅ ( ∇ × B ) {\displaystyle {\begin{aligned}\nabla \cdot (\mathbf {A} \times \mathbf {B} )&=\nabla _{\mathbf {A} }\cdot (\mathbf {A} \times \mathbf {B} )+\nabla _{\mathbf {B} }\cdot (\mathbf {A} \times \mathbf {B} )\\[2pt]&=(\nabla _{\mathbf {A} }\times \mathbf {A} )\cdot \mathbf {B} +(\nabla _{\mathbf {B} }\times \mathbf {A} )\cdot \mathbf {B} \\[2pt]&=(\nabla _{\mathbf {A} }\times \mathbf {A} )\cdot \mathbf {B} -(\mathbf {A} \times \nabla _{\mathbf {B} })\cdot \mathbf {B} \\[2pt]&=(\nabla _{\mathbf {A} }\times \mathbf {A} )\cdot \mathbf {B} -\mathbf {A} \cdot (\nabla _{\mathbf {B} }\times \mathbf {B} )\\[2pt]&=(\nabla \times \mathbf {A} )\cdot \mathbf {B} -\mathbf {A} \cdot (\nabla \times \mathbf {B} )\end{aligned}}} An alternative method is to use the Cartesian components of the del operator as follows (with implicit summation over the index i): ∇ ⋅ ( A × B ) = e i ∂ i ⋅ ( A × B ) = e i ⋅ ∂ i ( A × B ) = e i ⋅ ( ∂ i A × B + A × ∂ i B ) = e i ⋅ ( ∂ i A × B ) + e i ⋅ ( A × ∂ i B ) = ( e i × ∂ i A ) ⋅ B + ( e i × A ) ⋅ ∂ i B = ( e i × ∂ i A ) ⋅ B − ( A × e i ) ⋅ ∂ i B = ( e i × ∂ i A ) ⋅ B − A ⋅ ( e i × ∂ i B ) = ( e i ∂ i × A ) ⋅ B − A ⋅ ( e i ∂ i × B ) = ( ∇ × A ) ⋅ B − A ⋅ ( ∇ × B ) {\displaystyle {\begin{aligned}\nabla \cdot (\mathbf {A} \times \mathbf {B} )&=\mathbf {e} _{i}\partial _{i}\cdot (\mathbf {A} \times \mathbf {B} )\\[2pt]&=\mathbf {e} _{i}\cdot \partial _{i}(\mathbf {A} \times \mathbf {B} )\\[2pt]&=\mathbf {e} _{i}\cdot (\partial _{i}\mathbf {A} \times \mathbf {B} +\mathbf {A} \times \partial _{i}\mathbf {B} )\\[2pt]&=\mathbf {e} _{i}\cdot (\partial _{i}\mathbf {A} \times \mathbf {B} )+\mathbf {e} _{i}\cdot (\mathbf {A} \times \partial _{i}\mathbf {B} )\\[2pt]&=(\mathbf {e} _{i}\times \partial _{i}\mathbf {A} )\cdot \mathbf {B} +(\mathbf {e} _{i}\times \mathbf {A} )\cdot \partial _{i}\mathbf {B} \\[2pt]&=(\mathbf {e} _{i}\times \partial _{i}\mathbf {A} )\cdot \mathbf {B} -(\mathbf {A} \times \mathbf {e} _{i})\cdot \partial _{i}\mathbf {B} \\[2pt]&=(\mathbf {e} _{i}\times \partial _{i}\mathbf {A} )\cdot \mathbf {B} -\mathbf {A} \cdot (\mathbf {e} _{i}\times \partial _{i}\mathbf {B} )\\[2pt]&=(\mathbf {e} _{i}\partial _{i}\times \mathbf {A} )\cdot \mathbf {B} -\mathbf {A} \cdot (\mathbf {e} _{i}\partial _{i}\times \mathbf {B} )\\[2pt]&=(\nabla \times \mathbf {A} )\cdot \mathbf {B} -\mathbf {A} \cdot (\nabla \times \mathbf {B} )\end{aligned}}} Another method of deriving vector and tensor derivative identities is to replace all occurrences of a vector in an algebraic identity by the del operator, provided that no variable occurs both inside and outside the scope of an operator or both inside the scope of one operator in a term and outside the scope of another operator in the same term (i.e., the operators must be nested). The validity of this rule follows from the validity of the Feynman method, for one may always substitute a subscripted del and then immediately drop the subscript under the condition of the rule. For example, from the identity A⋅(B×C) = (A×B)⋅C we may derive A⋅(∇×C) = (A×∇)⋅C but not ∇⋅(B×C) = (∇×B)⋅C, nor from A⋅(B×A) = 0 may we derive A⋅(∇×A) = 0. 
On the other hand, a subscripted del operates on all occurrences of the subscript in the term, so that A⋅(∇A×A) = ∇A⋅(A×A) = ∇⋅(A×A) = 0. Also, from A×(A×C) = A(A⋅C) − (A⋅A)C we may derive ∇×(∇×C) = ∇(∇⋅C) − ∇2C, but from (Aψ)⋅(Aφ) = (A⋅A)(ψφ) we may not derive (∇ψ)⋅(∇φ) = ∇2(ψφ). A subscript c on a quantity indicates that it is temporarily considered to be a constant. Since a constant is not a variable, when the substitution rule (see the preceding paragraph) is used it, unlike a variable, may be moved into or out of the scope of a del operator, as in the following example: ∇ ⋅ ( A × B ) = ∇ ⋅ ( A × B c ) + ∇ ⋅ ( A c × B ) = ∇ ⋅ ( A × B c ) − ∇ ⋅ ( B × A c ) = ( ∇ × A ) ⋅ B c − ( ∇ × B ) ⋅ A c = ( ∇ × A ) ⋅ B − ( ∇ × B ) ⋅ A {\displaystyle {\begin{aligned}\nabla \cdot (\mathbf {A} \times \mathbf {B} )&=\nabla \cdot (\mathbf {A} \times \mathbf {B} _{\mathrm {c} })+\nabla \cdot (\mathbf {A} _{\mathrm {c} }\times \mathbf {B} )\\[2pt]&=\nabla \cdot (\mathbf {A} \times \mathbf {B} _{\mathrm {c} })-\nabla \cdot (\mathbf {B} \times \mathbf {A} _{\mathrm {c} })\\[2pt]&=(\nabla \times \mathbf {A} )\cdot \mathbf {B} _{\mathrm {c} }-(\nabla \times \mathbf {B} )\cdot \mathbf {A} _{\mathrm {c} }\\[2pt]&=(\nabla \times \mathbf {A} )\cdot \mathbf {B} -(\nabla \times \mathbf {B} )\cdot \mathbf {A} \end{aligned}}} Another way to indicate that a quantity is a constant is to affix it as a subscript to the scope of a del operator, as follows: ∇ ( A ⋅ B ) A = A × ( ∇ × B ) + ( A ⋅ ∇ ) B {\displaystyle \nabla \left(\mathbf {A{\cdot }B} \right)_{\mathbf {A} }=\mathbf {A} {\times }\!\left(\nabla {\times }\mathbf {B} \right)+\left(\mathbf {A} {\cdot }\nabla \right)\mathbf {B} } For the remainder of this article, Feynman subscript notation will be used where appropriate. == First derivative identities == For scalar fields ψ {\displaystyle \psi } , ϕ {\displaystyle \phi } and vector fields A {\displaystyle \mathbf {A} } , B {\displaystyle \mathbf {B} } , we have the following derivative identities. === Distributive properties === ∇ ( ψ + ϕ ) = ∇ ψ + ∇ ϕ ∇ ( A + B ) = ∇ A + ∇ B ∇ ⋅ ( A + B ) = ∇ ⋅ A + ∇ ⋅ B ∇ × ( A + B ) = ∇ × A + ∇ × B {\displaystyle {\begin{aligned}\nabla (\psi +\phi )&=\nabla \psi +\nabla \phi \\\nabla (\mathbf {A} +\mathbf {B} )&=\nabla \mathbf {A} +\nabla \mathbf {B} \\\nabla \cdot (\mathbf {A} +\mathbf {B} )&=\nabla \cdot \mathbf {A} +\nabla \cdot \mathbf {B} \\\nabla \times (\mathbf {A} +\mathbf {B} )&=\nabla \times \mathbf {A} +\nabla \times \mathbf {B} \end{aligned}}} === First derivative associative properties === ( A ⋅ ∇ ) ψ = A ⋅ ( ∇ ψ ) ( A ⋅ ∇ ) B = A ⋅ ( ∇ B ) ( A × ∇ ) ψ = A × ( ∇ ψ ) ( A × ∇ ) B = A × ( ∇ B ) {\displaystyle {\begin{aligned}(\mathbf {A} \cdot \nabla )\psi &=\mathbf {A} \cdot (\nabla \psi )\\(\mathbf {A} \cdot \nabla )\mathbf {B} &=\mathbf {A} \cdot (\nabla \mathbf {B} )\\(\mathbf {A} \times \nabla )\psi &=\mathbf {A} \times (\nabla \psi )\\(\mathbf {A} \times \nabla )\mathbf {B} &=\mathbf {A} \times (\nabla \mathbf {B} )\end{aligned}}} === Product rule for multiplication by a scalar === We have the following generalizations of the product rule in single-variable calculus. 
∇ ( ψ ϕ ) = ϕ ∇ ψ + ψ ∇ ϕ ∇ ( ψ A ) = ( ∇ ψ ) A T + ψ ∇ A = ∇ ψ ⊗ A + ψ ∇ A ∇ ⋅ ( ψ A ) = ψ ∇ ⋅ A + ( ∇ ψ ) ⋅ A ∇ × ( ψ A ) = ψ ∇ × A + ( ∇ ψ ) × A ∇ 2 ( ψ ϕ ) = ψ ∇ 2 ϕ + 2 ∇ ψ ⋅ ∇ ϕ + ϕ ∇ 2 ψ {\displaystyle {\begin{aligned}\nabla (\psi \phi )&=\phi \,\nabla \psi +\psi \,\nabla \phi \\\nabla (\psi \mathbf {A} )&=(\nabla \psi )\mathbf {A} ^{\textsf {T}}+\psi \nabla \mathbf {A} \ =\ \nabla \psi \otimes \mathbf {A} +\psi \,\nabla \mathbf {A} \\\nabla \cdot (\psi \mathbf {A} )&=\psi \,\nabla {\cdot }\mathbf {A} +(\nabla \psi )\,{\cdot }\mathbf {A} \\\nabla {\times }(\psi \mathbf {A} )&=\psi \,\nabla {\times }\mathbf {A} +(\nabla \psi ){\times }\mathbf {A} \\\nabla ^{2}(\psi \phi )&=\psi \,\nabla ^{2\!}\phi +2\,\nabla \!\psi \cdot \!\nabla \phi +\phi \,\nabla ^{2\!}\psi \end{aligned}}} === Quotient rule for division by a scalar === ∇ ( ψ ϕ ) = ϕ ∇ ψ − ψ ∇ ϕ ϕ 2 ∇ ( A ϕ ) = ϕ ∇ A − ∇ ϕ ⊗ A ϕ 2 ∇ ⋅ ( A ϕ ) = ϕ ∇ ⋅ A − ∇ ϕ ⋅ A ϕ 2 ∇ × ( A ϕ ) = ϕ ∇ × A − ∇ ϕ × A ϕ 2 ∇ 2 ( ψ ϕ ) = ϕ ∇ 2 ψ − 2 ϕ ∇ ( ψ ϕ ) ⋅ ∇ ϕ − ψ ∇ 2 ϕ ϕ 2 {\displaystyle {\begin{aligned}\nabla \left({\frac {\psi }{\phi }}\right)&={\frac {\phi \,\nabla \psi -\psi \,\nabla \phi }{\phi ^{2}}}\\[1em]\nabla \left({\frac {\mathbf {A} }{\phi }}\right)&={\frac {\phi \,\nabla \mathbf {A} -\nabla \phi \otimes \mathbf {A} }{\phi ^{2}}}\\[1em]\nabla \cdot \left({\frac {\mathbf {A} }{\phi }}\right)&={\frac {\phi \,\nabla {\cdot }\mathbf {A} -\nabla \!\phi \cdot \mathbf {A} }{\phi ^{2}}}\\[1em]\nabla \times \left({\frac {\mathbf {A} }{\phi }}\right)&={\frac {\phi \,\nabla {\times }\mathbf {A} -\nabla \!\phi \,{\times }\,\mathbf {A} }{\phi ^{2}}}\\[1em]\nabla ^{2}\left({\frac {\psi }{\phi }}\right)&={\frac {\phi \,\nabla ^{2\!}\psi -2\,\phi \,\nabla \!\left({\frac {\psi }{\phi }}\right)\cdot \!\nabla \phi -\psi \,\nabla ^{2\!}\phi }{\phi ^{2}}}\end{aligned}}} === Chain rule === Let f ( x ) {\displaystyle f(x)} be a one-variable function from scalars to scalars, r ( t ) = ( x 1 ( t ) , … , x n ( t ) ) {\displaystyle \mathbf {r} (t)=(x_{1}(t),\ldots ,x_{n}(t))} a parametrized curve, ϕ : R n → R {\displaystyle \phi \!:\mathbb {R} ^{n}\to \mathbb {R} } a function from vectors to scalars, and A : R n → R n {\displaystyle \mathbf {A} \!:\mathbb {R} ^{n}\to \mathbb {R} ^{n}} a vector field. We have the following special cases of the multi-variable chain rule. 
∇ ( f ∘ ϕ ) = ( f ′ ∘ ϕ ) ∇ ϕ ( r ∘ f ) ′ = ( r ′ ∘ f ) f ′ ( ϕ ∘ r ) ′ = ( ∇ ϕ ∘ r ) ⋅ r ′ ( A ∘ r ) ′ = r ′ ⋅ ( ∇ A ∘ r ) ∇ ( ϕ ∘ A ) = ( ∇ A ) ⋅ ( ∇ ϕ ∘ A ) ∇ ⋅ ( r ∘ ϕ ) = ∇ ϕ ⋅ ( r ′ ∘ ϕ ) ∇ × ( r ∘ ϕ ) = ∇ ϕ × ( r ′ ∘ ϕ ) {\displaystyle {\begin{aligned}\nabla (f\circ \phi )&=\left(f'\circ \phi \right)\nabla \phi \\(\mathbf {r} \circ f)'&=(\mathbf {r} '\circ f)f'\\(\phi \circ \mathbf {r} )'&=(\nabla \phi \circ \mathbf {r} )\cdot \mathbf {r} '\\(\mathbf {A} \circ \mathbf {r} )'&=\mathbf {r} '\cdot (\nabla \mathbf {A} \circ \mathbf {r} )\\\nabla (\phi \circ \mathbf {A} )&=(\nabla \mathbf {A} )\cdot (\nabla \phi \circ \mathbf {A} )\\\nabla \cdot (\mathbf {r} \circ \phi )&=\nabla \phi \cdot (\mathbf {r} '\circ \phi )\\\nabla \times (\mathbf {r} \circ \phi )&=\nabla \phi \times (\mathbf {r} '\circ \phi )\end{aligned}}} For a vector transformation x : R n → R n {\displaystyle \mathbf {x} \!:\mathbb {R} ^{n}\to \mathbb {R} ^{n}} we have: ∇ ⋅ ( A ∘ x ) = t r ( ( ∇ x ) ⋅ ( ∇ A ∘ x ) ) {\displaystyle \nabla \cdot (\mathbf {A} \circ \mathbf {x} )=\mathrm {tr} \left((\nabla \mathbf {x} )\cdot (\nabla \mathbf {A} \circ \mathbf {x} )\right)} Here we take the trace of the dot product of two second-order tensors, which corresponds to the product of their matrices. === Dot product rule === ∇ ( A ⋅ B ) = ( A ⋅ ∇ ) B + ( B ⋅ ∇ ) A + A × ( ∇ × B ) + B × ( ∇ × A ) = A ⋅ J B + B ⋅ J A = ( ∇ B ) ⋅ A + ( ∇ A ) ⋅ B {\displaystyle {\begin{aligned}\nabla (\mathbf {A} \cdot \mathbf {B} )&\ =\ (\mathbf {A} \cdot \nabla )\mathbf {B} \,+\,(\mathbf {B} \cdot \nabla )\mathbf {A} \,+\,\mathbf {A} {\times }(\nabla {\times }\mathbf {B} )\,+\,\mathbf {B} {\times }(\nabla {\times }\mathbf {A} )\\&\ =\ \mathbf {A} \cdot \mathbf {J} _{\mathbf {B} }+\mathbf {B} \cdot \mathbf {J} _{\mathbf {A} }\ =\ (\nabla \mathbf {B} )\cdot \mathbf {A} \,+\,(\nabla \mathbf {A} )\cdot \mathbf {B} \end{aligned}}} where J A = ( ∇ A ) T = ( ∂ A i / ∂ x j ) i j {\displaystyle \mathbf {J} _{\mathbf {A} }=(\nabla \!\mathbf {A} )^{\textsf {T}}=(\partial A_{i}/\partial x_{j})_{ij}} denotes the Jacobian matrix of the vector field A = ( A 1 , … , A n ) {\displaystyle \mathbf {A} =(A_{1},\ldots ,A_{n})} . Alternatively, using Feynman subscript notation, ∇ ( A ⋅ B ) = ∇ A ( A ⋅ B ) + ∇ B ( A ⋅ B ) . {\displaystyle \nabla (\mathbf {A} \cdot \mathbf {B} )=\nabla _{\mathbf {A} }(\mathbf {A} \cdot \mathbf {B} )+\nabla _{\mathbf {B} }(\mathbf {A} \cdot \mathbf {B} )\ .} See these notes. As a special case, when A = B, 1 2 ∇ ( A ⋅ A ) = A ⋅ J A = ( ∇ A ) ⋅ A = ( A ⋅ ∇ ) A + A × ( ∇ × A ) = A ∇ A . {\displaystyle {\tfrac {1}{2}}\nabla \left(\mathbf {A} \cdot \mathbf {A} \right)\ =\ \mathbf {A} \cdot \mathbf {J} _{\mathbf {A} }\ =\ (\nabla \mathbf {A} )\cdot \mathbf {A} \ =\ (\mathbf {A} {\cdot }\nabla )\mathbf {A} \,+\,\mathbf {A} {\times }(\nabla {\times }\mathbf {A} )\ =\ A\nabla A.} The generalization of the dot product formula to Riemannian manifolds is a defining property of a Riemannian connection, which differentiates a vector field to give a vector-valued 1-form. 
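The dot product rule can be verified mechanically; a minimal sketch (Python with SymPy; the two polynomial fields are arbitrary examples) checks that ∇(A ⋅ B) − (A ⋅ ∇)B − (B ⋅ ∇)A − A × (∇ × B) − B × (∇ × A) is identically zero:
from sympy import simplify
from sympy.vector import CoordSys3D, gradient, curl

N = CoordSys3D('N')
x, y, z = N.x, N.y, N.z
A = x*y*N.i + z**2*N.j + y*N.k                # arbitrary test fields
B = z*N.i + x**2*N.j + x*y*z*N.k

def adv(V, W):                                # (V . grad) W, built component-wise
    return (V.dot(gradient(W.dot(N.i))) * N.i
            + V.dot(gradient(W.dot(N.j))) * N.j
            + V.dot(gradient(W.dot(N.k))) * N.k)

lhs = gradient(A.dot(B))
rhs = adv(A, B) + adv(B, A) + A.cross(curl(B)) + B.cross(curl(A))

for e in (N.i, N.j, N.k):
    print(simplify((lhs - rhs).dot(e)))       # prints 0 three times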
=== Cross product rule === ∇ ( A × B ) = ( ∇ A ) × B − ( ∇ B ) × A ∇ ⋅ ( A × B ) = ( ∇ × A ) ⋅ B − A ⋅ ( ∇ × B ) ∇ × ( A × B ) = A ( ∇ ⋅ B ) − B ( ∇ ⋅ A ) + ( B ⋅ ∇ ) A − ( A ⋅ ∇ ) B = A ( ∇ ⋅ B ) + ( B ⋅ ∇ ) A − ( B ( ∇ ⋅ A ) + ( A ⋅ ∇ ) B ) = ∇ ⋅ ( A B T ) − ∇ ⋅ ( B A T ) = ∇ ⋅ ( A B T − B A T ) A × ( ∇ × B ) = ∇ B ( A ⋅ B ) − ( A ⋅ ∇ ) B = A ⋅ J B − ( A ⋅ ∇ ) B = ( ∇ B ) ⋅ A − A ⋅ ( ∇ B ) = A ⋅ ( J B − J B T ) ( A × ∇ ) × B = ( ∇ B ) ⋅ A − A ( ∇ ⋅ B ) = A × ( ∇ × B ) + ( A ⋅ ∇ ) B − A ( ∇ ⋅ B ) ( A × ∇ ) ⋅ B = A ⋅ ( ∇ × B ) {\displaystyle {\begin{aligned}\nabla (\mathbf {A} \times \mathbf {B} )&\ =\ (\nabla \mathbf {A} )\times \mathbf {B} \,-\,(\nabla \mathbf {B} )\times \mathbf {A} \\[5pt]\nabla \cdot (\mathbf {A} \times \mathbf {B} )&\ =\ (\nabla {\times }\mathbf {A} )\cdot \mathbf {B} \,-\,\mathbf {A} \cdot (\nabla {\times }\mathbf {B} )\\[5pt]\nabla \times (\mathbf {A} \times \mathbf {B} )&\ =\ \mathbf {A} (\nabla {\cdot }\mathbf {B} )\,-\,\mathbf {B} (\nabla {\cdot }\mathbf {A} )\,+\,(\mathbf {B} {\cdot }\nabla )\mathbf {A} \,-\,(\mathbf {A} {\cdot }\nabla )\mathbf {B} \\[2pt]&\ =\ \mathbf {A} (\nabla {\cdot }\mathbf {B} )\,+\,(\mathbf {B} {\cdot }\nabla )\mathbf {A} \,-\,(\mathbf {B} (\nabla {\cdot }\mathbf {A} )\,+\,(\mathbf {A} {\cdot }\nabla )\mathbf {B} )\\[2pt]&\ =\ \nabla {\cdot }\left(\mathbf {A} \mathbf {B} ^{\textsf {T}}\right)\,-\,\nabla {\cdot }\left(\mathbf {B} \mathbf {A} ^{\textsf {T}}\right)\\[2pt]&\ =\ \nabla {\cdot }\left(\mathbf {A} \mathbf {B} ^{\textsf {T}}\,-\,\mathbf {B} \mathbf {A} ^{\textsf {T}}\right)\\[5pt]\mathbf {A} \times (\nabla \times \mathbf {B} )&\ =\ \nabla _{\mathbf {B} }(\mathbf {A} {\cdot }\mathbf {B} )\,-\,(\mathbf {A} {\cdot }\nabla )\mathbf {B} \\[2pt]&\ =\ \mathbf {A} \cdot \mathbf {J} _{\mathbf {B} }\,-\,(\mathbf {A} {\cdot }\nabla )\mathbf {B} \\[2pt]&\ =\ (\nabla \mathbf {B} )\cdot \mathbf {A} \,-\,\mathbf {A} \cdot (\nabla \mathbf {B} )\\[2pt]&\ =\ \mathbf {A} \cdot (\mathbf {J} _{\mathbf {B} }\,-\,\mathbf {J} _{\mathbf {B} }^{\textsf {T}})\\[5pt](\mathbf {A} \times \nabla )\times \mathbf {B} &\ =\ (\nabla \mathbf {B} )\cdot \mathbf {A} \,-\,\mathbf {A} (\nabla {\cdot }\mathbf {B} )\\[2pt]&\ =\ \mathbf {A} \times (\nabla \times \mathbf {B} )\,+\,(\mathbf {A} {\cdot }\nabla )\mathbf {B} \,-\,\mathbf {A} (\nabla {\cdot }\mathbf {B} )\\[5pt](\mathbf {A} \times \nabla )\cdot \mathbf {B} &\ =\ \mathbf {A} \cdot (\nabla {\times }\mathbf {B} )\end{aligned}}} Note that the matrix J B − J B T {\displaystyle \mathbf {J} _{\mathbf {B} }\,-\,\mathbf {J} _{\mathbf {B} }^{\textsf {T}}} is antisymmetric. == Second derivative identities == === Divergence of curl is zero === The divergence of the curl of any continuously twice-differentiable vector field A is always zero: ∇ ⋅ ( ∇ × A ) = 0 {\displaystyle \nabla \cdot (\nabla \times \mathbf {A} )=0} This is a special case of the vanishing of the square of the exterior derivative in the De Rham chain complex. === Divergence of gradient is Laplacian === The Laplacian of a scalar field is the divergence of its gradient: Δ ψ = ∇ 2 ψ = ∇ ⋅ ( ∇ ψ ) {\displaystyle \Delta \psi =\nabla ^{2}\psi =\nabla \cdot (\nabla \psi )} The result is a scalar quantity. === Divergence of divergence is not defined === The divergence of a vector field A is a scalar, and the divergence of a scalar quantity is undefined. Therefore, ∇ ⋅ ( ∇ ⋅ A ) is undefined. 
{\displaystyle \nabla \cdot (\nabla \cdot \mathbf {A} ){\text{ is undefined.}}} === Curl of gradient is zero === The curl of the gradient of any continuously twice-differentiable scalar field φ {\displaystyle \varphi } (i.e., differentiability class C 2 {\displaystyle C^{2}} ) is always the zero vector: ∇ × ( ∇ φ ) = 0 . {\displaystyle \nabla \times (\nabla \varphi )=\mathbf {0} .} It can be easily proved by expressing ∇ × ( ∇ φ ) {\displaystyle \nabla \times (\nabla \varphi )} in a Cartesian coordinate system with Schwarz's theorem (also called Clairaut's theorem on equality of mixed partials). This result is a special case of the vanishing of the square of the exterior derivative in the De Rham chain complex. === Curl of curl === ∇ × ( ∇ × A ) = ∇ ( ∇ ⋅ A ) − ∇ 2 A {\displaystyle \nabla \times \left(\nabla \times \mathbf {A} \right)\ =\ \nabla (\nabla {\cdot }\mathbf {A} )\,-\,\nabla ^{2\!}\mathbf {A} } Here ∇2 is the vector Laplacian operating on the vector field A. === Curl of divergence is not defined === The divergence of a vector field A is a scalar, and the curl of a scalar quantity is undefined. Therefore, ∇ × ( ∇ ⋅ A ) is undefined. {\displaystyle \nabla \times (\nabla \cdot \mathbf {A} ){\text{ is undefined.}}} === Second derivative associative properties === ( ∇ ⋅ ∇ ) ψ = ∇ ⋅ ( ∇ ψ ) = ∇ 2 ψ ( ∇ ⋅ ∇ ) A = ∇ ⋅ ( ∇ A ) = ∇ 2 A ( ∇ × ∇ ) ψ = ∇ × ( ∇ ψ ) = 0 ( ∇ × ∇ ) A = ∇ × ( ∇ A ) = 0 {\displaystyle {\begin{aligned}(\nabla \cdot \nabla )\psi &=\nabla \cdot (\nabla \psi )=\nabla ^{2}\psi \\(\nabla \cdot \nabla )\mathbf {A} &=\nabla \cdot (\nabla \mathbf {A} )=\nabla ^{2}\mathbf {A} \\(\nabla \times \nabla )\psi &=\nabla \times (\nabla \psi )=\mathbf {0} \\(\nabla \times \nabla )\mathbf {A} &=\nabla \times (\nabla \mathbf {A} )=\mathbf {0} \end{aligned}}} === A mnemonic === The figure to the right is a mnemonic for some of these identities. The abbreviations used are: D: divergence, C: curl, G: gradient, L: Laplacian, CC: curl of curl. Each arrow is labeled with the result of an identity, specifically, the result of applying the operator at the arrow's tail to the operator at its head. The blue circle in the middle means curl of curl exists, whereas the other two red circles (dashed) mean that DD and GG do not exist. 
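These second-derivative identities are likewise easy to confirm; a short sketch (Python with SymPy; the scalar and vector fields are arbitrary examples, and the vector Laplacian is built component-wise, which is valid in Cartesian coordinates) checks that the divergence of a curl and the curl of a gradient vanish, and that ∇ × (∇ × A) = ∇(∇ ⋅ A) − ∇²A holds component-wise:
from sympy import simplify
from sympy.vector import CoordSys3D, gradient, divergence, curl

N = CoordSys3D('N')
x, y, z = N.x, N.y, N.z
phi = x**2 * y * z                                   # arbitrary scalar field
A = y*z*N.i + x**3*N.j + x*y*z*N.k                   # arbitrary vector field

def vec_laplacian(V):                                # component-wise scalar Laplacian
    return (divergence(gradient(V.dot(N.i))) * N.i
            + divergence(gradient(V.dot(N.j))) * N.j
            + divergence(gradient(V.dot(N.k))) * N.k)

print(simplify(divergence(curl(A))))                 # 0
print(curl(gradient(phi)))                           # zero vector

diff = curl(curl(A)) - (gradient(divergence(A)) - vec_laplacian(A))
print(simplify(diff.dot(N.i)), simplify(diff.dot(N.j)), simplify(diff.dot(N.k)))  # 0 0 0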
== Summary of important identities == === Differentiation === ==== Gradient ==== ∇ ( ψ + ϕ ) = ∇ ψ + ∇ ϕ {\displaystyle \nabla (\psi +\phi )=\nabla \psi +\nabla \phi } ∇ ( ψ ϕ ) = ϕ ∇ ψ + ψ ∇ ϕ {\displaystyle \nabla (\psi \phi )=\phi \nabla \psi +\psi \nabla \phi } ∇ ( ψ A ) = ∇ ψ ⊗ A + ψ ∇ A {\displaystyle \nabla (\psi \mathbf {A} )=\nabla \psi \otimes \mathbf {A} +\psi \nabla \mathbf {A} } ∇ ( A ⋅ B ) = ( A ⋅ ∇ ) B + ( B ⋅ ∇ ) A + A × ( ∇ × B ) + B × ( ∇ × A ) {\displaystyle \nabla (\mathbf {A} \cdot \mathbf {B} )=(\mathbf {A} \cdot \nabla )\mathbf {B} +(\mathbf {B} \cdot \nabla )\mathbf {A} +\mathbf {A} \times (\nabla \times \mathbf {B} )+\mathbf {B} \times (\nabla \times \mathbf {A} )} ==== Divergence ==== ∇ ⋅ ( A + B ) = ∇ ⋅ A + ∇ ⋅ B {\displaystyle \nabla \cdot (\mathbf {A} +\mathbf {B} )=\nabla \cdot \mathbf {A} +\nabla \cdot \mathbf {B} } ∇ ⋅ ( ψ A ) = ψ ∇ ⋅ A + A ⋅ ∇ ψ {\displaystyle \nabla \cdot \left(\psi \mathbf {A} \right)=\psi \nabla \cdot \mathbf {A} +\mathbf {A} \cdot \nabla \psi } ∇ ⋅ ( A × B ) = ( ∇ × A ) ⋅ B − ( ∇ × B ) ⋅ A {\displaystyle \nabla \cdot \left(\mathbf {A} \times \mathbf {B} \right)=(\nabla \times \mathbf {A} )\cdot \mathbf {B} -(\nabla \times \mathbf {B} )\cdot \mathbf {A} } ==== Curl ==== ∇ × ( A + B ) = ∇ × A + ∇ × B {\displaystyle \nabla \times (\mathbf {A} +\mathbf {B} )=\nabla \times \mathbf {A} +\nabla \times \mathbf {B} } ∇ × ( ψ A ) = ψ ( ∇ × A ) − ( A × ∇ ) ψ = ψ ( ∇ × A ) + ( ∇ ψ ) × A {\displaystyle \nabla \times \left(\psi \mathbf {A} \right)=\psi \,(\nabla \times \mathbf {A} )-(\mathbf {A} \times \nabla )\psi =\psi \,(\nabla \times \mathbf {A} )+(\nabla \psi )\times \mathbf {A} } ∇ × ( ψ ∇ ϕ ) = ∇ ψ × ∇ ϕ {\displaystyle \nabla \times \left(\psi \nabla \phi \right)=\nabla \psi \times \nabla \phi } ∇ × ( A × B ) = A ( ∇ ⋅ B ) − B ( ∇ ⋅ A ) + ( B ⋅ ∇ ) A − ( A ⋅ ∇ ) B {\displaystyle \nabla \times \left(\mathbf {A} \times \mathbf {B} \right)=\mathbf {A} \left(\nabla \cdot \mathbf {B} \right)-\mathbf {B} \left(\nabla \cdot \mathbf {A} \right)+\left(\mathbf {B} \cdot \nabla \right)\mathbf {A} -\left(\mathbf {A} \cdot \nabla \right)\mathbf {B} } ==== Vector-dot-Del Operator ==== ( A ⋅ ∇ ) B = 1 2 [ ∇ ( A ⋅ B ) − ∇ × ( A × B ) − B × ( ∇ × A ) − A × ( ∇ × B ) − B ( ∇ ⋅ A ) + A ( ∇ ⋅ B ) ] {\displaystyle (\mathbf {A} \cdot \nabla )\mathbf {B} ={\frac {1}{2}}{\bigg [}\nabla (\mathbf {A} \cdot \mathbf {B} )-\nabla \times (\mathbf {A} \times \mathbf {B} )-\mathbf {B} \times (\nabla \times \mathbf {A} )-\mathbf {A} \times (\nabla \times \mathbf {B} )-\mathbf {B} (\nabla \cdot \mathbf {A} )+\mathbf {A} (\nabla \cdot \mathbf {B} ){\bigg ]}} ( A ⋅ ∇ ) A = 1 2 ∇ | A | 2 − A × ( ∇ × A ) = 1 2 ∇ | A | 2 + ( ∇ × A ) × A {\displaystyle (\mathbf {A} \cdot \nabla )\mathbf {A} ={\frac {1}{2}}\nabla |\mathbf {A} |^{2}-\mathbf {A} \times (\nabla \times \mathbf {A} )={\frac {1}{2}}\nabla |\mathbf {A} |^{2}+(\nabla \times \mathbf {A} )\times \mathbf {A} } A ⋅ ∇ ( B ⋅ B ) = 2 B ⋅ ( A ⋅ ∇ ) B {\displaystyle \mathbf {A} \cdot \nabla (\mathbf {B} \cdot \mathbf {B} )=2\mathbf {B} \cdot (\mathbf {A} \cdot \nabla )\mathbf {B} } ==== Second derivatives ==== ∇ ⋅ ( ∇ × A ) = 0 {\displaystyle \nabla \cdot (\nabla \times \mathbf {A} )=0} ∇ × ( ∇ ψ ) = 0 {\displaystyle \nabla \times (\nabla \psi )=\mathbf {0} } ∇ ⋅ ( ∇ ψ ) = ∇ 2 ψ {\displaystyle \nabla \cdot (\nabla \psi )=\nabla ^{2}\psi } (scalar Laplacian) ∇ ( ∇ ⋅ A ) − ∇ × ( ∇ × A ) = ∇ 2 A {\displaystyle \nabla \left(\nabla \cdot \mathbf {A} \right)-\nabla \times \left(\nabla \times \mathbf {A} \right)=\nabla ^{2}\mathbf {A} } 
(vector Laplacian) ∇ ⋅ [ ∇ A + ( ∇ A ) T ] = ∇ 2 A + ∇ ( ∇ ⋅ A ) {\displaystyle \nabla \cdot {\big [}\nabla \mathbf {A} +(\nabla \mathbf {A} )^{\textsf {T}}{\big ]}=\nabla ^{2}\mathbf {A} +\nabla (\nabla \cdot \mathbf {A} )} ∇ ⋅ ( ϕ ∇ ψ ) = ϕ ∇ 2 ψ + ∇ ϕ ⋅ ∇ ψ {\displaystyle \nabla \cdot (\phi \nabla \psi )=\phi \nabla ^{2}\psi +\nabla \phi \cdot \nabla \psi } ψ ∇ 2 ϕ − ϕ ∇ 2 ψ = ∇ ⋅ ( ψ ∇ ϕ − ϕ ∇ ψ ) {\displaystyle \psi \nabla ^{2}\phi -\phi \nabla ^{2}\psi =\nabla \cdot \left(\psi \nabla \phi -\phi \nabla \psi \right)} ∇ 2 ( ϕ ψ ) = ϕ ∇ 2 ψ + 2 ( ∇ ϕ ) ⋅ ( ∇ ψ ) + ( ∇ 2 ϕ ) ψ {\displaystyle \nabla ^{2}(\phi \psi )=\phi \nabla ^{2}\psi +2(\nabla \phi )\cdot (\nabla \psi )+\left(\nabla ^{2}\phi \right)\psi } ∇ 2 ( ψ A ) = A ∇ 2 ψ + 2 ( ∇ ψ ⋅ ∇ ) A + ψ ∇ 2 A {\displaystyle \nabla ^{2}(\psi \mathbf {A} )=\mathbf {A} \nabla ^{2}\psi +2(\nabla \psi \cdot \nabla )\mathbf {A} +\psi \nabla ^{2}\mathbf {A} } ∇ 2 ( A ⋅ B ) = A ⋅ ∇ 2 B − B ⋅ ∇ 2 A + 2 ∇ ⋅ ( ( B ⋅ ∇ ) A + B × ( ∇ × A ) ) {\displaystyle \nabla ^{2}(\mathbf {A} \cdot \mathbf {B} )=\mathbf {A} \cdot \nabla ^{2}\mathbf {B} -\mathbf {B} \cdot \nabla ^{2}\!\mathbf {A} +2\nabla \cdot ((\mathbf {B} \cdot \nabla )\mathbf {A} +\mathbf {B} \times (\nabla \times \mathbf {A} ))} (Green's vector identity) ==== Third derivatives ==== ∇ 2 ( ∇ ψ ) = ∇ ( ∇ ⋅ ( ∇ ψ ) ) = ∇ ( ∇ 2 ψ ) {\displaystyle \nabla ^{2}(\nabla \psi )=\nabla (\nabla \cdot (\nabla \psi ))=\nabla \left(\nabla ^{2}\psi \right)} ∇ 2 ( ∇ ⋅ A ) = ∇ ⋅ ( ∇ ( ∇ ⋅ A ) ) = ∇ ⋅ ( ∇ 2 A ) {\displaystyle \nabla ^{2}(\nabla \cdot \mathbf {A} )=\nabla \cdot (\nabla (\nabla \cdot \mathbf {A} ))=\nabla \cdot \left(\nabla ^{2}\mathbf {A} \right)} ∇ 2 ( ∇ × A ) = − ∇ × ( ∇ × ( ∇ × A ) ) = ∇ × ( ∇ 2 A ) {\displaystyle \nabla ^{2}(\nabla \times \mathbf {A} )=-\nabla \times (\nabla \times (\nabla \times \mathbf {A} ))=\nabla \times \left(\nabla ^{2}\mathbf {A} \right)} === Integration === Below, the curly symbol ∂ means "boundary of" a surface or solid. 
==== Surface–volume integrals ==== In the following surface–volume integral theorems, V denotes a three-dimensional volume with a corresponding two-dimensional boundary S = ∂V (a closed surface): ∂ V {\displaystyle \scriptstyle \partial V} ψ d S = ∭ V ∇ ψ d V {\displaystyle \psi \,d\mathbf {S} \ =\ \iiint _{V}\nabla \psi \,dV} ∂ V {\displaystyle \scriptstyle \partial V} A ⋅ d S = ∭ V ∇ ⋅ A d V {\displaystyle \mathbf {A} \cdot d\mathbf {S} \ =\ \iiint _{V}\nabla \cdot \mathbf {A} \,dV} (divergence theorem) ∂ V {\displaystyle \scriptstyle \partial V} A × d S = − ∭ V ∇ × A d V {\displaystyle \mathbf {A} \times d\mathbf {S} \ =\ -\iiint _{V}\nabla \times \mathbf {A} \,dV} ∂ V {\displaystyle \scriptstyle \partial V} ψ ∇ φ ⋅ d S = ∭ V ( ψ ∇ 2 φ + ∇ φ ⋅ ∇ ψ ) d V {\displaystyle \psi \nabla \!\varphi \cdot d\mathbf {S} \ =\ \iiint _{V}\left(\psi \nabla ^{2}\!\varphi +\nabla \!\varphi \cdot \nabla \!\psi \right)\,dV} (Green's first identity) ∂ V {\displaystyle \scriptstyle \partial V} ( ψ ∇ φ − φ ∇ ψ ) ⋅ d S = {\displaystyle \left(\psi \nabla \!\varphi -\varphi \nabla \!\psi \right)\cdot d\mathbf {S} \ =\ } ∂ V {\displaystyle \scriptstyle \partial V} ( ψ ∂ φ ∂ n − φ ∂ ψ ∂ n ) d S {\displaystyle \left(\psi {\frac {\partial \varphi }{\partial n}}-\varphi {\frac {\partial \psi }{\partial n}}\right)dS} = ∭ V ( ψ ∇ 2 φ − φ ∇ 2 ψ ) d V {\displaystyle \displaystyle \ =\ \iiint _{V}\left(\psi \nabla ^{2}\!\varphi -\varphi \nabla ^{2}\!\psi \right)\,dV} (Green's second identity) ∭ V A ⋅ ∇ ψ d V = {\displaystyle \iiint _{V}\mathbf {A} \cdot \nabla \psi \,dV\ =\ } ∂ V {\displaystyle \scriptstyle \partial V} ψ A ⋅ d S − ∭ V ψ ∇ ⋅ A d V {\displaystyle \psi \mathbf {A} \cdot d\mathbf {S} -\iiint _{V}\psi \nabla \cdot \mathbf {A} \,dV} (integration by parts) ∭ V ψ ∇ ⋅ A d V = {\displaystyle \iiint _{V}\psi \nabla \cdot \mathbf {A} \,dV\ =\ } ∂ V {\displaystyle \scriptstyle \partial V} ψ A ⋅ d S − ∭ V A ⋅ ∇ ψ d V {\displaystyle \psi \mathbf {A} \cdot d\mathbf {S} -\iiint _{V}\mathbf {A} \cdot \nabla \psi \,dV} (integration by parts) ∭ V A ⋅ ( ∇ × B ) d V = − {\displaystyle \iiint _{V}\mathbf {A} \cdot \left(\nabla \times \mathbf {B} \right)\,dV\ =\ -} ∂ V {\displaystyle \scriptstyle \partial V} ( A × B ) ⋅ d S + ∭ V ( ∇ × A ) ⋅ B d V {\displaystyle \left(\mathbf {A} \times \mathbf {B} \right)\cdot d\mathbf {S} +\iiint _{V}\left(\nabla \times \mathbf {A} \right)\cdot \mathbf {B} \,dV} (integration by parts) ∂ V {\displaystyle \scriptstyle \partial V} A × ( d S ⋅ ( B C T ) ) = ∭ V A × ( ∇ ⋅ ( B C T ) ) d V + ∭ V B ⋅ ( ∇ A ) × C d V {\displaystyle \mathbf {A} \times \left(d\mathbf {S} \cdot \left(\mathbf {B} \mathbf {C} ^{\textsf {T}}\right)\right)\ =\ \iiint _{V}\mathbf {A} \times \left(\nabla \cdot \left(\mathbf {B} \mathbf {C} ^{\textsf {T}}\right)\right)\,dV+\iiint _{V}\mathbf {B} \cdot (\nabla \mathbf {A} )\times \mathbf {C} \,dV} ∭ V ( ∇ ⋅ B + B ⋅ ∇ ) A d V = {\displaystyle \iiint _{V}\left(\nabla \cdot \mathbf {B} +\mathbf {B} \cdot \nabla \right)\mathbf {A} \,dV\ =\ } ∂ V {\displaystyle \scriptstyle \partial V} ( B ⋅ d S ) A {\displaystyle \left(\mathbf {B} \cdot d\mathbf {S} \right)\mathbf {A} } ==== Curve–surface integrals ==== In the following curve–surface integral theorems, S denotes a 2d open surface with a corresponding 1d boundary C = ∂S (a closed curve): ∮ ∂ S A ⋅ d ℓ = ∬ S ( ∇ × A ) ⋅ d S {\displaystyle \oint _{\partial S}\mathbf {A} \cdot d{\boldsymbol {\ell }}\ =\ \iint _{S}\left(\nabla \times \mathbf {A} \right)\cdot d\mathbf {S} } (Stokes' theorem) ∮ ∂ S ψ d ℓ = − ∬ S ∇ ψ × d S {\displaystyle 
\oint _{\partial S}\psi \,d{\boldsymbol {\ell }}\ =\ -\iint _{S}\nabla \psi \times d\mathbf {S} } ∮ ∂ S A × d ℓ = − ∬ S ( ∇ A − ( ∇ ⋅ A ) 1 ) ⋅ d S = − ∬ S ( d S × ∇ ) × A {\displaystyle \oint _{\partial S}\mathbf {A} \times d{\boldsymbol {\ell }}\ =\ -\iint _{S}\left(\nabla \mathbf {A} -(\nabla \cdot \mathbf {A} )\mathbf {1} \right)\cdot d\mathbf {S} \ =\ -\iint _{S}\left(d\mathbf {S} \times \nabla \right)\times \mathbf {A} } ∮ ∂ S A × ( B × d ℓ ) = ∬ S ( ∇ × ( A B T ) ) ⋅ d S + ∬ S ( ∇ ⋅ ( B A T ) ) × d S {\displaystyle \oint _{\partial S}\mathbf {A} \times (\mathbf {B} \times d{\boldsymbol {\ell }})\ =\ \iint _{S}\left(\nabla \times \left(\mathbf {A} \mathbf {B} ^{\textsf {T}}\right)\right)\cdot d\mathbf {S} +\iint _{S}\left(\nabla \cdot \left(\mathbf {B} \mathbf {A} ^{\textsf {T}}\right)\right)\times d\mathbf {S} } ∮ ∂ S ( B ⋅ d ℓ ) A = ∬ S ( d S ⋅ [ ∇ × B − B × ∇ ] ) A {\displaystyle \oint _{\partial S}(\mathbf {B} \cdot d{\boldsymbol {\ell }})\mathbf {A} =\iint _{S}(d\mathbf {S} \cdot \left[\nabla \times \mathbf {B} -\mathbf {B} \times \nabla \right])\mathbf {A} } Integration around a closed curve in the clockwise sense is the negative of the same line integral in the counterclockwise sense (analogous to interchanging the limits in a definite integral): ==== Endpoint-curve integrals ==== In the following endpoint–curve integral theorems, P denotes a 1d open path with signed 0d boundary points q − p = ∂ P {\displaystyle \mathbf {q} -\mathbf {p} =\partial P} and integration along P is from p {\displaystyle \mathbf {p} } to q {\displaystyle \mathbf {q} } : ψ | ∂ P = ψ ( q ) − ψ ( p ) = ∫ P ∇ ψ ⋅ d ℓ {\displaystyle \psi |_{\partial P}=\psi (\mathbf {q} )-\psi (\mathbf {p} )=\int _{P}\nabla \psi \cdot d{\boldsymbol {\ell }}} (gradient theorem) A | ∂ P = A ( q ) − A ( p ) = ∫ P ( d ℓ ⋅ ∇ ) A {\displaystyle \mathbf {A} |_{\partial P}=\mathbf {A} (\mathbf {q} )-\mathbf {A} (\mathbf {p} )=\int _{P}\left(d{\boldsymbol {\ell }}\cdot \nabla \right)\mathbf {A} } A | ∂ P = A ( q ) − A ( p ) = ∫ P ( ∇ A ) ⋅ d ℓ + ∫ P ( ∇ × A ) × d ℓ {\displaystyle \mathbf {A} |_{\partial P}=\mathbf {A} (\mathbf {q} )-\mathbf {A} (\mathbf {p} )=\int _{P}\left(\nabla \mathbf {A} \right)\cdot d{\boldsymbol {\ell }}+\int _{P}\left(\nabla \times \mathbf {A} \right)\times d{\boldsymbol {\ell }}} ==== Tensor integrals ==== A tensor form of a vector integral theorem may be obtained by replacing the vector (or one of them) by a tensor, provided that the vector is first made to appear only as the right-most vector of each integrand. For example, Stokes' theorem becomes ∮ ∂ S d ℓ ⋅ T = ∬ S d S ⋅ ( ∇ × T ) {\displaystyle \oint _{\partial S}d{\boldsymbol {\ell }}\cdot \mathbf {T} \ =\ \iint _{S}d\mathbf {S} \cdot \left(\nabla \times \mathbf {T} \right)} . A scalar field may also be treated as a vector and replaced by a vector or tensor. For example, Green's first identity becomes ∂ V {\displaystyle \scriptstyle \partial V} ψ d S ⋅ ∇ A = ∭ V ( ψ ∇ 2 A + ∇ ψ ⋅ ∇ A ) d V {\displaystyle \psi \,d\mathbf {S} \cdot \nabla \!\mathbf {A} \ =\ \iiint _{V}\left(\psi \nabla ^{2}\!\mathbf {A} +\nabla \!\psi \cdot \nabla \!\mathbf {A} \right)\,dV} . Similar rules apply to algebraic and differentiation formulas. For algebraic formulas one may alternatively use the left-most vector position. 
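The integral theorems listed above can also be checked numerically. The sketch below (illustrative only; the field F = (x^3, y^3, z^3) and the unit ball are arbitrary choices) verifies the divergence theorem with SciPy quadrature; both sides should approach 12*pi/5.

```python
# Numerical spot-check of the divergence theorem for F = (x^3, y^3, z^3) over the unit ball.
import numpy as np
from scipy import integrate

def div_F(r, theta, phi):
    # div F = 3(x^2 + y^2 + z^2) = 3 r^2, times the spherical volume element r^2 sin(theta)
    return 3.0 * r**2 * r**2 * np.sin(theta)

def flux_density(theta, phi):
    # On the unit sphere the outward normal is (x, y, z), so F . n = x^4 + y^4 + z^4,
    # times the surface element sin(theta)
    x = np.sin(theta) * np.cos(phi)
    y = np.sin(theta) * np.sin(phi)
    z = np.cos(theta)
    return (x**4 + y**4 + z**4) * np.sin(theta)

volume_integral, _ = integrate.tplquad(
    div_F,
    0, 2 * np.pi,                            # phi
    lambda phi: 0, lambda phi: np.pi,        # theta
    lambda phi, th: 0, lambda phi, th: 1,    # r
)
surface_integral, _ = integrate.dblquad(
    flux_density,
    0, 2 * np.pi,                            # phi
    lambda phi: 0, lambda phi: np.pi,        # theta
)

print(volume_integral, surface_integral, 12 * np.pi / 5)   # all ~7.5398
```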
== See also == Comparison of vector algebra and geometric algebra Del in cylindrical and spherical coordinates – Mathematical gradient operator in certain coordinate systems Differentiation rules – Rules for computing derivatives of functions Exterior calculus identities Exterior derivative – Operation on differential forms List of limits Table of derivatives – Rules for computing derivatives of functions Vector algebra relations – Formulas about vectors in three-dimensional Euclidean space == References == == Further reading ==
Wikipedia/Vector_calculus_identity
Live Science is a science news website. The publication features stories on a wide range of topics, including space, animals, health, archaeology, human behavior, and planet Earth. It also includes a reference section with links to other websites. Its stated mission is to inform and entertain readers about science and the world around them. == History == Live Science was launched in 2004. It was acquired by TechMediaNetwork, later called Purch, in 2009. Purch consumer brands (including Live Science) were acquired by Future in 2018. == Reception == In 2011, the Columbia Journalism Review's "News Startups Guide" called Live Science "a purebred Web animal, primarily featuring one-off stories and photo galleries produced at high speed by its mostly young staffers, almost all of whom have journalism degrees," noting that, "If you are looking for resource-intensive expositions of global warming, for instance, or thickly narrated journeys into the research process, LiveScience [sic] will disappoint. The site carries the big science news of the day, but its strength lies in the quirky diversity of its other content–oddball studies overlooked by major news organizations." == Awards == 2007: Winner, Award for Specialty Site Journalism (large organization) from the Online Journalism Awards. 2008, 2010: Honoree, Websites and Mobile Sites, Science from the Webby Awards. 2021: Listed as one of the top 10 science websites by the website "Make Use Of". Live Science was ranked in RealClearScience's "Top 10 Websites for Science" from 2016 to 2023. == References == == External links == Official website
Wikipedia/Live_Science
In a toroidal fusion power reactor, the magnetic fields confining the plasma are formed in a helical shape, winding around the interior of the reactor. The safety factor, labeled q or q(r), is the ratio of the number of times a particular magnetic field line travels around the toroidal confinement area the "long way" (toroidally) to the number of times it travels the "short way" (poloidally). The term "safety" refers to the resulting stability of the plasma; plasmas that rotate around the torus poloidally about the same number of times as toroidally are inherently less susceptible to certain instabilities. The term is most commonly used when referring to tokamak devices. Although the same considerations apply in stellarators, by convention the inverse value is used, the rotational transform, or i. The concept was first developed by Martin David Kruskal and Vitaly Shafranov, who noticed that the plasma in pinch effect reactors would be stable if q were larger than 1. Macroscopically, this implies that the wavelength of the potential instability is longer than the reactor. This condition is known as the Kruskal–Shafranov limit. == Background == The key concept in magnetic confinement fusion is that ions and electrons in a plasma will rotate around magnetic lines of force. A simple way to confine a plasma would be to use a solenoid, a series of circular magnets mounted along a cylinder that generates uniform lines of force running down the long axis of the cylinder. A plasma generated in the center of the cylinder would be confined to run along the lines down the inside of the tube, keeping it away from the walls. However, it would be free to move along the axis and out the ends of the cylinder. One can close the ends by bending the solenoid around into a circle, forming a torus (a ring or donut). In this case, the particles will still be confined to the middle of the cylinder, and even if they move along it they will never exit the ends; they simply circle the apparatus endlessly. However, Fermi noted a problem with this arrangement: in a series of circular magnets with the toroidal confinement area threaded through their centers, the magnets are closer together on the inside of the ring, producing a stronger field there. Particles in such a system will drift up or down across the torus. The solution to this problem is to add a secondary magnetic field at right angles to the first. The two magnetic fields will mix to produce a new combined field that is helical, like the stripes on a barber pole. A particle orbiting such a field line will find itself near the outside of the confinement area at some times, and near the inside at others. Although a test particle would always be drifting up (or down) compared to the field, since the field is rotating, that drift will, compared to the confinement chamber, be up or down, in or out, depending on its location along the cylinder. The net effect of the drift over a period of several orbits along the long axis of the reactor nearly adds up to zero. == Rotational transform == The effect of the helical field is to bend the path of a particle so that it describes a loop around the cross section of the containment cylinder. At any given point in its orbit around the long axis of the toroid, the particle will be moving at an angle, θ. In the simple case, when the particle has completed one orbit of the reactor's major axis and returned to its original location, the fields will have made it complete one orbit of the minor axis as well. In this case the rotational transform is 1.
In the more typical case, the fields do not "line up" this way, and the particle will not return to exactly the same location. In this case the rotational transform is calculated thus: i = 2 π ⋅ R ⋅ B p r ⋅ B t {\displaystyle i=2\pi \cdot {\frac {R\cdot B_{p}}{r\cdot B_{t}}}} where R is the major radius, r {\displaystyle r} the minor radius, B p {\displaystyle B_{p}} the poloidal field strength, and B t {\displaystyle B_{t}} the toroidal field. As the fields typically vary with their location within the cylinder, i {\displaystyle i} varies with location on the minor radius, and is expressed i(r). == Safety factor == In the case of an axisymmetric system, which was common in earlier fusion devices, it is more common to use the safety factor, which is simply the inverse of the rotational transform: q = 2 π i = r ⋅ B t R ⋅ B p {\displaystyle q={\frac {2\pi }{i}}={\frac {r\cdot B_{t}}{R\cdot B_{p}}}} The safety factor is essentially a measure of the "windiness" of the magnetic fields in a reactor. If the lines are not closed, the safety factor can be expressed as the pitch of the field: q = d ϕ d θ {\displaystyle q={\frac {d\phi }{d\theta }}} As the fields vary across the minor axis, q also varies and is often expressed as q(r). On the inside of the cylinder on a typical tokamak it converges on 1, while at the outside it is nearer 6 to 8. == Kruskal–Shafranov limit == Toroidal arrangements are a major class of magnetic fusion energy reactor designs. These are subject to a number of inherent instabilities that cause the plasma to exit the confinement area and hit the walls of the reactor on the order of milliseconds, far too rapidly to be used for energy generation. Among these is the kink instability, which is caused by small variations in the plasma shape. Areas where the plasma is slightly further from the centerline will experience a force outwards, causing a growing bulge that will eventually reach the reactor wall. These instabilities have a natural pattern based on the rotational transform. This leads to a characteristic wavelength of the kinks, which is based on the ratio of the two magnetic fields that mix to form the twisted field in the plasma. If that wavelength is longer than the long radius of the reactor, then they cannot form. That is, if the length along the major radius L s {\displaystyle L_{s}} is: L s = 2 π r B ϕ B θ > 2 π R {\displaystyle L_{s}={\frac {2\pi rB_{\phi }}{B_{\theta }}}>2\pi R} Then the plasma would be stable to this major class of instabilities. Basic mathematical rearrangement, removing the 2 π {\displaystyle 2\pi } from both sides and moving the major radius R to the other side of the equality produces: q = r B ϕ R B θ > 1 {\displaystyle q={\frac {rB_{\phi }}{RB_{\theta }}}>1} Which produces the simple rule of thumb that as long as the safety factor is greater than one at all points in the plasma, it will be naturally stable to this major class of instabilities. This principle led Soviet researchers to run their toroidal pinch machines with reduced current, leading to the stabilization that provided much higher performance in their T-3 machine in the late 1960s. In more modern machines, the plasma is pressed to the outside section of the chamber, producing a cross sectional shape like a D instead of a circle, which reduces the area with lower safety factor and allows higher currents to be driven through the plasma. == See also == Troyon limit == Notes == == References == Jeffrey Freidberg, "Plasma Physics and Fusion Energy", Cambridge University Press, 2007
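As a rough numerical illustration of the large-aspect-ratio expression q = rB_t/(RB_p) and the Kruskal–Shafranov condition q > 1 discussed above, the following sketch uses invented tokamak-like numbers; they do not describe any particular machine.

```python
# Illustrative sketch: cylindrical approximation of the safety factor q = r*Bt / (R*Bp)
# and the Kruskal-Shafranov criterion q > 1. All numbers below are invented for illustration.

def safety_factor(r, R, B_t, B_p):
    """q in the large-aspect-ratio (cylindrical) approximation."""
    return (r * B_t) / (R * B_p)

# Hypothetical tokamak-like parameters (assumptions, not measurements):
R = 3.0          # major radius [m]
B_t = 5.0        # toroidal field [T]

for r, B_p in [(0.2, 0.5), (0.6, 0.5), (1.0, 0.6)]:
    q = safety_factor(r, R, B_t, B_p)
    status = "stable" if q > 1 else "kink-unstable"
    print(f"r = {r:.1f} m, Bp = {B_p:.1f} T  ->  q = {q:.2f}  ({status} by Kruskal-Shafranov)")
```

The loop shows the usual radial trend: q is smallest near the magnetic axis and rises toward the plasma edge.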
Wikipedia/Safety_factor_(plasma_physics)
In magnetohydrodynamics (MHD), shocks and discontinuities are transition layers where properties of a plasma change from one equilibrium state to another. The relation between the plasma properties on both sides of a shock or a discontinuity can be obtained from the conservative form of the MHD equations, assuming conservation of mass, momentum, energy and of ∇ ⋅ B {\displaystyle \nabla \cdot \mathbf {B} } . == Rankine–Hugoniot jump conditions for MHD == The jump conditions across a time-independent MHD shock or discontinuity are referred as the Rankine–Hugoniot equations for MHD. In the frame moving with the shock/discontinuity, those jump conditions can be written: { ρ 1 v 1 ⊥ = ρ 2 v 2 ⊥ , B 1 ⊥ = B 2 ⊥ , ρ 1 v 1 ⊥ 2 + p 1 + 1 2 μ 0 B 1 ∥ 2 = ρ 2 v 2 ⊥ 2 + p 2 + 1 2 μ 0 B 2 ∥ 2 , ρ 1 v 1 ⊥ v 1 ∥ − 1 μ 0 B 1 ∥ B 1 ⊥ = ρ 2 v 2 ⊥ v 2 ∥ − 1 μ 0 B 2 ∥ B 2 ⊥ , ( γ γ − 1 p 1 ρ 1 + v 1 2 2 ) ρ 1 v 1 ⊥ + 1 μ 0 [ v 1 ⊥ B 1 ∥ 2 − B 1 ⊥ ( B 1 ∥ ⋅ v 1 ∥ ) ] = ( γ γ − 1 p 2 ρ 2 + v 2 2 2 ) ρ 2 v 2 ⊥ + 1 μ 0 [ v 2 ⊥ B 2 ∥ 2 − B 2 ⊥ ( B 2 ∥ ⋅ v 2 ∥ ) ] , ( v × B ) 1 ∥ = ( v × B ) 2 ∥ , {\displaystyle {\begin{cases}\rho _{1}v_{1\perp }=\rho _{2}v_{2\perp },\\[1.2ex]B_{1\perp }=B_{2\perp },\\[1.2ex]\rho _{1}v_{1\perp }^{2}+p_{1}+{\frac {1}{2\mu _{0}}}B_{1\parallel }^{2}=\rho _{2}v_{2\perp }^{2}+p_{2}+{\frac {1}{2\mu _{0}}}B_{2\parallel }^{2},\\[1.2ex]\rho _{1}v_{1\perp }\mathbf {v} _{1\parallel }-{\frac {1}{\mu _{0}}}\mathbf {B} _{1\parallel }B_{1\perp }=\rho _{2}v_{2\perp }\mathbf {v} _{2\parallel }-{\frac {1}{\mu _{0}}}\mathbf {B} _{2\parallel }B_{2\perp },\\[1.2ex]\displaystyle \left({\frac {\gamma }{\gamma -1}}{\frac {p_{1}}{\rho _{1}}}+{\frac {v_{1}^{2}}{2}}\right)\rho _{1}v_{1\perp }+{\frac {1}{\mu _{0}}}\left[{v_{1\perp }B_{1\parallel }^{2}}-{B_{1\perp }(\mathbf {B} _{1\parallel }\cdot \mathbf {v} _{1\parallel })}\right]=\left({\frac {\gamma }{\gamma -1}}{\frac {p_{2}}{\rho _{2}}}+{\frac {v_{2}^{2}}{2}}\right)\rho _{2}v_{2\perp }+{\frac {1}{\mu _{0}}}\left[{v_{2\perp }B_{2\parallel }^{2}}-{B_{2\perp }(\mathbf {B} _{2\parallel }\cdot \mathbf {v} _{2\parallel })}\right],\\[1.2ex](\mathbf {v} \times \mathbf {B} )_{1\parallel }=(\mathbf {v} \times \mathbf {B} )_{2\parallel },\end{cases}}} where ρ {\displaystyle \rho } , v, p, B are the plasma density, velocity, (thermal) pressure and magnetic field respectively. The subscripts ∥ {\displaystyle \parallel } and ⊥ {\displaystyle \perp } refer to the tangential and normal components of a vector (with respect to the shock/discontinuity front). The subscripts 1 and 2 refer to the two states of the plasma on each side of the shock/discontinuity == Contact and tangential discontinuities == Contact and tangential discontinuities are transition layers across which there is no particle transport. Thus, in the frame moving with the discontinuity, v 1 ⊥ = v 2 ⊥ = 0 {\displaystyle v_{1\perp }=v_{2\perp }=0} . Contact discontinuities are discontinuities for which the thermal pressure, the magnetic field and the velocity are continuous. Only the mass density and temperature change. Tangential discontinuities are discontinuities for which the total pressure (sum of the thermal and magnetic pressures) is conserved. The normal component of the magnetic field is identically zero. The density, thermal pressure and tangential component of the magnetic field vector can be discontinuous across the layer. == Shocks == Shocks are transition layers across which there is a transport of particles. 
There are three types of shocks in MHD: slow-mode, intermediate and fast-mode shocks. Intermediate shocks are non-compressive (meaning that the plasma density does not change across the shock). A special case of the intermediate shock is referred to as a rotational discontinuity. They are isentropic. All thermodynamic quantities are continuous across the shock, but the tangential component of the magnetic field can "rotate". Intermediate shocks in general however, unlike rotational discontinuities, can have a discontinuity in the pressure. Slow-mode and fast-mode shocks are compressive and are associated with an increase in entropy. Across slow-mode shock, the tangential component of the magnetic field decreases. Across fast-mode shock it increases. The type of shocks depend on the relative magnitude of the upstream velocity in the frame moving with the shock with respect to some characteristic speed. Those characteristic speeds, the slow and fast magnetosonic speeds, are related to the Alfvén speed, V A {\displaystyle V_{A}} and the sonic speed, c s {\displaystyle c_{s}} as follows: a slow 2 = 1 2 [ ( c s 2 + V A 2 ) − ( c s 2 + V A 2 ) 2 − 4 c s 2 V A 2 cos 2 ⁡ θ B n ] , a fast 2 = 1 2 [ ( c s 2 + V A 2 ) + ( c s 2 + V A 2 ) 2 − 4 c s 2 V A 2 cos 2 ⁡ θ B n ] , {\displaystyle {\begin{aligned}a_{\text{slow}}^{2}&={\frac {1}{2}}\left[\left(c_{s}^{2}+V_{A}^{2}\right)-{\sqrt {\left(c_{s}^{2}+V_{A}^{2}\right)^{2}-4c_{s}^{2}V_{A}^{2}\cos ^{2}\theta _{Bn}}}\,\right],\\[1ex]a_{\text{fast}}^{2}&={\frac {1}{2}}\left[\left(c_{s}^{2}+V_{A}^{2}\right)+{\sqrt {\left(c_{s}^{2}+V_{A}^{2}\right)^{2}-4c_{s}^{2}V_{A}^{2}\cos ^{2}\theta _{Bn}}}\,\right],\end{aligned}}} where V A {\displaystyle V_{A}} is the Alfvén speed and θ B n {\displaystyle \theta _{Bn}} is the angle between the incoming magnetic field and the shock normal vector. The normal component of the slow shock propagates with velocity a s l o w {\displaystyle a_{\mathrm {slow} }} in the frame moving with the upstream plasma, that of the intermediate shock with velocity V A n {\displaystyle V_{An}} and that of the fast shock with velocity a f a s t {\displaystyle a_{\mathrm {fast} }} . The fast mode waves have higher phase velocities than the slow mode waves because the density and magnetic field are in phase, whereas the slow mode wave components are out of phase. == Example of shocks and discontinuities in space == The Earth's bow shock, which is the boundary where the solar wind's speed drops due to the presence of Earth's magnetosphere is a fast mode shock. The termination shock is a fast-mode shock due to the interaction of the solar wind with the interstellar medium. Magnetic reconnection can happen associated with a slow-mode shock (Petschek or fast magnetic reconnection) in the solar corona. The existence of intermediate shocks is still a matter of debate. They may form in MHD simulation, but their stability has not been proven. Discontinuities (both contact and tangential) are observed in the solar wind, behind astrophysical shock waves (supernova remnant) or due to the interaction of multiple CME driven shock waves. The Earth's magnetopause is generally a tangential discontinuity. Coronal Mass Ejections (CMEs) moving at super-Alfvénic speeds are able to drive fast-mode MHD shocks while propagating away from the Sun into the solar wind. Signatures of these shocks have been identified in both radio (as type II radio bursts) and ultraviolet (UV) spectra. 
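The slow and fast magnetosonic speeds defined above can be evaluated directly. The following sketch uses arbitrary, solar-wind-like example values for the sound and Alfvén speeds; these inputs are assumptions chosen only to illustrate the formula.

```python
# Illustrative sketch of the slow/fast magnetosonic speeds defined above.
import numpy as np

def magnetosonic_speeds(c_s, v_A, theta_Bn):
    """Return (a_slow, a_fast) for sound speed c_s, Alfven speed v_A and
    angle theta_Bn (radians) between the upstream field and the shock normal."""
    s = c_s**2 + v_A**2
    root = np.sqrt(s**2 - 4.0 * c_s**2 * v_A**2 * np.cos(theta_Bn)**2)
    a_slow = np.sqrt(0.5 * (s - root))
    a_fast = np.sqrt(0.5 * (s + root))
    return a_slow, a_fast

# Example: order-of-magnitude, solar-wind-like numbers (assumed values)
c_s, v_A = 50e3, 40e3            # m/s
for deg in (0, 45, 90):
    a_s, a_f = magnetosonic_speeds(c_s, v_A, np.radians(deg))
    print(f"theta = {deg:2d} deg:  a_slow = {a_s/1e3:5.1f} km/s,  a_fast = {a_f/1e3:5.1f} km/s")
```

At theta = 0 the two speeds reduce to min(c_s, V_A) and max(c_s, V_A), while at theta = 90 degrees the slow speed vanishes and the fast speed becomes sqrt(c_s^2 + V_A^2), as expected from the expressions above.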
== See also == Alfvén wave Magnetohydrodynamics Moreton wave Rankine–Hugoniot conditions Shock wave == References == === Citations === === General references === The original research on MHD shock waves can be found in the following papers. === Textbooks ===
Wikipedia/Shocks_and_discontinuities_(magnetohydrodynamics)
In nuclear physics, a magic number is a number of nucleons (either protons or neutrons, separately) such that they are arranged into complete shells within the atomic nucleus. As a result, atomic nuclei with a "magic" number of protons or neutrons are much more stable than other nuclei. The seven most widely recognized magic numbers as of 2019 are 2, 8, 20, 28, 50, 82, and 126. For protons, this corresponds to the elements helium, oxygen, calcium, nickel, tin, lead, and the hypothetical unbihexium, although 126 is so far only known to be a magic number for neutrons. Atomic nuclei consisting of such a magic number of nucleons have a higher average binding energy per nucleon than one would expect based upon predictions such as the semi-empirical mass formula and are hence more stable against nuclear decay. The unusual stability of isotopes having magic numbers means that transuranium elements could theoretically be created with extremely large nuclei and yet not be subject to the extremely rapid radioactive decay normally associated with high atomic numbers. Large isotopes with magic numbers of nucleons are said to exist in an island of stability. Unlike the magic numbers 2–126, which are realized in spherical nuclei, theoretical calculations predict that nuclei in the island of stability are deformed. Before this was realized, higher magic numbers, such as 184, 258, 350, and 462, were predicted based on simple calculations that assumed spherical shapes: these are generated by the formula 2 ( ( n 1 ) + ( n 2 ) + ( n 3 ) ) {\displaystyle 2({\tbinom {n}{1}}+{\tbinom {n}{2}}+{\tbinom {n}{3}})} (see Binomial coefficient). It is now believed that the sequence of spherical magic numbers cannot be extended in this way. Further predicted magic numbers are 114, 122, 124, and 164 for protons as well as 184, 196, 236, and 318 for neutrons. However, more modern calculations predict 228 and 308 for neutrons, along with 184 and 196. == History and etymology == While working on the Manhattan Project, the German physicist Maria Goeppert Mayer became interested in the properties of nuclear fission products, such as decay energies and half-lives. In 1948, she published a body of experimental evidence for the occurrence of closed nuclear shells for nuclei with 50 or 82 protons or 50, 82, and 126 neutrons. It had already been known that nuclei with 20 protons or neutrons were stable: that was evidenced by calculations by Hungarian-American physicist Eugene Wigner, one of her colleagues in the Manhattan Project. Two years later, in 1950, a new publication followed in which she attributed the shell closures at the magic numbers to spin-orbit coupling. According to Steven Moszkowski, a student of Goeppert Mayer, the term "magic number" was coined by Wigner: "Wigner too believed in the liquid drop model, but he recognized, from the work of Maria Mayer, the very strong evidence for the closed shells. It seemed a little like magic to him, and that is how the words 'Magic Numbers' were coined." These magic numbers were the bedrock of the nuclear shell model, which Mayer developed in the following years together with Hans Jensen and culminated in their shared 1963 Nobel Prize in Physics. == Doubly magic == Nuclei which have neutron numbers and proton (atomic) numbers both equal to one of the magic numbers are called "doubly magic", and are generally very stable against decay. 
The known doubly magic isotopes are helium-4, helium-10, oxygen-16, calcium-40, calcium-48, nickel-48, nickel-56, nickel-78, tin-100, tin-132, and lead-208. While only helium-4, oxygen-16, calcium-40, and lead-208 are completely stable, calcium-48 is extremely long-lived and therefore found naturally, disintegrating only by a very inefficient double beta minus decay process. Double beta decay in general is so rare that several nuclides exist which are predicted to decay by this mechanism but in which no such decay has yet been observed. Even in nuclides whose double beta decay has been confirmed through observations, half lives usually exceed the age of the universe by orders of magnitude, and emitted beta or gamma radiation is for virtually all practical purposes irrelevant. On the other hand, helium-10 is extremely unstable, and has a half-life of just 260(40) yoctoseconds (2.6(4)×10−22 s). Doubly magic effects may allow the existence of stable isotopes which otherwise would not have been expected. An example is calcium-40, with 20 neutrons and 20 protons, which is the heaviest stable isotope made of the same number of protons and neutrons. Both calcium-48 and nickel-48 are doubly magic because calcium-48 has 20 protons and 28 neutrons while nickel-48 has 28 protons and 20 neutrons. Calcium-48 is very neutron-rich for such a relatively light element, but like calcium-40, it is stabilized by being doubly magic. As an exception, although oxygen-28 has 8 protons and 20 neutrons, it is unbound with respect to four-neutron decay and appears to lack closed neutron shells, so it is not regarded as doubly magic. Magic number shell effects are seen in ordinary abundances of elements: helium-4 is among the most abundant (and stable) nuclei in the universe and lead-208 is the heaviest stable nuclide (at least by known experimental observations). Alpha decay (the emission of a 4He nucleus – also known as an alpha particle – by a heavy element undergoing radioactive decay) is common in part due to the extraordinary stability of helium-4, which makes this type of decay energetically favored in most heavy nuclei over neutron emission, proton emission or any other type of cluster decay. The stability of 4He also leads to the absence of stable isobars of mass number 5 and 8; indeed, all nuclides of those mass numbers decay within fractions of a second to produce alpha particles. Magic effects can keep unstable nuclides from decaying as rapidly as would otherwise be expected. For example, the nuclides tin-100 and tin-132 are examples of doubly magic isotopes of tin that are unstable, and represent endpoints beyond which stability drops off rapidly. Nickel-48, discovered in 1999, is the most proton-rich doubly magic nuclide known. At the other extreme, nickel-78 is also doubly magic, with 28 protons and 50 neutrons, a ratio observed only in much heavier elements, apart from tritium with one proton and two neutrons (78Ni: 28/50 = 0.56; 238U: 92/146 = 0.63). In December 2006, hassium-270, with 108 protons and 162 neutrons, was discovered by an international team of scientists led by the Technical University of Munich, having a half-life of 9 seconds. Hassium-270 evidently forms part of an island of stability, and may even be doubly magic due to the deformed (American football- or rugby ball-like) shape of this nucleus. 
Although Z = 92 and N = 164 are not magic numbers, the undiscovered neutron-rich nucleus uranium-256 may be doubly magic and spherical due to the difference in size between low- and high-angular momentum orbitals, which alters the shape of the nuclear potential. == Derivation == Magic numbers are typically obtained by empirical studies; if the form of the nuclear potential is known, then the Schrödinger equation can be solved for the motion of nucleons and energy levels determined. Nuclear shells are said to occur when the separation between energy levels is significantly greater than the local mean separation. In the shell model for the nucleus, magic numbers are the numbers of nucleons at which a shell is filled. For instance, the magic number 8 occurs when the 1s1/2, 1p3/2, 1p1/2 energy levels are filled, as there is a large energy gap between the 1p1/2 and the next highest 1d5/2 energy levels. The atomic analog to nuclear magic numbers are those numbers of electrons leading to discontinuities in the ionization energy. These occur for the noble gases helium, neon, argon, krypton, xenon, radon and oganesson. Hence, the "atomic magic numbers" are 2, 10, 18, 36, 54, 86 and 118. As with the nuclear magic numbers, these are expected to be changed in the superheavy region due to spin/orbit-coupling effects affecting subshell energy levels. Hence copernicium (112) and flerovium (114) are expected to be more inert than oganesson (118), and the next noble gas after these is expected to occur at element 172 rather than 168 (which would continue the pattern). In 2010, an alternative explanation of magic numbers was given in terms of symmetry considerations. Based on the fractional extension of the standard rotation group, the ground state properties (including the magic numbers) for metallic clusters and nuclei were simultaneously determined analytically. A specific potential term is not necessary in this model. == See also == Magic number (chemistry) Superatom Superdeformation == References == == External links == Nave, C. R. "Shell Model of Nucleus". HyperPhysics. Scerri, Eric (2007). The Periodic Table, Its Story and Its Significance. Oxford University Press. ISBN 978-0-19-530573-9. see chapter 10 especially. Moskowitz, Clara. "New magic number "inside atoms" discovered". Scientific American. Watkins, Thayer. "A Nearly Complete Explanation of the Nuclear Magic Numbers".
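Returning to the spherical-shell formula 2(C(n,1) + C(n,2) + C(n,3)) quoted in the introduction, a short illustrative sketch (added here, not part of the article's sources) shows which numbers it actually generates.

```python
# Illustrative sketch: the spherical-shell formula 2*(C(n,1)+C(n,2)+C(n,3)) evaluated for small n.
from math import comb

def spherical_magic(n):
    return 2 * (comb(n, 1) + comb(n, 2) + comb(n, 3))

print([spherical_magic(n) for n in range(1, 12)])
# -> [2, 6, 14, 28, 50, 82, 126, 184, 258, 350, 462]
# The sequence reproduces 2, 28, 50, 82 and 126 as well as the predicted spherical
# numbers 184, 258, 350 and 462, but not the observed magic numbers 8 and 20.
```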
Wikipedia/Magic_number_(physics)
High-energy nuclear physics studies the behavior of nuclear matter in energy regimes typical of high-energy physics. The primary focus of this field is the study of heavy-ion collisions, as compared to the lighter atoms collided in other particle accelerators. At sufficient collision energies, these types of collisions are theorized to produce the quark–gluon plasma. In peripheral nuclear collisions at high energies one expects to obtain information on the electromagnetic production of leptons and mesons that are not accessible in electron–positron colliders due to their much smaller luminosities. Previous high-energy nuclear accelerator experiments have studied heavy-ion collisions using projectile energies of 1 GeV/nucleon at JINR and LBNL-Bevalac up to 158 GeV/nucleon at CERN-SPS. Experiments of this type, called "fixed-target" experiments, primarily accelerate a "bunch" of ions (typically around 10⁶ to 10⁸ ions per bunch) to speeds approaching the speed of light (0.999c) and smash them into a target of similar heavy ions. While all collision systems are interesting, great focus was applied in the late 1990s to symmetric collision systems of gold beams on gold targets at Brookhaven National Laboratory's Alternating Gradient Synchrotron (AGS) and lead beams on lead targets at CERN's Super Proton Synchrotron. High-energy nuclear physics experiments continue at the Brookhaven National Laboratory's Relativistic Heavy Ion Collider (RHIC) and at the CERN Large Hadron Collider. At RHIC the programme began with four experiments— PHENIX, STAR, PHOBOS, and BRAHMS—all dedicated to studying collisions of highly relativistic nuclei. Unlike fixed-target experiments, collider experiments steer two accelerated beams of ions toward each other at (in the case of RHIC) six interaction regions. At RHIC, ions can be accelerated (depending on the ion size) from 100 GeV/nucleon to 250 GeV/nucleon. Since each colliding ion possesses this energy moving in opposite directions, the maximal energy of the collisions can achieve a center-of-mass collision energy of 200 GeV/nucleon for gold and 500 GeV/nucleon for protons. The ALICE (A Large Ion Collider Experiment) detector at the LHC at CERN is specialized in studying Pb–Pb nuclei collisions at a center-of-mass energy of 2.76 TeV per nucleon pair. All major LHC detectors—ALICE, ATLAS, CMS and LHCb—participate in the heavy-ion programme. == History == The exploration of hot hadron matter and of multiparticle production has a long history initiated by theoretical work on multiparticle production by Enrico Fermi in the US and Lev Landau in the USSR. These efforts paved the way to the development in the early 1960s of the thermal description of multiparticle production and the statistical bootstrap model by Rolf Hagedorn. These developments led to the search for, and discovery of, the quark–gluon plasma. The onset of production of this new form of matter remains under active investigation. === First collisions === The first heavy-ion collisions at modestly relativistic conditions were undertaken at the Lawrence Berkeley National Laboratory (LBNL, formerly LBL) at Berkeley, California, U.S.A., and at the Joint Institute for Nuclear Research (JINR) in Dubna, Moscow Oblast, USSR. At the LBL, a transport line was built to carry heavy ions from the heavy-ion accelerator HILAC to the Bevatron. The energy scale at the level of 1–2 GeV per nucleon attained initially yields compressed nuclear matter at a few times normal nuclear density.
The demonstration of the possibility of studying the properties of compressed and excited nuclear matter motivated research programs at much higher energies in accelerators available at BNL and CERN, with relativistic beams directed at fixed laboratory targets. The first collider experiments started in 1999 at RHIC, and the LHC began colliding heavy ions at one order of magnitude higher energy in 2010. == CERN operation == The LHC collider at CERN operates one month a year in the nuclear-collision mode, with Pb nuclei colliding at 2.76 TeV per nucleon pair, about 1500 times the energy equivalent of the rest mass. Overall 1250 valence quarks collide, generating a hot quark–gluon soup. Heavy atomic nuclei stripped of their electron cloud are called heavy ions, and one speaks of (ultra)relativistic heavy ions when the kinetic energy significantly exceeds the rest energy, as is the case at the LHC. The outcome of such collisions is the production of very many strongly interacting particles. In August 2012 ALICE scientists announced that their experiments produced quark–gluon plasma with a temperature of around 5.5 trillion kelvins, the highest temperature achieved in any physical experiment thus far. This temperature is about 38% higher than the previous record of about 4 trillion kelvins, achieved in the 2010 experiments at the Brookhaven National Laboratory. The ALICE results were announced at the August 13 Quark Matter 2012 conference in Washington, D.C. The quark–gluon plasma produced by these experiments approximates the conditions in the universe that existed microseconds after the Big Bang, before the matter coalesced into atoms. == Objectives == There are several scientific objectives of this international research program: The formation and investigation of a new state of matter made of quarks and gluons, the quark–gluon plasma (QGP), which prevailed in the early universe during its first 30 microseconds. The study of color confinement and of the transformation of the color-confining (quark-confining) vacuum state into the excited state physicists call the perturbative vacuum, in which quarks and gluons can roam free; this transformation occurs near the Hagedorn temperature. The study of the origin of the mass of hadronic matter (protons, neutrons, etc.), believed to be related to the phenomenon of quark confinement and the structure of the vacuum. == Experimental program == This experimental program follows on a decade of research at the RHIC collider at BNL and almost two decades of studies using fixed targets at the SPS at CERN and the AGS at BNL. This experimental program has already confirmed that the extreme conditions of matter necessary to reach the QGP phase can be reached. A typical temperature achieved in the QGP created, T = 300 MeV / k B = 3.3 × 10 12 K {\displaystyle T=300~{\text{MeV}}/k_{\text{B}}=3.3\times 10^{12}~{\text{K}}} , is more than 100,000 times greater than in the center of the Sun. This corresponds to an energy density ϵ = 10 GeV/fm 3 = 1.8 × 10 16 g/cm 3 {\displaystyle \epsilon =10~{\text{GeV/fm}}^{3}=1.8\times 10^{16}~{\text{g/cm}}^{3}} . The corresponding relativistic-matter pressure is P ≃ 1 3 ϵ = 0.52 × 10 31 bar . {\displaystyle P\simeq {\frac {1}{3}}\epsilon =0.52\times 10^{31}~{\text{bar}}.} == More information == Rutgers University Nuclear Physics Home Page Publications - High Energy Nuclear Physics (HENP) https://web.archive.org/web/20101212105542/http://www.er.doe.gov/np/ == References ==
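The temperature, energy-density and pressure figures quoted in the experimental-program section follow from straightforward unit conversions; the sketch below (illustrative only, with rounded physical constants) reproduces them.

```python
# Illustrative unit-conversion check of the figures quoted above (sketch only).
k_B    = 8.617333e-5      # Boltzmann constant [eV/K]
GeV_J  = 1.602177e-10     # 1 GeV in joules
c      = 2.99792458e8     # speed of light [m/s]
fm3_m3 = 1e-45            # 1 fm^3 in m^3

# T = 300 MeV / k_B  ->  kelvin
T = 300e6 / k_B
print(f"T   ~ {T:.2e} K")                 # ~3.5e12 K, the order quoted above

# epsilon = 10 GeV/fm^3  ->  equivalent mass density in g/cm^3
eps_J_per_m3 = 10 * GeV_J / fm3_m3
rho = eps_J_per_m3 / c**2                  # kg/m^3
print(f"rho ~ {rho / 1000:.2e} g/cm^3")    # ~1.8e16 g/cm^3

# P ~ epsilon/3 for ultrarelativistic matter, expressed in bar (1 bar = 1e5 Pa)
P = eps_J_per_m3 / 3.0
print(f"P   ~ {P / 1e5:.2e} bar")          # ~0.5e31 bar
```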
Wikipedia/High-energy_nuclear_physics
A three-body force is a force that does not exist in a system of two objects but appears in a three-body system. In general, if the behaviour of a system of more than two objects cannot be described by the two-body interactions between all possible pairs, as a first approximation, the deviation is mainly due to a three-body force. The fundamental strong interaction does exhibit such behaviour, the most important example being the stability experimentally observed for the helium-3 isotope, which can be described as a 3-body quantum cluster entity of two protons and one neutron [PNP] in stable superposition. Direct evidence of a 3-body force in helium-3 is known: [1]. The existence of stable [PNP] cluster calls into question models of the atomic nucleus that restrict nucleon interactions within shells to 2-body phenomenon. The three-nucleon-interaction is fundamentally possible because gluons, the mediators of the strong interaction, can couple to themselves. In particle physics, the interactions between the three quarks that compose hadrons can be described in a diquark model which might be equivalent to the hypothesis of a three-body force. There is growing evidence in the field of nuclear physics that three-body forces exist among the nucleons inside atomic nuclei for many different isotopes (three-nucleon force). == See also == Faddeev equation Few-body systems N-body problem Hydrogen molecular ion Borromean nucleus Efimov state Chiral perturbation theory == References == Loiseau, B.A.; Nogami, Y. (1967). "Three-nucleon force". Nuclear Physics B. 2 (4). Elsevier BV: 470–478. Bibcode:1967NuPhB...2..470L. doi:10.1016/0550-3213(67)90184-8. ISSN 0550-3213. Witała, H.; Glöckle, W.; Hüber, D.; Golak, J.; Kamada, H. (1998-08-10). "Cross Section Minima in Elastic Nd Scattering: Possible Evidence for Three-Nucleon Force Effects". Physical Review Letters. 81 (6): 1183–1186. arXiv:nucl-th/9801018. Bibcode:1998PhRvL..81.1183W. doi:10.1103/physrevlett.81.1183. ISSN 0031-9007. S2CID 34582091. Epelbaum, E.; Nogga, A.; Glöckle, W.; Kamada, H.; Meißner, Ulf-G.; Witała, H. (2002-12-11). "Three-nucleon forces from chiral effective field theory". Physical Review C. 66 (6). American Physical Society (APS): 064001. arXiv:nucl-th/0208023. Bibcode:2002PhRvC..66f4001E. doi:10.1103/physrevc.66.064001. ISSN 0556-2813. S2CID 15592470. Mermod, P.; Blomgren, J.; Bergenwall, B.; Hildebrand, A.; Johansson, C.; et al. (2004). "Search for three-body force effects in neutron–deuteron scattering at 95 MeV". Physics Letters B. 597 (3–4). Elsevier BV: 243–248. Bibcode:2004PhLB..597..243M. doi:10.1016/j.physletb.2004.07.028. ISSN 0370-2693.
Wikipedia/Three-body_force
In nuclear physics, ab initio methods seek to describe the atomic nucleus from the bottom up by solving the non-relativistic Schrödinger equation for all constituent nucleons and the forces between them. This is done either exactly for very light nuclei (up to four nucleons) or by employing certain well-controlled approximations for heavier nuclei. Ab initio methods constitute a more fundamental approach compared to e.g. the nuclear shell model. Recent progress has enabled ab initio treatment of heavier nuclei such as nickel. A significant challenge in the ab initio treatment stems from the complexities of the inter-nucleon interaction. The strong nuclear force is believed to emerge from the strong interaction described by quantum chromodynamics (QCD), but QCD is non-perturbative in the low-energy regime relevant to nuclear physics. This makes the direct use of QCD for the description of the inter-nucleon interactions very difficult (see lattice QCD), and a model must be used instead. The most sophisticated models available are based on chiral effective field theory. This effective field theory (EFT) includes all interactions compatible with the symmetries of QCD, ordered by the size of their contributions. The degrees of freedom in this theory are nucleons and pions, as opposed to quarks and gluons as in QCD. The effective theory contains parameters called low-energy constants, which can be determined from scattering data. Chiral EFT implies the existence of many-body forces, most notably the three-nucleon interaction which is known to be an essential ingredient in the nuclear many-body problem. After arriving at a Hamiltonian H {\displaystyle H} (based on chiral EFT or other models) one must solve the Schrödinger equation H | Ψ ⟩ = E | Ψ ⟩ , {\displaystyle H\vert {\Psi }\rangle =E\vert {\Psi }\rangle ,} where | Ψ ⟩ {\displaystyle \vert {\Psi }\rangle } is the many-body wavefunction of the A nucleons in the nucleus. Various ab initio methods have been devised to numerically find solutions to this equation: Green's function Monte Carlo (GFMC) No-core shell model (NCSM) Coupled cluster (CC) Self-consistent Green's function (SCGF) In-medium similarity renormalization group (IM-SRG) == Further reading == Dean, D. (2007). "Beyond the nuclear shell model". Physics Today. 60 (11): 48. Bibcode:2007PhT....60k..48D. doi:10.1063/1.2812123. Zastrow, M. (2017). "In search for "magic" nuclei, theory catches up to experiments". Proc Natl Acad Sci U S A. 114 (20): 5060–5062. Bibcode:2017PNAS..114.5060Z. doi:10.1073/pnas.1703620114. PMC 5441833. PMID 28512181. == References ==
Wikipedia/Ab_initio_methods_(nuclear_physics)
The decay energy is the energy change of a nucleus having undergone a radioactive decay. Radioactive decay is the process in which an unstable atomic nucleus loses energy by emitting ionizing particles and radiation. This decay, or loss of energy, results in an atom of one type (called the parent nuclide) transforming to an atom of a different type (called the daughter nuclide). == Decay calculation == The energy difference of the reactants is often written as Q: Q = ( Kinetic energy ) after − ( Kinetic energy ) before , {\displaystyle Q=\left({\text{Kinetic energy}}\right)_{\text{after}}-\left({\text{Kinetic energy}}\right)_{\text{before}},} Q = ( Rest mass ) before c 2 − ( Rest mass ) after c 2 . {\displaystyle Q=\left({\text{Rest mass}}\right)_{\text{before}}c^{2}-\left({\text{Rest mass}}\right)_{\text{after}}c^{2}.} Decay energy is usually quoted in terms of the energy units MeV (million electronvolts) or keV (thousand electronvolts): Q [MeV] = − 931.5 Δ M [Da] , ( where Δ M = Σ M products − Σ M reactants ) . {\displaystyle Q{\text{ [MeV]}}=-931.5\Delta M{\text{ [Da]}},~~({\text{where }}\Delta M=\Sigma M_{\text{products}}-\Sigma M_{\text{reactants}}).} Types of radioactive decay include gamma decay, beta decay (in which the decay energy is divided between the emitted electron and the neutrino emitted at the same time), and alpha decay. The decay energy corresponds to the mass difference Δm between the parent atom and the daughter atom and particles; it equals the energy E of the emitted radiation. If A is the radioactive activity, i.e. the number of transforming atoms per time, and M the molar mass, then the radiation power P is: P = Δ m ( A M ) . {\displaystyle P=\Delta {m}\left({\frac {A}{M}}\right).} or P = E ( A M ) . {\displaystyle P=E\left({\frac {A}{M}}\right).} or P = Q A . {\displaystyle P=QA.} Example: 60Co decays into 60Ni. The mass difference Δm is 0.003 u. The radiated energy is approximately 2.8 MeV. The molar weight is 59.93. The half-life T of 5.27 years corresponds to the activity A = N [ ln(2) / T ], where N is the number of atoms per mol, and T is the half-life. Taking care of the units, the radiation power for 60Co is 17.9 W/g. Radiation power in W/g for several isotopes: 60Co: 17.9; 238Pu: 0.57; 137Cs: 0.6; 241Am: 0.1; 210Po: 140 (T = 136 d); 90Sr: 0.9; 226Ra: 0.02. For use in radioisotope thermoelectric generators (RTGs), high decay energy combined with a long half-life is desirable. To reduce the cost and weight of radiation shielding, sources that do not emit strong gamma radiation are preferred. This list gives an indication of why, despite its enormous cost, 238Pu with its roughly eighty-year half-life and low gamma emissions has become the RTG nuclide of choice. 90Sr performs worse than 238Pu on almost all measures, being shorter lived, a beta emitter rather than an easily shielded alpha emitter and releasing significant gamma radiation when its daughter nuclide 90Y decays, but as it is a high-yield product of nuclear fission and easy to chemically extract from other fission products, strontium-titanate-based RTGs were in widespread use for remote locations during much of the 20th century. Cobalt-60, while widely used for purposes such as food irradiation, is not a practicable RTG isotope as most of its decay energy is released by gamma rays, requiring substantial shielding. Furthermore, its five-year half-life is too short for many applications. == See also == Q value (nuclear science) == References == Radioactivity Radionuclides Radiation by Joseph Magill and Jean Galy, Springer Verlag, 2005
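The cobalt-60 example above can be reproduced numerically. The following sketch (illustrative; it assumes rounded values of Q ≈ 2.8 MeV for 60Co and Q ≈ 5.4 MeV, T ≈ 138 d for 210Po) computes the radiated power per gram as P = QA.

```python
# Illustrative sketch reproducing the cobalt-60 example above with rounded constants.
import math

N_A   = 6.022e23          # Avogadro constant [1/mol]
MeV_J = 1.602e-13         # 1 MeV in joules

def specific_power(Q_MeV, half_life_s, molar_mass_g):
    """Radiated power per gram, P = Q * A, with A the activity of one gram."""
    decay_const = math.log(2) / half_life_s          # lambda [1/s]
    atoms_per_gram = N_A / molar_mass_g
    activity = decay_const * atoms_per_gram          # [Bq/g]
    return Q_MeV * MeV_J * activity                  # [W/g]

year = 3.156e7                                       # seconds per year
print(f"Co-60:  {specific_power(2.8, 5.27 * year, 59.93):.1f} W/g")   # ~18 W/g (17.9 quoted above)
print(f"Po-210: {specific_power(5.4, 138 * 86400, 210.0):.0f} W/g")   # ~140 W/g, as in the list above
```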
Wikipedia/Decay_energy
London dispersion forces (LDF, also known as dispersion forces, London forces, instantaneous dipole–induced dipole forces, fluctuating induced dipole bonds or loosely as van der Waals forces) are a type of intermolecular force acting between atoms and molecules that are normally electrically symmetric; that is, the electrons are symmetrically distributed with respect to the nucleus. They are part of the van der Waals forces. The LDF is named after the German physicist Fritz London. They are the weakest of the intermolecular forces. == Introduction == The electron distribution around an atom or molecule undergoes fluctuations in time. These fluctuations create instantaneous electric fields which are felt by other nearby atoms and molecules, which in turn adjust the spatial distribution of their own electrons. The net effect is that the fluctuations in electron positions in one atom induce a corresponding redistribution of electrons in other atoms, such that the electron motions become correlated. While the detailed theory requires a quantum-mechanical explanation (see quantum mechanical theory of dispersion forces), the effect is frequently described as the formation of instantaneous dipoles that (when separated by vacuum) attract each other. The magnitude of the London dispersion force is frequently described in terms of a single parameter called the Hamaker constant, typically symbolized A {\displaystyle A} . For atoms that are located closer together than the wavelength of light, the interaction is essentially instantaneous and is described in terms of a "non-retarded" Hamaker constant. For entities that are farther apart, the finite time required for the fluctuation at one atom to be felt at a second atom ("retardation") requires use of a "retarded" Hamaker constant. While the London dispersion force between individual atoms and molecules is quite weak and decreases quickly with separation R {\displaystyle R} like 1 R 6 {\displaystyle {\frac {1}{R^{6}}}} , in condensed matter (liquids and solids), the effect is cumulative over the volume of materials, or within and between organic molecules, such that London dispersion forces can be quite strong in bulk solid and liquids and decay much more slowly with distance. For example, the total force per unit area between two bulk solids decreases by 1 R 3 {\displaystyle {\frac {1}{R^{3}}}} where R {\displaystyle R} is the separation between them. The effects of London dispersion forces are most obvious in systems that are very non-polar (e.g., that lack ionic bonds), such as hydrocarbons and highly symmetric molecules like bromine (Br2, a liquid at room temperature) or iodine (I2, a solid at room temperature). In hydrocarbons and waxes, the dispersion forces are sufficient to cause condensation from the gas phase into the liquid or solid phase. Sublimation heats of e.g. hydrocarbon crystals reflect the dispersion interaction. Liquification of oxygen and nitrogen gases into liquid phases is also dominated by attractive London dispersion forces. When atoms/molecules are separated by a third medium (rather than vacuum), the situation becomes more complex. In aqueous solutions, the effects of dispersion forces between atoms or molecules are frequently less pronounced due to competition with polarizable solvent molecules. That is, the instantaneous fluctuations in one atom or molecule are felt both by the solvent (water) and by other molecules. Larger and heavier atoms and molecules exhibit stronger dispersion forces than smaller and lighter ones. 
This is due to the increased polarizability of molecules with larger, more dispersed electron clouds. The polarizability is a measure of how easily electrons can be redistributed; a large polarizability implies that the electrons are more easily redistributed. This trend is exemplified by the halogens (from smallest to largest: F2, Cl2, Br2, I2). The same increase of dispersive attraction occurs within and between organic molecules in the order RF, RCl, RBr, RI (from smallest to largest) or with other more polarizable heteroatoms. Fluorine and chlorine are gases at room temperature, bromine is a liquid, and iodine is a solid. The London forces are thought to arise from the motion of electrons. == Quantum mechanical theory == The first explanation of the attraction between noble gas atoms was given by Fritz London in 1930. He used a quantum-mechanical theory based on second-order perturbation theory. The perturbation is because of the Coulomb interaction between the electrons and nuclei of the two moieties (atoms or molecules). The second-order perturbation expression of the interaction energy contains a sum over states. The states appearing in this sum are simple products of the stimulated electronic states of the monomers. Thus, no intermolecular antisymmetrization of the electronic states is included, and the Pauli exclusion principle is only partially satisfied. London wrote a Taylor series expansion of the perturbation in 1 R {\displaystyle {\frac {1}{R}}} , where R {\displaystyle R} is the distance between the nuclear centers of mass of the moieties. This expansion is known as the multipole expansion because the terms in this series can be regarded as energies of two interacting multipoles, one on each monomer. Substitution of the multipole-expanded form of V into the second-order energy yields an expression that resembles an expression describing the interaction between instantaneous multipoles (see the qualitative description above). Additionally, an approximation, named after Albrecht Unsöld, must be introduced in order to obtain a description of London dispersion in terms of polarizability volumes, α ′ {\displaystyle \alpha '} , and ionization energies, I {\displaystyle I} , (ancient term: ionization potentials). In this manner, the following approximation is obtained for the dispersion interaction E A B d i s p {\displaystyle E_{AB}^{\rm {disp}}} between two atoms A {\displaystyle A} and B {\displaystyle B} . Here α A ′ {\displaystyle \alpha '_{A}} and α B ′ {\displaystyle \alpha '_{B}} are the polarizability volumes of the respective atoms. The quantities I A {\displaystyle I_{A}} and I B {\displaystyle I_{B}} are the first ionization energies of the atoms, and R {\displaystyle R} is the intermolecular distance. E A B d i s p ≈ − 3 2 I A I B I A + I B α A ′ α B ′ R 6 {\displaystyle E_{AB}^{\rm {disp}}\approx -{3 \over 2}{I_{A}I_{B} \over I_{A}+I_{B}}{\alpha '_{A}\alpha '_{B} \over {R^{6}}}} Note that this final London equation does not contain instantaneous dipoles (see molecular dipoles). The "explanation" of the dispersion force as the interaction between two such dipoles was invented after London arrived at the proper quantum mechanical theory. The authoritative work contains a criticism of the instantaneous dipole model and a modern and thorough exposition of the theory of intermolecular forces. The London theory has much similarity to the quantum mechanical theory of light dispersion, which is why London coined the phrase "dispersion effect". 
In physics, the term "dispersion" describes the variation of a quantity with frequency; in the case of London dispersion, the relevant frequency is that of the fluctuations of the electrons. == Relative magnitude == Dispersion forces are usually the dominant of the three van der Waals contributions (orientation, induction, dispersion) to the interaction between atoms and molecules, with the exception of molecules that are small and highly polar, such as water. Estimates of the contribution of dispersion to the total intermolecular interaction energy have been given for representative pairs of molecules. == See also == Dispersion (chemistry) van der Waals force van der Waals molecule Non-covalent interactions == References ==
Wikipedia/London_dispersion_force
In classical mechanics, a central force on an object is a force that is directed towards or away from a point called the center of force: {\displaystyle \mathbf {F} (\mathbf {r} )=F(\mathbf {r} ){\hat {\mathbf {r} }}} where F is the force vector, F is a scalar-valued force function (whose absolute value gives the magnitude of the force, and which is positive if the force is outward and negative if the force is inward), r is the position vector, ‖r‖ is its length, and r̂ = r/‖r‖ is the corresponding unit vector.: 93  Not all central force fields are conservative or spherically symmetric. However, a central force is conservative if and only if it is spherically symmetric or rotationally invariant.: 133–38  Examples of spherically symmetric central forces include the Coulomb force and the force of gravity. == Properties == Central forces that are conservative can always be expressed as the negative gradient of a potential energy: {\displaystyle \mathbf {F} (\mathbf {r} )=-\mathbf {\nabla } V(\mathbf {r} )\;{\text{, where }}V(\mathbf {r} )=\int _{|\mathbf {r} |}^{+\infty }F(r)\,\mathrm {d} r} (the upper bound of integration is arbitrary, as the potential is defined up to an additive constant). In a conservative field, the total mechanical energy (kinetic and potential) is conserved: {\displaystyle E={\tfrac {1}{2}}m|\mathbf {\dot {r}} |^{2}+{\tfrac {1}{2}}I|{\boldsymbol {\omega }}|^{2}+V(\mathbf {r} )={\text{constant}}} (where ṙ denotes the derivative of r with respect to time, that is, the velocity, I denotes the moment of inertia of the body and ω denotes its angular velocity), and in a central force field, so is the angular momentum: {\displaystyle \mathbf {L} =\mathbf {r} \times m\mathbf {\dot {r}} ={\text{constant}}} because the torque exerted by the force is zero. As a consequence, the body moves on the plane perpendicular to the angular momentum vector and containing the origin and, since the rate at which the position vector sweeps out area is proportional to the conserved angular momentum, it obeys Kepler's second law. (If the angular momentum is zero, the body moves along the line joining it with the origin.) However, the first and third laws depend on the inverse-square nature of Newton's law of universal gravitation and do not hold in general for other central forces. As a consequence of being conservative, these specific central force fields are irrotational, that is, their curl is zero, except at the origin: {\displaystyle \nabla \times \mathbf {F} (\mathbf {r} )=\mathbf {0} .} == Examples == Gravitational force and Coulomb force are two familiar examples with F(r) being proportional to 1/r2 only. An object in such a force field with negative F(r) (corresponding to an attractive force) obeys Kepler's laws of planetary motion. The force field of a spatial harmonic oscillator is central with F(r) proportional to r only and negative. By Bertrand's theorem, these two, F(r) = −k/r2 and F(r) = −kr, are the only possible central force fields in which all bounded orbits are stable closed orbits. However, there exist other force fields which have some closed orbits.
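The conservation statements above are easy to verify numerically. The following sketch is a minimal illustration rather than anything from the article: it integrates planar motion under an attractive inverse-square central force with k = m = 1 (the integrator, step size and initial conditions are arbitrary choices) and checks that the total energy and the angular momentum stay constant along the orbit.

```python
import numpy as np

# Minimal check of the conservation laws stated above for a central force
# F(r) = -k r / |r|^3 (an attractive inverse-square force, k and m set to 1).
k, m = 1.0, 1.0

def accel(r):
    return -k * r / np.linalg.norm(r)**3 / m

def energy(r, v):
    return 0.5 * m * np.dot(v, v) - k / np.linalg.norm(r)

def ang_mom(r, v):
    # z-component of r x (m v) for motion in the plane
    return m * (r[0] * v[1] - r[1] * v[0])

r = np.array([1.0, 0.0])     # initial position
v = np.array([0.0, 0.9])     # initial velocity (bound, elliptical orbit)
dt, steps = 1e-3, 200_000

e0, l0 = energy(r, v), ang_mom(r, v)
a = accel(r)
for _ in range(steps):       # velocity-Verlet integration
    r = r + v * dt + 0.5 * a * dt**2
    a_new = accel(r)
    v = v + 0.5 * (a + a_new) * dt
    a = a_new

print(f"relative energy drift          : {abs(energy(r, v) - e0) / abs(e0):.2e}")
print(f"relative angular-momentum drift: {abs(ang_mom(r, v) - l0) / abs(l0):.2e}")
```

Because the force is central, the torque r × F vanishes, so the angular-momentum drift reported at the end is limited to the integration error; repeating the run with a non-central force would show a genuine secular drift.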
== See also == Classical central-force problem Particle in a spherically symmetric potential == References ==
Wikipedia/Central_forces
The atomic nucleus is the small, dense region consisting of protons and neutrons at the center of an atom, discovered in 1911 by Ernest Rutherford at the University of Manchester based on the 1909 Geiger–Marsden gold foil experiment. After the discovery of the neutron in 1932, models for a nucleus composed of protons and neutrons were quickly developed by Dmitri Ivanenko and Werner Heisenberg. An atom is composed of a positively charged nucleus, with a cloud of negatively charged electrons surrounding it, bound together by electrostatic force. Almost all of the mass of an atom is located in the nucleus, with a very small contribution from the electron cloud. Protons and neutrons are bound together to form a nucleus by the nuclear force. The diameter of the nucleus is in the range of 1.70 fm (1.70×10−15 m) for hydrogen (the diameter of a single proton) to about 11.7 fm for uranium. These dimensions are much smaller than the diameter of the atom itself (nucleus + electron cloud), by a factor of about 26,634 (uranium atomic radius is about 156 pm (156×10−12 m)) to about 60,250 (hydrogen atomic radius is about 52.92 pm). The branch of physics involved with the study and understanding of the atomic nucleus, including its composition and the forces that bind it together, is called nuclear physics. == History == The nucleus was discovered in 1911, as a result of Ernest Rutherford's efforts to test Thomson's "plum pudding model" of the atom. The electron had already been discovered by J. J. Thomson. Knowing that atoms are electrically neutral, J. J. Thomson postulated that there must be a positive charge as well. In his plum pudding model, Thomson suggested that an atom consisted of negative electrons randomly scattered within a sphere of positive charge. Ernest Rutherford later devised an experiment with his research partner Hans Geiger and with help of Ernest Marsden, that involved the deflection of alpha particles (helium nuclei) directed at a thin sheet of metal foil. He reasoned that if J. J. Thomson's model were correct, the positively charged alpha particles would easily pass through the foil with very little deviation in their paths, as the foil should act as electrically neutral if the negative and positive charges are so intimately mixed as to make it appear neutral. To his surprise, many of the particles were deflected at very large angles. Because the mass of an alpha particle is about 8000 times that of an electron, it became apparent that a very strong force must be present if it could deflect the massive and fast moving alpha particles. He realized that the plum pudding model could not be accurate and that the deflections of the alpha particles could only be explained if the positive and negative charges were separated from each other and that the mass of the atom was a concentrated point of positive charge. This justified the idea of a nuclear atom with a dense center of positive charge and mass. === Etymology === The term nucleus is from the Latin word nucleus, a diminutive of nux ('nut'), meaning 'the kernel' (i.e., the 'small nut') inside a watery type of fruit (like a peach). In 1844, Michael Faraday used the term to refer to the "central point of an atom". The modern atomic meaning was proposed by Ernest Rutherford in 1912. The adoption of the term "nucleus" to atomic theory, however, was not immediate. In 1916, for example, Gilbert N. Lewis stated, in his famous article The Atom and the Molecule, that "the atom is composed of the kernel and an outer atom or shell." 
Similarly, the term kern meaning kernel is used for nucleus in German and Dutch. == Principles == The nucleus of an atom consists of neutrons and protons, which in turn are the manifestation of more elementary particles, called quarks, that are held in association by the nuclear strong force in certain stable combinations of hadrons, called baryons. The nuclear strong force extends far enough from each baryon so as to bind the neutrons and protons together against the repulsive electrical force between the positively charged protons. The nuclear strong force has a very short range, and essentially drops to zero just beyond the edge of the nucleus. The collective action of the positively charged nucleus is to hold the electrically negative charged electrons in their orbits about the nucleus. The collection of negatively charged electrons orbiting the nucleus display an affinity for certain configurations and numbers of electrons that make their orbits stable. Which chemical element an atom represents is determined by the number of protons in the nucleus; the neutral atom will have an equal number of electrons orbiting that nucleus. Individual chemical elements can create more stable electron configurations by combining to share their electrons. It is that sharing of electrons to create stable electronic orbits about the nuclei that appears to us as the chemistry of our macro world. Protons define the entire charge of a nucleus, and hence its chemical identity. Neutrons are electrically neutral, but contribute to the mass of a nucleus to nearly the same extent as the protons. Neutrons can explain the phenomenon of isotopes (same atomic number with different atomic mass). The main role of neutrons is to reduce electrostatic repulsion inside the nucleus. == Composition and shape == Protons and neutrons are fermions, with different values of the strong isospin quantum number, so two protons and two neutrons can share the same space wave function since they are not identical quantum entities. They are sometimes viewed as two different quantum states of the same particle, the nucleon. Two fermions, such as two protons, or two neutrons, or a proton + neutron (the deuteron) can exhibit bosonic behavior when they become loosely bound in pairs, which have integer spin. In the rare case of a hypernucleus, a third baryon called a hyperon, containing one or more strange quarks and/or other unusual quark(s), can also share the wave function. However, this type of nucleus is extremely unstable and not found on Earth except in high-energy physics experiments. The neutron has a positively charged core of radius ≈ 0.3 fm surrounded by a compensating negative charge of radius between 0.3 fm and 2 fm. The proton has an approximately exponentially decaying positive charge distribution with a mean square radius of about 0.8 fm. The shape of the atomic nucleus can be spherical, rugby ball-shaped (prolate deformation), discus-shaped (oblate deformation), triaxial (a combination of oblate and prolate deformation) or pear-shaped. == Forces == Nuclei are bound together by the residual strong force (nuclear force). The residual strong force is a minor residuum of the strong interaction which binds quarks together to form protons and neutrons. 
This force is much weaker between neutrons and protons because it is mostly neutralized within them, in the same way that electromagnetic forces between neutral atoms (such as van der Waals forces that act between two inert gas atoms) are much weaker than the electromagnetic forces that hold the parts of the atoms together internally (for example, the forces that hold the electrons in an inert gas atom bound to its nucleus). The nuclear force is highly attractive at the distance of typical nucleon separation, and this overwhelms the repulsion between protons due to the electromagnetic force, thus allowing nuclei to exist. However, the residual strong force has a limited range because it decays quickly with distance (see Yukawa potential); thus only nuclei smaller than a certain size can be completely stable. The largest known completely stable nucleus (i.e. stable to alpha, beta, and gamma decay) is lead-208 which contains a total of 208 nucleons (126 neutrons and 82 protons). Nuclei larger than this maximum are unstable and tend to be increasingly short-lived with larger numbers of nucleons. However, bismuth-209 is also stable to beta decay and has the longest half-life to alpha decay of any known isotope, estimated at a billion times longer than the age of the universe. The residual strong force is effective over a very short range (usually only a few femtometres (fm); roughly one or two nucleon diameters) and causes an attraction between any pair of nucleons. For example, between a proton and a neutron to form a deuteron [NP], and also between protons and protons, and neutrons and neutrons. == Halo nuclei and nuclear force range limits == The effective absolute limit of the range of the nuclear force (also known as residual strong force) is represented by halo nuclei such as lithium-11 or boron-14, in which dineutrons, or other collections of neutrons, orbit at distances of about 10 fm (roughly similar to the 8 fm radius of the nucleus of uranium-238). These nuclei are not maximally dense. Halo nuclei form at the extreme edges of the chart of the nuclides—the neutron drip line and proton drip line—and are all unstable with short half-lives, measured in milliseconds; for example, lithium-11 has a half-life of 8.8 ms. Halos in effect represent an excited state with nucleons in an outer quantum shell which has unfilled energy levels "below" it (both in terms of radius and energy). The halo may be made of either neutrons [NN, NNN] or protons [PP, PPP]. Nuclei which have a single neutron halo include 11Be and 19C. A two-neutron halo is exhibited by 6He, 11Li, 17B, 19B and 22C. Two-neutron halo nuclei break into three fragments, never two, and are called Borromean nuclei because of this behavior (referring to a system of three interlocked rings in which breaking any ring frees both of the others). 8He and 14Be both exhibit a four-neutron halo. Nuclei which have a proton halo include 8B and 26P. A two-proton halo is exhibited by 17Ne and 27S. Proton halos are expected to be more rare and unstable than the neutron examples, because of the repulsive electromagnetic forces of the halo proton(s). == Nuclear models == Although the standard model of physics is widely believed to completely describe the composition and behavior of the nucleus, generating predictions from theory is much more difficult than for most other areas of particle physics. This is due to two reasons: In principle, the physics within a nucleus can be derived entirely from quantum chromodynamics (QCD). 
In practice however, current computational and mathematical approaches for solving QCD in low-energy systems such as the nuclei are extremely limited. This is due to the phase transition that occurs between high-energy quark matter and low-energy hadronic matter, which renders perturbative techniques unusable, making it difficult to construct an accurate QCD-derived model of the forces between nucleons. Current approaches are limited to either phenomenological models such as the Argonne v18 potential or chiral effective field theory. Even if the nuclear force is well constrained, a significant amount of computational power is required to accurately compute the properties of nuclei ab initio. Developments in many-body theory have made this possible for many low mass and relatively stable nuclei, but further improvements in both computational power and mathematical approaches are required before heavy nuclei or highly unstable nuclei can be tackled. Historically, experiments have been compared to relatively crude models that are necessarily imperfect. None of these models can completely explain experimental data on nuclear structure. The nuclear radius (R) is considered to be one of the basic quantities that any model must predict. For stable nuclei (not halo nuclei or other unstable distorted nuclei) the nuclear radius is roughly proportional to the cube root of the mass number (A) of the nucleus, and particularly in nuclei containing many nucleons, as they arrange in more spherical configurations: The stable nucleus has approximately a constant density and therefore the nuclear radius R can be approximated by the following formula, R = r 0 A 1 / 3 {\displaystyle R=r_{0}A^{1/3}\,} where A = Atomic mass number (the number of protons Z, plus the number of neutrons N) and r0 = 1.25 fm = 1.25 × 10−15 m. In this equation, the "constant" r0 varies by 0.2 fm, depending on the nucleus in question, but this is less than 20% change from a constant. In other words, packing protons and neutrons in the nucleus gives approximately the same total size result as packing hard spheres of a constant size (like marbles) into a tight spherical or almost spherical bag (some stable nuclei are not quite spherical, but are known to be prolate). Models of nuclear structure include: === Cluster model === The cluster model describes the nucleus as a molecule-like collection of proton-neutron groups (e.g., alpha particles) with one or more valence neutrons occupying molecular orbitals. === Liquid drop model === Early models of the nucleus viewed the nucleus as a rotating liquid drop. In this model, the trade-off of long-range electromagnetic forces and relatively short-range nuclear forces, together cause behavior which resembled surface tension forces in liquid drops of different sizes. This formula is successful at explaining many important phenomena of nuclei, such as their changing amounts of binding energy as their size and composition changes (see semi-empirical mass formula), but it does not explain the special stability which occurs when nuclei have special "magic numbers" of protons or neutrons. The terms in the semi-empirical mass formula, which can be used to approximate the binding energy of many nuclei, are considered as the sum of five types of energies (see below). Then the picture of a nucleus as a drop of incompressible liquid roughly accounts for the observed variation of binding energy of the nucleus: Volume energy. 
When an assembly of nucleons of the same size is packed together into the smallest volume, each interior nucleon has a certain number of other nucleons in contact with it. So, this nuclear energy is proportional to the volume. Surface energy. A nucleon at the surface of a nucleus interacts with fewer other nucleons than one in the interior of the nucleus and hence its binding energy is less. This surface energy term takes that into account and is therefore negative and is proportional to the surface area. Coulomb energy. The electric repulsion between each pair of protons in a nucleus contributes toward decreasing its binding energy. Asymmetry energy (also called Pauli Energy). An energy associated with the Pauli exclusion principle. Were it not for the Coulomb energy, the most stable form of nuclear matter would have the same number of neutrons as protons, since unequal numbers of neutrons and protons imply filling higher energy levels for one type of particle, while leaving lower energy levels vacant for the other type. Pairing energy. An energy which is a correction term that arises from the tendency of proton pairs and neutron pairs to occur. An even number of particles is more stable than an odd number. === Shell models and other quantum models === A number of models for the nucleus have also been proposed in which nucleons occupy orbitals, much like the atomic orbitals in atomic physics theory. These wave models imagine nucleons to be either sizeless point particles in potential wells, or else probability waves as in the "optical model", frictionlessly orbiting at high speed in potential wells. In the above models, the nucleons may occupy orbitals in pairs, due to being fermions, which allows explanation of even/odd Z and N effects well known from experiments. The exact nature and capacity of nuclear shells differs from those of electrons in atomic orbitals, primarily because the potential well in which the nucleons move (especially in larger nuclei) is quite different from the central electromagnetic potential well which binds electrons in atoms. Some resemblance to atomic orbital models may be seen in a small atomic nucleus like that of helium-4, in which the two protons and two neutrons separately occupy 1s orbitals analogous to the 1s orbital for the two electrons in the helium atom, and achieve unusual stability for the same reason. Nuclei with 5 nucleons are all extremely unstable and short-lived, yet, helium-3, with 3 nucleons, is very stable even with lack of a closed 1s orbital shell. Another nucleus with 3 nucleons, the triton hydrogen-3 is unstable and will decay into helium-3 when isolated. Weak nuclear stability with 2 nucleons {NP} in the 1s orbital is found in the deuteron hydrogen-2, with only one nucleon in each of the proton and neutron potential wells. While each nucleon is a fermion, the {NP} deuteron is a boson and thus does not follow Pauli Exclusion for close packing within shells. Lithium-6 with 6 nucleons is highly stable without a closed second 1p shell orbital. For light nuclei with total nucleon numbers 1 to 6 only those with 5 do not show some evidence of stability. Observations of beta-stability of light nuclei outside closed shells indicate that nuclear stability is much more complex than simple closure of shell orbitals with magic numbers of protons and neutrons. 
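The five liquid-drop terms listed above, together with the radius formula R = r0A1/3 given earlier, can be combined into the semi-empirical mass formula. The sketch below is a minimal illustration; the coefficients (in MeV) are one commonly quoted textbook fit and are assumed values, not figures taken from this article.

```python
import math

# Liquid-drop (semi-empirical mass formula) estimate of nuclear binding energy,
# built from the five terms listed above.  Coefficients are one common set of
# fitted values in MeV (assumed; fits differ slightly between sources).
A_V, A_S, A_C, A_A, A_P = 15.75, 17.8, 0.711, 23.7, 11.18
R0 = 1.25  # fm, from R = r0 * A^(1/3)

def nuclear_radius(A):
    """Approximate nuclear radius in fm."""
    return R0 * A ** (1.0 / 3.0)

def binding_energy(Z, N):
    """Liquid-drop binding energy in MeV for Z protons and N neutrons."""
    A = Z + N
    volume    = A_V * A
    surface   = -A_S * A ** (2.0 / 3.0)
    coulomb   = -A_C * Z * (Z - 1) / A ** (1.0 / 3.0)
    asymmetry = -A_A * (N - Z) ** 2 / A
    if Z % 2 == 0 and N % 2 == 0:       # even-even: extra binding from pairing
        pairing = +A_P / math.sqrt(A)
    elif Z % 2 == 1 and N % 2 == 1:     # odd-odd: pairing penalty
        pairing = -A_P / math.sqrt(A)
    else:                               # odd A: no pairing correction
        pairing = 0.0
    return volume + surface + coulomb + asymmetry + pairing

for name, Z, N in [("16O", 8, 8), ("56Fe", 26, 30), ("208Pb", 82, 126)]:
    A = Z + N
    B = binding_energy(Z, N)
    print(f"{name:>6}: R ≈ {nuclear_radius(A):5.2f} fm, "
          f"B ≈ {B:7.1f} MeV, B/A ≈ {B / A:5.2f} MeV")
```

With these coefficients the estimate reproduces the observed binding energy per nucleon of roughly 8 MeV for medium and heavy nuclei, though, as noted above, it says nothing about the extra stability at the magic numbers, which is the province of the shell model.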
For larger nuclei, the shells occupied by nucleons begin to differ significantly from electron shells, but nevertheless, present nuclear theory does predict the magic numbers of filled nuclear shells for both protons and neutrons. The closure of the stable shells predicts unusually stable configurations, analogous to the noble group of nearly-inert gases in chemistry. An example is the stability of the closed shell of 50 protons, which allows tin to have 10 stable isotopes, more than any other element. Similarly, the distance from shell-closure explains the unusual instability of isotopes which have far from stable numbers of these particles, such as the radioactive elements 43 (technetium) and 61 (promethium), each of which is preceded and followed by 17 or more stable elements. There are however problems with the shell model when an attempt is made to account for nuclear properties well away from closed shells. This has led to complex post hoc distortions of the shape of the potential well to fit experimental data, but the question remains whether these mathematical manipulations actually correspond to the spatial deformations in real nuclei. Problems with the shell model have led some to propose realistic two-body and three-body nuclear force effects involving nucleon clusters and then build the nucleus on this basis. Three such cluster models are the 1936 Resonating Group Structure model of John Wheeler, Close-Packed Spheron Model of Linus Pauling and the 2D Ising Model of MacGregor. == See also == == Notes == == References == == External links == The Nucleus – a chapter from an online textbook Archived December 14, 2010, at the Wayback Machine The LIVEChart of Nuclides – IAEA in Java or HTML Article on the "nuclear shell model", giving nuclear shell filling for the various elements. Accessed September 16, 2009. Timeline: Subatomic Concepts, Nuclear Science & Technology Archived February 5, 2021, at the Wayback Machine.
Wikipedia/Nuclear_model
The interacting boson model (IBM) is a model in nuclear physics in which nucleons (protons or neutrons) pair up, essentially acting as a single particle with boson properties, with integral spin of either 2 (d-boson) or 0 (s-boson). They correspond to a quintuplet and singlet, i.e. 6 states. It is sometimes known as the Interacting boson approximation (IBA).: 7  The IBM1/IBM-I model treats both types of nucleons the same and considers only pairs of nucleons coupled to total angular momentum 0 and 2, called respectively, s and d bosons. The IBM2/IBM-II model treats protons and neutrons separately. Both models are restricted to nuclei with even numbers of protons and neutrons.: 9  The model can be used to predict vibrational and rotational modes of non-spherical nuclei. == History == This model was invented by Akito Arima and Francesco Iachello in 1974.: 6  while working at the Kernfysisch Versneller Instituut(KVI) in Groningen, Netherlands. KVI is now property of Universitair Medisch Centrum Groningen (https://umcgresearch.org/). == See also == Liquid drop model Nuclear shell model == References == Arima, A.; Iachello, F. (1975-10-20). "Collective Nuclear States as Representations of a SU(6) Group". Physical Review Letters. 35 (16). American Physical Society (APS): 1069–1072. Bibcode:1975PhRvL..35.1069A. doi:10.1103/physrevlett.35.1069. ISSN 0031-9007. Iachello, F; Arima, A. (1987). The Interacting Boson Model. Cambridge: Cambridge University Press. ISBN 978-0-511-89551-7. OCLC 776970502. Arima, A; Iachello, F (1976). "Interacting boson model of collective states I. The vibrational limit". Annals of Physics. 99 (2). Elsevier BV: 253–317. Bibcode:1976AnPhy..99..253A. doi:10.1016/0003-4916(76)90097-x. ISSN 0003-4916. Arima, A; Iachello, F (1978). "Interacting boson model of collective nuclear states II. The rotational limit". Annals of Physics. 111 (1). Elsevier BV: 201–238. Bibcode:1978AnPhy.111..201A. doi:10.1016/0003-4916(78)90228-2. ISSN 0003-4916. Scholten, O; Iachello, F; Arima, A (1978). "Interacting boson model of collective nuclear states III. The transition from SU(5) to SU(3)". Annals of Physics. 115 (2). Elsevier BV: 325–366. Bibcode:1978AnPhy.115..325S. doi:10.1016/0003-4916(78)90159-8. ISSN 0003-4916. Arima, A; Iachello, F (1979). "Interacting boson model of collective nuclear states IV. The O(6) limit". Annals of Physics. 123 (2). Elsevier BV: 468–492. Bibcode:1979AnPhy.123..468A. doi:10.1016/0003-4916(79)90347-6. ISSN 0003-4916. Arima, A; Iachello, F (1981). "The Interacting Boson Model". Annual Review of Nuclear and Particle Science. 31 (1). Annual Reviews: 75–105. Bibcode:1981ARNPS..31...75A. doi:10.1146/annurev.ns.31.120181.000451. ISSN 0163-8998. Talmi, Igal (1993). Simple models of complex nuclei : the shell model and interacting boson model. Chur, Switzerland Langhorne, Pa., U.S.A: Harwood Academic Publishers. ISBN 978-3-7186-0550-7. OCLC 25706648. == Further reading == Evolution of shapes in even–even nuclei using the standard interacting boson model
Wikipedia/Interacting_boson_model
Internal conversion is an atomic decay process where an excited nucleus interacts electromagnetically with one of the orbital electrons of an atom. This causes the electron to be emitted (ejected) from the atom. Thus, in internal conversion (often abbreviated IC), a high-energy electron is emitted from the excited atom, but not from the nucleus. For this reason, the high-speed electrons resulting from internal conversion are not called beta particles, since the latter come from beta decay, where they are newly created in the nuclear decay process. IC is possible whenever gamma decay is possible, except if the atom is fully ionized. In IC, the atomic number does not change, and thus there is no transmutation of one element to another. Also, neutrinos and the weak force are not involved in IC. Since an electron is lost from the atom, a hole appears in an electron aura which is subsequently filled by other electrons that descend to the empty, yet lower energy level, and in the process emit characteristic X-ray(s), Auger electron(s), or both. The atom thus emits high-energy electrons and X-ray photons, none of which originate in that nucleus. The atom supplies the energy needed to eject the electron, which in turn causes the latter events and the other emissions. Since primary electrons from IC carry a fixed (large) part of the characteristic decay energy, they have a discrete energy spectrum, rather than the spread (continuous) spectrum characteristic of beta particles. Whereas the energy spectrum of beta particles plots as a broad hump, the energy spectrum of internally converted electrons plots as a single sharp peak (see example below). == Mechanism == In the quantum model of the electron, there is non-zero probability of finding the electron within the nucleus. In internal conversion, the wavefunction of an inner shell electron (usually an s electron) penetrates the nucleus. When this happens, the electron may couple to an excited energy state of the nucleus and take the energy of the nuclear transition directly, without an intermediate gamma ray being first produced. The kinetic energy of the emitted electron is equal to the transition energy in the nucleus, minus the binding energy of the electron to the atom. Most IC electrons come from the K shell (the 1s state), as these two electrons have the highest probability of being within the nucleus. However, the s states in the L, M, and N shells (i.e., the 2s, 3s, and 4s states) are also able to couple to the nuclear fields and cause IC electron ejections from those shells (called L or M or N internal conversion). Ratios of K-shell to other L, M, or N shell internal conversion probabilities for various nuclides have been prepared. An amount of energy exceeding the atomic binding energy of the s electron must be supplied to that electron in order to eject it from the atom to result in IC; that is to say, internal conversion cannot happen if the decay energy of the nucleus is less than a certain threshold. Though s electrons are more likely for IC due to their superior nuclear penetration compared to electrons with greater orbital angular momentum, spectral studies show that p electrons (from shells L and higher) are occasionally ejected in the IC process. 
There are also a few radionuclides in which the decay energy is not sufficient to convert (eject) a 1s (K shell) electron, and these nuclides, to decay by internal conversion, must decay by ejecting electrons from the L or M or N shells (i.e., by ejecting 2s, 3s, or 4s electrons) as these binding energies are lower. After the IC electron is emitted, the atom is left with a vacancy in one of its electron shells, usually an inner one. This hole will be filled with an electron from one of the higher shells, which causes another outer electron to fill its place in turn, causing a cascade. Consequently, one or more characteristic X-rays or Auger electrons will be emitted as the remaining electrons in the atom cascade down to fill the vacancies. == Example: decay of 203Hg == The decay scheme on the left shows that 203Hg produces a continuous beta spectrum with maximum energy 214 keV, that leads to an excited state of the daughter nucleus 203Tl. This state decays very quickly (within 2.8×10−10 s) to the ground state of 203Tl, emitting a gamma quantum of 279 keV. The figure on the right shows the electron spectrum of 203Hg, measured by means of a magnetic spectrometer. It includes the continuous beta spectrum and K-, L-, and M-lines due to internal conversion. Since the binding energy of the K electrons in 203Tl is 85 keV, the K line has an energy of 279 − 85 = 194 keV. Due to lesser binding energies, the L- and M-lines have higher energies. Due to the finite energy resolution of the spectrometer, the "lines" have a Gaussian shape of finite width. == When the process is expected == Internal conversion is favored whenever the energy available for a gamma transition is small, and it is also the primary mode of de-excitation for 0+→0+ (i.e. E0) transitions. The 0+→0+ transitions occur where an excited nucleus has zero-spin and positive parity, and decays to a ground state which also has zero-spin and positive parity (such as all nuclides with even number of protons and neutrons). In such cases, de-excitation cannot take place by emission of a gamma ray, since this would violate conservation of angular momentum, hence other mechanisms like IC predominate. This also shows that internal conversion (contrary to its name) is not a two-step process where a gamma ray would be first emitted and then converted. The competition between IC and gamma decay is quantified in the form of the internal conversion coefficient which is defined as α = e / γ {\displaystyle \alpha =e/{\gamma }} where e {\displaystyle e} is the rate of conversion electrons and γ {\displaystyle \gamma } is the rate of gamma-ray emission observed from a decaying nucleus. For example, in the decay of the excited state at 35 keV of 125Te (which is produced by the decay of 125I), 7% of decays emit energy as a gamma ray, while 93% release energy as conversion electrons. Therefore, this excited state of 125Te has an IC coefficient of α = 93 / 7 = 13.3 {\displaystyle \alpha =93/7=13.3} . For increasing atomic number (Z) and decreasing gamma-ray energy, IC coefficients increase. For example, calculated IC coefficients for electric dipole (E1) transitions, for Z = 40, 60, and 80, are shown in the figure. The energy of the emitted gamma ray is a precise measure of the difference in energy between the excited states of the decaying nucleus. 
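Before turning to the general expression for conversion-electron energies, the two numerical relations already used above can be checked directly. The sketch below is a minimal illustration using only numbers quoted in this section: the 279 keV transition and 85 keV K-shell binding energy from the 203Hg example, and the 93%/7% branching of the 35 keV state of 125Te.

```python
# Conversion-electron energy for the 203Hg -> 203Tl decay described above:
# transition energy minus the binding energy of the ejected electron.
transition_keV = 279.0          # gamma transition energy quoted above
k_binding_keV = 85.0            # K-shell binding energy in 203Tl, quoted above

k_line = transition_keV - k_binding_keV
print(f"K conversion line: {k_line:.0f} keV")   # 194 keV, as in the spectrum above
# The L and M lines lie at higher electron energies because those shells are
# less tightly bound (a smaller binding energy is subtracted).

# Internal conversion coefficient alpha = e/gamma.  For the 35 keV state of
# 125Te quoted above, 93% of decays eject a conversion electron and 7% emit
# a gamma ray:
alpha = 93.0 / 7.0
print(f"alpha(125Te, 35 keV) ≈ {alpha:.1f}")    # ≈ 13.3
```

The printed K-line energy of 194 keV matches the value read off the measured spectrum described above.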
In the case of conversion electrons, the binding energy must also be taken into account: The energy of a conversion electron is given as E = ( E i − E f ) − E B {\displaystyle E=(E_{i}-E_{f})-E_{B}} , where E i {\displaystyle E_{i}} and E f {\displaystyle E_{f}} are the energies of the nucleus in its initial and final states, respectively, while E B {\displaystyle E_{B}} is the binding energy of the electron. == Similar processes == Nuclei with zero-spin and high excitation energies (more than about 1.022 MeV) also can't rid themselves of energy by (single) gamma emission due to the constraint imposed by conservation of momentum, but they do have enough decay energy to decay by pair production. In this type of decay, an electron and positron are both emitted from the atom at the same time, and conservation of angular momentum is solved by having these two product particles spin in opposite directions. IC should not be confused with the similar photoelectric effect. When a gamma ray emitted by the nucleus of an atom hits another atom, it may be absorbed producing a photoelectron of well-defined energy (this used to be called "external conversion"). In IC, however, the process happens within one atom, and without a real intermediate gamma ray. Just as an atom may produce an IC electron instead of a gamma ray if energy is available from within the nucleus, so an atom may produce an Auger electron instead of an X-ray if an electron is missing from one of the low-lying electron shells. (The first process can even precipitate the second one.) Like IC electrons, Auger electrons have a discrete energy, resulting in a sharp energy peak in the spectrum. Electron capture also involves an inner shell electron, which in this case is retained in the nucleus (changing the atomic number) and leaving the atom (not nucleus) in an excited state. The atom missing an inner electron can relax by a cascade of X-ray emissions as higher energy electrons in the atom fall to fill the vacancy left in the electron cloud by the captured electron. Such atoms also typically exhibit Auger electron emission. Electron capture, like beta decay, also typically results in excited atomic nuclei, which may then relax to a state of lowest nuclear energy by any of the methods permitted by spin constraints, including gamma decay and internal conversion decay. == See also == Internal conversion coefficient == References == == Further reading == Krane, Kenneth S. (1988). Introductory Nuclear Physics. J. Wiley & Sons. ISBN 0-471-80553-X. Bertulani, Carlos A. (2007). Nuclear Physics in a Nutshell. Princeton University Press. ISBN 978-0-691-12505-3. L'Annunziata, Michael F.; et al. (2003). Handbook of Radioactivity Analysis. Academic Press. ISBN 0-12-436603-1. R.W.Howell, Radiation spectra for Auger-electron emitting radionuclides: Report No. 2 of AAPM Nuclear Medicine Task Group No. 6, 1992, Medical Physics 19(6), 1371–1383 == External links == HyperPhysics
Wikipedia/Internal_conversion
In nuclear physics, atomic physics, and nuclear chemistry, the nuclear shell model utilizes the Pauli exclusion principle to model the structure of atomic nuclei in terms of energy levels. The first shell model was proposed by Dmitri Ivanenko (together with E. Gapon) in 1932. The model was developed in 1949 following independent work by several physicists, most notably Maria Goeppert Mayer and J. Hans D. Jensen, who received the 1963 Nobel Prize in Physics for their contributions to this model, and Eugene Wigner, who received the Nobel Prize alongside them for his earlier groundlaying work on the atomic nuclei. The nuclear shell model is partly analogous to the atomic shell model, which describes the arrangement of electrons in an atom, in that a filled shell results in better stability. When adding nucleons (protons and neutrons) to a nucleus, there are certain points where the binding energy of the next nucleon is significantly less than the last one. This observation that there are specific magic quantum numbers of nucleons (2, 8, 20, 28, 50, 82, and 126) that are more tightly bound than the following higher number is the origin of the shell model. The shells for protons and neutrons are independent of each other. Therefore, there can exist both "magic nuclei", in which one nucleon type or the other is at a magic number, and "doubly magic quantum nuclei", where both are. Due to variations in orbital filling, the upper magic numbers are 126 and, speculatively, 184 for neutrons, but only 114 for protons, playing a role in the search for the so-called island of stability. Some semi-magic numbers have been found, notably Z = 40, which gives the nuclear shell filling for the various elements; 16 may also be a magic number. To get these numbers, the nuclear shell model starts with an average potential with a shape somewhere between the square well and the harmonic oscillator. To this potential, a spin-orbit term is added. Even so, the total perturbation does not coincide with the experiment, and an empirical spin-orbit coupling must be added with at least two or three different values of its coupling constant, depending on the nuclei being studied. The magic numbers of nuclei, as well as other properties, can be arrived at by approximating the model with a three-dimensional harmonic oscillator plus a spin–orbit interaction. A more realistic but complicated potential is known as the Woods–Saxon potential. == Modified harmonic oscillator model == Consider a three-dimensional harmonic oscillator. This would give, for example, in the first three levels ("ℓ" is the angular momentum quantum number): Nuclei are built by adding protons and neutrons. These will always fill the lowest available level, with the first two protons filling level zero, the next six protons filling level one, and so on. As with electrons in the periodic table, protons in the outermost shell will be relatively loosely bound to the nucleus if there are only a few protons in that shell because they are farthest from the center of the nucleus. Therefore, nuclei with a full outer proton shell will have a higher nuclear binding energy than other nuclei with a similar total number of protons. The same is true for neutrons. This means that the magic numbers are expected to be those in which all occupied shells are full. In accordance with the experiment, we get 2 (level 0 full) and 8 (levels 0 and 1 full) for the first two numbers. However, the full set of magic numbers does not turn out correctly. 
These can be computed as follows: In a three-dimensional harmonic oscillator the total degeneracy of states at level n is ( n + 1 ) ( n + 2 ) 2 {\displaystyle {(n+1)(n+2) \over 2}} . Due to the spin, the degeneracy is doubled and is ( n + 1 ) ( n + 2 ) {\displaystyle (n+1)(n+2)} . Thus, the magic numbers would be ∑ n = 0 k ( n + 1 ) ( n + 2 ) = ( k + 1 ) ( k + 2 ) ( k + 3 ) 3 {\displaystyle \sum _{n=0}^{k}(n+1)(n+2)={\frac {(k+1)(k+2)(k+3)}{3}}} for all integer k. This gives the following magic numbers: 2, 8, 20, 40, 70, 112, ..., which agree with experiment only in the first three entries. These numbers are twice the tetrahedral numbers (1, 4, 10, 20, 35, 56, ...) from the Pascal Triangle. In particular, the first six shells are: level 0: 2 states (ℓ = 0) = 2. level 1: 6 states (ℓ = 1) = 6. level 2: 2 states (ℓ = 0) + 10 states (ℓ = 2) = 12. level 3: 6 states (ℓ = 1) + 14 states (ℓ = 3) = 20. level 4: 2 states (ℓ = 0) + 10 states (ℓ = 2) + 18 states (ℓ = 4) = 30. level 5: 6 states (ℓ = 1) + 14 states (ℓ = 3) + 22 states (ℓ = 5) = 42. where for every ℓ there are 2ℓ+1 different values of ml and 2 values of ms, giving a total of 4ℓ+2 states for every specific level. These numbers are twice the values of triangular numbers from the Pascal Triangle: 1, 3, 6, 10, 15, 21, .... === Including a spin-orbit interaction === We next include a spin–orbit interaction. First, we have to describe the system by the quantum numbers j, mj and parity instead of ℓ, ml and ms, as in the hydrogen–like atom. Since every even level includes only even values of ℓ, it includes only states of even (positive) parity. Similarly, every odd level includes only states of odd (negative) parity. Thus we can ignore parity in counting states. The first six shells, described by the new quantum numbers, are level 0 (n = 0): 2 states (j = ⁠1/2⁠). Even parity. level 1 (n = 1): 2 states (j = ⁠1/2⁠) + 4 states (j = ⁠3/2⁠) = 6. Odd parity. level 2 (n = 2): 2 states (j = ⁠1/2⁠) + 4 states (j = ⁠3/2⁠) + 6 states (j = ⁠5/2⁠) = 12. Even parity. level 3 (n = 3): 2 states (j = ⁠1/2⁠) + 4 states (j = ⁠3/2⁠) + 6 states (j = ⁠5/2⁠) + 8 states (j = ⁠7/2⁠) = 20. Odd parity. level 4 (n = 4): 2 states (j = ⁠1/2⁠) + 4 states (j = ⁠3/2⁠) + 6 states (j = ⁠5/2⁠) + 8 states (j = ⁠7/2⁠) + 10 states (j = ⁠9/2⁠) = 30. Even parity. level 5 (n = 5): 2 states (j = ⁠1/2⁠) + 4 states (j = ⁠3/2⁠) + 6 states (j = ⁠5/2⁠) + 8 states (j = ⁠7/2⁠) + 10 states (j = ⁠9/2⁠) + 12 states (j = ⁠11/2⁠) = 42. Odd parity. where for every j there are 2j+1 different states from different values of mj. Due to the spin–orbit interaction, the energies of states of the same level but with different j will no longer be identical. This is because in the original quantum numbers, when s → {\displaystyle \scriptstyle {\vec {s}}} is parallel to l → {\displaystyle \scriptstyle {\vec {l}}} , the interaction energy is positive, and in this case j = ℓ + s = ℓ + ⁠1/2⁠. When s → {\displaystyle \scriptstyle {\vec {s}}} is anti-parallel to l → {\displaystyle \scriptstyle {\vec {l}}} (i.e. aligned oppositely), the interaction energy is negative, and in this case j=ℓ−s=ℓ−⁠1/2⁠. Furthermore, the strength of the interaction is roughly proportional to ℓ. For example, consider the states at level 4: The 10 states with j = ⁠9/2⁠ come from ℓ = 4 and s parallel to ℓ. Thus they have a positive spin–orbit interaction energy. The 8 states with j = ⁠7/2⁠ came from ℓ = 4 and s anti-parallel to ℓ. Thus they have a negative spin–orbit interaction energy. 
The 6 states with j = ⁠5/2⁠ came from ℓ = 2 and s parallel to ℓ. Thus they have a positive spin–orbit interaction energy. However, its magnitude is half compared to the states with j = ⁠9/2⁠. The 4 states with j = ⁠3/2⁠ came from ℓ = 2 and s anti-parallel to ℓ. Thus they have a negative spin–orbit interaction energy. However, its magnitude is half compared to the states with j = ⁠7/2⁠. The 2 states with j = ⁠1/2⁠ came from ℓ = 0 and thus have zero spin–orbit interaction energy. === Changing the profile of the potential === The harmonic oscillator potential V ( r ) = μ ω 2 r 2 / 2 {\displaystyle V(r)=\mu \omega ^{2}r^{2}/2} grows infinitely as the distance from the center r goes to infinity. A more realistic potential, such as the Woods–Saxon potential, would approach a constant at this limit. One main consequence is that the average radius of nucleons' orbits would be larger in a realistic potential. This leads to a reduced term ℏ 2 l ( l + 1 ) / 2 m r 2 {\displaystyle \scriptstyle \hbar ^{2}l(l+1)/2mr^{2}} in the Laplace operator of the Hamiltonian operator. Another main difference is that orbits with high average radii, such as those with high n or high ℓ, will have a lower energy than in a harmonic oscillator potential. Both effects lead to a reduction in the energy levels of high ℓ orbits. === Predicted magic numbers === Together with the spin–orbit interaction, and for appropriate magnitudes of both effects, one is led to the following qualitative picture: at all levels, the highest j states have their energies shifted downwards, especially for high n (where the highest j is high). This is both due to the negative spin–orbit interaction energy and to the reduction in energy resulting from deforming the potential into a more realistic one. The second-to-highest j states, on the contrary, have their energy shifted up by the first effect and down by the second effect, leading to a small overall shift. The shifts in the energy of the highest j states can thus bring the energy of states of one level closer to the energy of states of a lower level. The "shells" of the shell model are then no longer identical to the levels denoted by n, and the magic numbers are changed. We may then suppose that the highest j states for n = 3 have an intermediate energy between the average energies of n = 2 and n = 3, and suppose that the highest j states for larger n (at least up to n = 7) have an energy closer to the average energy of n−1. Then we get the following shells (see the figure) 1st shell: 2 states (n = 0, j = ⁠1/2⁠). 2nd shell: 6 states (n = 1, j = ⁠1/2⁠ or ⁠3/2⁠). 3rd shell: 12 states (n = 2, j = ⁠1/2⁠, ⁠3/2⁠ or ⁠5/2⁠). 4th shell: 8 states (n = 3, j = ⁠7/2⁠). 5th shell: 22 states (n = 3, j = ⁠1/2⁠, ⁠3/2⁠ or ⁠5/2⁠; n = 4, j = ⁠9/2⁠). 6th shell: 32 states (n = 4, j = ⁠1/2⁠, ⁠3/2⁠, ⁠5/2⁠ or ⁠7/2⁠; n = 5, j = ⁠11/2⁠). 7th shell: 44 states (n = 5, j = ⁠1/2⁠, ⁠3/2⁠, ⁠5/2⁠, ⁠7/2⁠ or ⁠9/2⁠; n = 6, j = ⁠13/2⁠). 8th shell: 58 states (n = 6, j = ⁠1/2⁠, ⁠3/2⁠, ⁠5/2⁠, ⁠7/2⁠, ⁠9/2⁠ or ⁠11/2⁠; n = 7, j = ⁠15/2⁠). and so on. Note that the numbers of states after the 4th shell are doubled triangular numbers plus two. Spin–orbit coupling causes so-called 'intruder levels' to drop down from the next higher shell into the structure of the previous shell. The sizes of the intruders are such that the resulting shell sizes are themselves increased to the next higher doubled triangular numbers from those of the harmonic oscillator. 
For example, 1f2p has 20 nucleons, and spin–orbit coupling adds 1g9/2 (10 nucleons), leading to a new shell with 30 nucleons. 1g2d3s has 30 nucleons, and adding intruder 1h11/2 (12 nucleons) yields a new shell size of 42, and so on. The magic numbers are then 2, 8 = 2+6, 20 = 2+6+12, 28 = 2+6+12+8, 50 = 2+6+12+8+22, 82 = 2+6+12+8+22+32, 126 = 2+6+12+8+22+32+44, 184 = 2+6+12+8+22+32+44+58, and so on. This gives all the observed magic numbers and also predicts a new one (the so-called island of stability) at the value of 184 (for protons, the magic number 126 has not been observed yet, and more complicated theoretical considerations predict the magic number to be 114 instead). Another way to predict magic (and semi-magic) numbers is by laying out the idealized filling order (with spin–orbit splitting but energy levels not overlapping). For consistency, s is split into j = 1/2 and j = −1/2 components with 2 and 0 members respectively. Taking the leftmost and rightmost total counts within sequences bounded by / here gives the magic and semi-magic numbers. s(2,0)/p(4,2) > 2,2/6,8, so (semi)magic numbers 2,2/6,8 d(6,4):s(2,0)/f(8,6):p(4,2) > 14,18:20,20/28,34:38,40, so 14,20/28,40 g(10,8):d(6,4):s(2,0)/h(12,10):f(8,6):p(4,2) > 50,58,64,68,70,70/82,92,100,106,110,112, so 50,70/82,112 i(14,12):g(10,8):d(6,4):s(2,0)/j(16,14):h(12,10):f(8,6):p(4,2) > 126,138,148,156,162,166,168,168/184,198,210,220,228,234,238,240, so 126,168/184,240 The rightmost predicted magic numbers of each pair within the quartets bisected by / are double tetrahedral numbers from the Pascal Triangle: 2, 8, 20, 40, 70, 112, 168, 240 are 2x 1, 4, 10, 20, 35, 56, 84, 120, ..., and the leftmost members of the pairs differ from the rightmost by double triangular numbers: 2 − 2 = 0, 8 − 6 = 2, 20 − 14 = 6, 40 − 28 = 12, 70 − 50 = 20, 112 − 82 = 30, 168 − 126 = 42, 240 − 184 = 56, where 0, 2, 6, 12, 20, 30, 42, 56, ... are 2 × 0, 1, 3, 6, 10, 15, 21, 28, ... . === Other properties of nuclei === This model also predicts or explains with some success other properties of nuclei, in particular spin and parity of nuclei ground states, and to some extent their excited nuclear states as well. Take 17O (oxygen-17) as an example: its nucleus has eight protons filling the first three proton "shells", eight neutrons filling the first three neutron "shells", and one extra neutron. All protons in a complete proton shell have zero total angular momentum, since their angular momenta cancel each other. The same is true for neutrons. All protons in the same level (n) have the same parity (either +1 or −1), and since the parity of a pair of particles is the product of their parities, an even number of protons from the same level (n) will have +1 parity. Thus, the total angular momentum of the eight protons and the first eight neutrons is zero, and their total parity is +1. This means that the spin (i.e. angular momentum) of the nucleus, as well as its parity, are fully determined by that of the ninth neutron. This one is in the first (i.e. lowest energy) state of the 4th shell, which is a d-shell (ℓ = 2), and since p = (−1)ℓ, this gives the nucleus an overall parity of +1. This 4th d-shell has j = 5/2, thus the nucleus of 17O is expected to have positive parity and total angular momentum 5/2, which indeed it has.
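The two counting schemes described above (pure harmonic-oscillator degeneracies versus shells enlarged by the spin–orbit intruder levels) can be reproduced in a few lines. The sketch below is a minimal illustration; the post-spin–orbit shell sizes are taken directly from the sums listed above.

```python
from itertools import accumulate

# Pure harmonic-oscillator magic numbers: the degeneracy of level n is
# (n+1)(n+2) including spin, and the magic numbers are the running totals.
ho_degeneracies = [(n + 1) * (n + 2) for n in range(8)]
ho_magic = list(accumulate(ho_degeneracies))
print("harmonic oscillator:", ho_magic)        # 2, 8, 20, 40, 70, 112, 168, 240

# Shell sizes after the spin-orbit interaction pulls the highest-j "intruder"
# orbital down into the shell below (sizes as listed above).
shell_sizes = [2, 6, 12, 8, 22, 32, 44, 58]
observed_magic = list(accumulate(shell_sizes))
print("with spin-orbit:    ", observed_magic)  # 2, 8, 20, 28, 50, 82, 126, 184
```

The first list matches experiment only for 2, 8 and 20, while the second reproduces all the observed magic numbers and the predicted 184.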
The rules for the ordering of the nucleus shells are similar to Hund's Rules of the atomic shells, however, unlike its use in atomic physics, the completion of a shell is not signified by reaching the next n, as such the shell model cannot accurately predict the order of excited nuclei states, though it is very successful in predicting the ground states. The order of the first few terms are listed as follows: 1s, 1p⁠3/2⁠, 1p⁠1/2⁠, 1d⁠5/2⁠, 2s, 1d⁠3/2⁠... For further clarification on the notation refer to the article on the Russell–Saunders term symbol. For nuclei farther from the magic quantum numbers one must add the assumption that due to the relation between the strong nuclear force and total angular momentum, protons or neutrons with the same n tend to form pairs of opposite angular momentum. Therefore, a nucleus with an even number of protons and an even number of neutrons has 0 spin and positive parity. A nucleus with an even number of protons and an odd number of neutrons (or vice versa) has the parity of the last neutron (or proton), and the spin equal to the total angular momentum of this neutron (or proton). By "last" we mean the properties coming from the highest energy level. In the case of a nucleus with an odd number of protons and an odd number of neutrons, one must consider the total angular momentum and parity of both the last neutron and the last proton. The nucleus parity will be a product of theirs, while the nucleus spin will be one of the possible results of the sum of their angular momenta (with other possible results being excited states of the nucleus). The ordering of angular momentum levels within each shell is according to the principles described above – due to spin–orbit interaction, with high angular momentum states having their energies shifted downwards due to the deformation of the potential (i.e. moving from a harmonic oscillator potential to a more realistic one). For nucleon pairs, however, it is often energetically favourable to be at high angular momentum, even if its energy level for a single nucleon would be higher. This is due to the relation between angular momentum and the strong nuclear force. The nuclear magnetic moment of neutrons and protons is partly predicted by this simple version of the shell model. The magnetic moment is calculated through j, ℓ and s of the "last" nucleon, but nuclei are not in states of well-defined ℓ and s. Furthermore, for odd-odd nuclei, one has to consider the two "last" nucleons, as in deuterium. Therefore, one gets several possible answers for the nuclear magnetic moment, one for each possible combined ℓ and s state, and the real state of the nucleus is a superposition of them. Thus the real (measured) nuclear magnetic moment is somewhere in between the possible answers. The electric dipole of a nucleus is always zero, because its ground state has a definite parity. The matter density (ψ2, where ψ is the wavefunction) is always invariant under parity. This is usually the situation with the atomic electric dipole. Higher electric and magnetic multipole moments cannot be predicted by this simple version of the shell model for reasons similar to those in the case of deuterium. == Including residual interactions == For nuclei having two or more valence nucleons (i.e. nucleons outside a closed shell), a residual two-body interaction must be added. This residual term comes from the part of the inter-nucleon interaction not included in the approximative average potential. 
Through this inclusion, different shell configurations are mixed, and the energy degeneracy of states corresponding to the same configuration is broken. These residual interactions are incorporated through shell model calculations in a truncated model space (or valence space). This space is spanned by a basis of many-particle states where only single-particle states in the model space are active. The Schrödinger equation is solved on this basis, using an effective Hamiltonian specifically suited for the model space. This Hamiltonian is different from the one of free nucleons as, among other things, it has to compensate for excluded configurations. One can do away with the average potential approximation entirely by extending the model space to the previously inert core and treating all single-particle states up to the model space truncation as active. This forms the basis of the no-core shell model, which is an ab initio method. It is necessary to include a three-body interaction in such calculations to achieve agreement with experiments. == Collective rotation and the deformed potential == In 1953 the first experimental examples were found of rotational bands in nuclei, with their energy levels following the same J(J+1) pattern of energies as in rotating molecules. Quantum mechanically, it is impossible to have a collective rotation of a sphere, so this implied that the shape of these nuclei was non-spherical. In principle, these rotational states could have been described as coherent superpositions of particle-hole excitations in the basis consisting of single-particle states of the spherical potential. But in reality, the description of these states in this manner is intractable, due to a large number of valence particles—and this intractability was even greater in the 1950s when computing power was extremely rudimentary. For these reasons, Aage Bohr, Ben Mottelson, and Sven Gösta Nilsson constructed models in which the potential was deformed into an ellipsoidal shape. The first successful model of this type is now known as the Nilsson model. It is essentially the harmonic oscillator model described in this article, but with anisotropy added, so the oscillator frequencies along the three Cartesian axes are not all the same. Typically the shape is a prolate ellipsoid, with the axis of symmetry taken to be z. Because the potential is not spherically symmetric, the single-particle states are not states of good angular momentum J. However, a Lagrange multiplier − ω ⋅ J {\displaystyle -\omega \cdot J} , known as a "cranking" term, can be added to the Hamiltonian. Usually the angular frequency vector ω is taken to be perpendicular to the symmetry axis, although tilted-axis cranking can also be considered. Filling the single-particle states up to the Fermi level produces states whose expected angular momentum along the cranking axis ⟨ J x ⟩ {\displaystyle \langle J_{x}\rangle } is the desired value. == Related models == Igal Talmi developed a method to obtain the information from experimental data and use it to calculate and predict energies which have not been measured. This method has been successfully used by many nuclear physicists and has led to a deeper understanding of nuclear structure. The theory which gives a good description of these properties was developed. This description turned out to furnish the shell model basis of the elegant and successful interacting boson model. 
A model derived from the nuclear shell model is the alpha particle model developed by Henry Margenau, Edward Teller, J. K. Pering, and T. H. Skyrme, also sometimes called the Skyrme model. Note, however, that the Skyrme model is usually taken to be a model of the nucleon itself, as a "cloud" of mesons (pions), rather than as a model of the nucleus as a "cloud" of alpha particles. == See also == Nuclear structure Table of nuclides Semi-empirical mass formula Isomeric shift Interacting boson model == References == == Further reading == Talmi, Igal; de-Shalit, A. (1963). Nuclear Shell Theory. Academic Press. ISBN 978-0-486-43933-4. Talmi, Igal (1993). Simple Models of Complex Nuclei: The Shell Model and the Interacting Boson Model. Harwood Academic Publishers. ISBN 978-3-7186-0551-4. == External links == Igal Talmi (November 24, 2010). On single nucleon wave functions. RIKEN Nishina Center.
Wikipedia/Nuclear_shell_model
Agricultural science (or agriscience for short) is a broad multidisciplinary field of biology that encompasses the parts of exact, natural, economic and social sciences that are used in the practice and understanding of agriculture. Professionals of the agricultural science are called agricultural scientists or agriculturists. == History == In the 18th century, Johann Friedrich Mayer conducted experiments on the use of gypsum (hydrated calcium sulfate) as a fertilizer. In 1843, John Bennet Lawes and Joseph Henry Gilbert began a set of long-term field experiments at Rothamsted Research in England, some of which are still running as of 2018. In the United States, a scientific revolution in agriculture began with the Hatch Act of 1887, which used the term "agricultural science". The Hatch Act was driven by farmers' interest in knowing the constituents of early artificial fertilizer. The Smith–Hughes Act of 1917 shifted agricultural education back to its vocational roots, but the scientific foundation had been built. For the next 44 years after 1906, federal expenditures on agricultural research in the United States outpaced private expenditures.: xxi  == Prominent agricultural scientists == Wilbur Olin Atwater Robert Bakewell Norman Borlaug Luther Burbank George Washington Carver Carl Henry Clerk George C. Clerk René Dumont Sir Albert Howard Kailas Nath Kaul Thomas Lecky Justus von Liebig Jay Laurence Lush Gregor Mendel Louis Pasteur M. S. Swaminathan Jethro Tull Artturi Ilmari Virtanen Sewall Wright == Fields or related disciplines == == Scope == Agriculture, agricultural science, and agronomy are closely related. However, they cover different concepts: Agriculture is the set of activities that transform the environment for the production of animals and plants for human use. Agriculture concerns techniques, including the application of agronomic research. Agronomy is research and development related to studying and improving plant-based crops. Geoponics is the science of cultivating the earth. Hydroponics involves growing plants without soil, by using water-based mineral nutrient solutions in an artificial environment. == Research topics == Agricultural sciences include research and development on: Improving agricultural productivity in terms of quantity and quality (e.g., selection of drought-resistant crops and animals, development of new pesticides, yield-sensing technologies, simulation models of crop growth, in-vitro cell culture techniques) Minimizing the effects of pests (weeds, insects, pathogens, mollusks, nematodes) on crop or animal production systems. Transformation of primary products into end-consumer products (e.g., production, preservation, and packaging of dairy products) Prevention and correction of adverse environmental effects (e.g., soil degradation, waste management, bioremediation) Theoretical production ecology, relating to crop production modeling Traditional agricultural systems, sometimes termed subsistence agriculture, which feed most of the poorest people in the world. These systems are of interest as they sometimes retain a level of integration with natural ecological systems greater than that of industrial agriculture, which may be more sustainable than some modern agricultural systems. Food production and demand globally, with particular attention paid to the primary producers, such as China, India, Brazil, the US, and the EU. Various sciences relating to agricultural resources and the environment (e.g. 
soil science, agroclimatology); biology of agricultural crops and animals (e.g. crop science, animal science and their included sciences, e.g. ruminant nutrition, farm animal welfare); such fields as agricultural economics and rural sociology; various disciplines encompassed in agricultural engineering. == See also == Agricultural Research Council Agricultural sciences basic topics Agriculture ministry Agroecology American Society of Agronomy Consultative Group on International Agricultural Research (CGIAR) Crop Science Society of America Genomics of domestication History of agricultural science Indian Council of Agricultural Research Institute of Food and Agricultural Sciences International Assessment of Agricultural Science and Technology for Development International Food Policy Research Institute, IFPRI International Institute of Tropical Agriculture International Livestock Research Institute List of agriculture topics National Agricultural Library (NAL) National FFA Organization Research Institute of Crop Production (RICP) (in the Czech Republic) Soil Science Society of America USDA Agricultural Research Service University of Agricultural Sciences == References == == Further reading == Agricultural Research, Livelihoods, and Poverty: Studies of Economic and Social Impacts in Six Countries Edited by Michelle Adato and Ruth Meinzen-Dick (2007), Johns Hopkins University Press Food Policy Report Claude Bourguignon, Regenerating the Soil: From Agronomy to Agrology, Other India Press, 2005 Pimentel David, Pimentel Marcia, Computer les kilocalories, Cérès, n. 59, sept-oct. 1977 Russell E. Walter, Soil conditions and plant growth, Longman group, London, New York 1973 Salamini, Francesco; Özkan, Hakan; Brandolini, Andrea; Schäfer-Pregl, Ralf; Martin, William (2002). "Genetics and geography of wild cereal domestication in the near east". Nature Reviews Genetics. 3 (6): 429–441. doi:10.1038/nrg817. PMID 12042770. S2CID 25166879. Saltini Antonio, Storia delle scienze agrarie, 4 vols, Bologna 1984–89, ISBN 88-206-2412-5, ISBN 88-206-2413-3, ISBN 88-206-2414-1, ISBN 88-206-2415-X Vavilov Nicolai I. (Starr Chester K. editor), The Origin, Variation, Immunity and Breeding of Cultivated Plants. Selected Writings, in Chronica botanica, 13: 1–6, Waltham, Mass., 1949–50 Vavilov Nicolai I., World Resources of Cereals, Leguminous Seed Crops and Flax, Academy of Sciences of Urss, National Science Foundation, Washington, Israel Program for Scientific Translations, Jerusalem 1960 Winogradsky Serge, Microbiologie du sol. Problèmes et methodes. Cinquante ans de recherches, Masson & c.ie, Paris 1949
Wikipedia/Agriculture_science
The Polish Academy of Sciences (Polish: Polska Akademia Nauk, PAN) is a Polish state-sponsored institution of higher learning. Headquartered in Warsaw, it is responsible for spearheading the development of science across the country by a society of distinguished scholars and a network of research institutes. It was established in 1951, during the early period of the Polish People's Republic following World War II. == History == The Polish Academy of Sciences is a Polish state-sponsored institution of higher learning, headquartered in Warsaw, that was established by the merger of earlier science societies, including the Polish Academy of Learning (Polska Akademia Umiejętności, abbreviated PAU), with its seat in Kraków, and the Warsaw Society of Friends of Learning (Science), which had been founded in the late 18th century. The Polish Academy of Sciences functions as a learned society acting through an elected assembly of leading scholars and research institutions. The Academy has also, operating through its committees, become a major scientific advisory body. Another aspect of the Academy is its coordination and overseeing of numerous (several dozen) research institutes. PAN institutes employ over 2,000 people and are funded by about a third of the Polish government's budget for science. == Leadership == The Polish Academy of Sciences is led by a President, elected by the assembly of Academy members for a four-year term, together with a number of Vice Presidents. The President for the 2019–2022 term was Jerzy Duszyński (his second term in the post), together with five Vice Presidents: Stanisław Czuczwar, Stanisław Filipowicz, Paweł Rowiński, Roman Słowiński, and Romuald Zabielski. On 20 October 2022, General Assembly of the Polish Academy of Sciences elected Marek Konarzewski to become the new President of the Academy for the 2023–2026 term. On 8 December 2022, another session of General Assembly of the Academy elected four Vice Presidents at the recommendation of the President Elect; as such Mirosława Ostrowska, Natalia Sobczak, and Dariusz Jemielniak, and Aleksander Welfe were elected as Vice Presidents of the Academy for the 2023–2026 term. All the Presidents of the Polish Academy of Sciences to date, by term, are as follows: 1952–1956: Jan Bohdan Dembowski 1957–1962: Tadeusz Kotarbiński 1962–1971: Janusz Groszkowski 1971–1977: Włodzimierz Trzebiatowski 1977–1980: Witold Nowacki 1980–1983: Aleksander Gieysztor 1984–1990: Jan Karol Kostrzewski 1990–1992: Aleksander Gieysztor 1993–1998: Leszek Kuźnicki 1999–2001: Mirosław Mossakowski 2002–2003: Jerzy Kołodziejczak 2003–2006: Andrzej Legocki 2007–2014: Michał Kleiber 2015–2018: Jerzy Duszyński 2019–2022: Jerzy Duszyński 2023–2026: (president-elect) Marek Konarzewski == Institutes == The Polish Academy of Sciences has numerous institutes, including: Hirszfeld Institute of Immunology and Experimental Therapy Nencki Institute of Experimental Biology Bohdan Dobrzański Institute of Agrophysics Museum and Institute of Zoology Kielanowski Institute of Animal Physiology and Nutrition Mammal Research Institute of the Polish Academy of Sciences Institute of Pharmacology of the Polish Academy of Sciences - established, 1954, became an independent institute in 1974; publishes the journal Pharmacological Reports. 
Institute of Psychology Institute of Slavic Studies Institute of High Pressure Physics Institute of Hydro-Engineering Nicolaus Copernicus Astronomical Center Institute of Fundamental Technological Research Institute of Metallurgy and Materials Science Polish Institute of Physical Chemistry Centre of Molecular and Macromolecular Studies, Polish Academy of Sciences in Lodz Department of Turbine Dynamics and Diagnostics of the Institute of Fluid-flow Machinery of the Polish Academy of Sciences Institute of Nuclear Physics of the Polish Academy of Sciences Institute of Physics of the Polish Academy of Sciences Institute of Mathematics of the Polish Academy of Sciences Institute of Computer Science of the Polish Academy of Sciences Institute of Theoretical and Applied Informatics, Polish Academy of Sciences Institute for the History of Science, Polish Academy of Sciences Institute of Economics of the Polish Academy of Sciences == Notable members == Bogdan Baranowski, chemist Franciszek Bujak, historian Carsten Carlberg, biochemist Tomasz Dietl, physicist Aleksandra Dunin-Wąsowicz, archaeologist Zofia Hilczer-Kurnatowska, archaeologist Maria Janion, scholar, critic and theoretician of literature Zofia Kielan-Jaworowska, paleontologist Franciszek Kokot, nephrologist Stanisław Konturek, physician Leszek Kołakowski, philosopher Roman Kozłowski, paleontologist Jacek Leociak, literary scholar Wanda Leopold, author, translator, and literature critic Mieczysław Mąkosza, chemist Zenon Mariak, neurosurgeon, professor Zenon Mróz, engineer Karol Myśliwiec, archeologist Witold Nowacki, mathematician (president of the Academy 1978 to 1980) Czesław Olech, mathematician Bohdan Paczyński, astrophysicist Krystian Pilarczyk, hydraulic engineer Włodzimierz Ptak, immunologist Marianna Sankiewicz-Budzyńska electronics engineer and academic Andrzej Schinzel, mathematician Jan Strelau, psychologist Zofia Sulgostowska, archaeologist Piotr Sztompka, sociologist Joanna Tokarska-Bakir, anthropologist and religious studies scholar Andrzej Trautman, physicist Andrzej Udalski, astrophysicist and astronomer Jerzy Vetulani, pharmacologist and neuroscientist Jan Woleński, philosopher Aleksander Wolszczan, astronomer Bernard Zabłocki, microbiologist and immunologist Stanisław Zagaja, pomologist, professor and director of Research Institute of Pomology and Floriculture == Foreign members == Aage Bohr, physicist Zbigniew Darzynkiewicz, cell biologist Joseph H. Eberly, physicist Erol Gelenbe, computer scientist and engineer Martin Hairer, mathematician Jack K. Hale, mathematician Stephen T. Holgate, immunopharmacologist (2001) Ernst Håkon Jahr, linguist Krzysztof Matyjaszewski, Polish chemist working at Carnegie Mellon University Robert K. Merton, sociologist Karl Alexander Müller, physicist Roger Penrose, mathematician Carlo Rubbia, physicist Peter M. Simons, philosopher Boleslaw Szymanski, computer scientist Chen Ning Yang, physicist George Zarnecki, art historian == Periodicals == Acta Arithmetica Acta Asiatica Varsoviensia Acta Ornithologica Acta Palaeontologica Polonica Acta Physica Polonica Annales Polonici Mathematici Annales Zoologici Archaeologia Polona Fundamenta Mathematicae == See also == Academy of Sciences French Academy of Sciences Polish Academy of Learning (headquartered in Kraków) Poznań Society of Friends of Learning Royal Society Warsaw Society of Friends of Learning == References == == External links == PAN website (click on British flag icon for English-language content)
Wikipedia/Polish_Academy_of_Sciences
Industrial ecology (IE) is the study of material and energy flows through industrial systems. The global industrial economy can be modelled as a network of industrial processes that extract resources from the Earth and transform those resources into by-products, products and services which can be bought and sold to meet the needs of humanity. Industrial ecology seeks to quantify the material flows and document the industrial processes that make modern society function. Industrial ecologists are often concerned with the impacts that industrial activities have on the environment, with use of the planet's supply of natural resources, and with problems of waste disposal. Industrial ecology is a young but growing multidisciplinary field of research which combines aspects of engineering, economics, sociology, toxicology and the natural sciences. Industrial ecology has been defined as a "systems-based, multidisciplinary discourse that seeks to understand emergent behavior of complex integrated human/natural systems". The field approaches issues of sustainability by examining problems from multiple perspectives, usually involving aspects of sociology, the environment, economy and technology. The name comes from the idea that the analogy of natural systems should be used as an aid in understanding how to design sustainable industrial systems. == Overview == Industrial ecology is concerned with the shifting of industrial process from linear (open loop) systems, in which resource and capital investments move through the system to become waste, to a closed loop system where wastes can become inputs for new processes. Much of the research focuses on the following areas: material and energy flow studies ("industrial metabolism") dematerialization and decarbonization technological change and the environment life-cycle planning, design and assessment design for the environment ("eco-design") extended producer responsibility ("product stewardship") eco-industrial parks ("industrial symbiosis") product-oriented environmental policy eco-efficiency Industrial ecology seeks to understand the way in which industrial systems (for example a factory, an ecoregion, or national or global economy) interact with the biosphere. Natural ecosystems provide a metaphor for understanding how different parts of industrial systems interact with one another, in an "ecosystem" based on resources and infrastructural capital rather than on natural capital. It seeks to exploit the idea that natural systems do not have waste in them to inspire sustainable design. Along with more general energy conservation and material conservation goals, and redefining related international trade markets and product stewardship relations strictly as a service economy, industrial ecology is one of the four objectives of Natural Capitalism. This strategy discourages forms of amoral purchasing arising from ignorance of what goes on at a distance and implies a political economy that values natural capital highly and relies on more instructional capital to design and maintain each unique industrial ecology. == History == Industrial ecology was popularized in 1989 in a Scientific American article by Robert Frosch and Nicholas E. Gallopoulos. Frosch and Gallopoulos' vision was "why would not our industrial system behave like an ecosystem, where the wastes of a species may be resource to another species? Why would not the outputs of an industry be the inputs of another, thus reducing use of raw materials, pollution, and saving on waste treatment?" 
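This closed-loop vision can be made concrete with a toy material-flow calculation. The sketch below uses made-up quantities and the gypsum-to-plasterboard pairing that anticipates the Kalundborg example discussed next; it illustrates the bookkeeping only and is not data about any real facility.

```python
# Toy "industrial metabolism" bookkeeping: a by-product of one process is
# reused as the input of another, cutting virgin-resource demand and waste.
# All numbers are illustrative, not measurements from any real plant.

gypsum_byproduct = 12.0   # tonnes/day produced by the power plant (assumed)
gypsum_demand = 10.0      # tonnes/day needed by the plasterboard factory (assumed)

reused = min(gypsum_byproduct, gypsum_demand)

linear = {   # open loop: by-product discarded, demand met from virgin sources
    "virgin gypsum mined": gypsum_demand,
    "gypsum sent to waste": gypsum_byproduct,
}
looped = {   # closed loop: by-product substitutes for virgin input
    "virgin gypsum mined": gypsum_demand - reused,
    "gypsum sent to waste": gypsum_byproduct - reused,
}
print("linear:", linear)
print("looped:", looped)
```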
A notable example resides in a Danish industrial park in the city of Kalundborg. Here several linkages of byproducts and waste heat can be found between numerous entities such as a large power plant, an oil refinery, a pharmaceutical plant, a plasterboard factory, an enzyme manufacturer, a waste company and the city itself. Another example is the Rantasalmi EIP in Rantasalmi, Finland. While this country has had previous organically formed EIP's, the park at Rantasalmi is Finland's first planned EIP. The scientific field of industrial ecology has grown quickly. The Journal of Industrial Ecology (since 1997), the International Society for Industrial Ecology (since 2001), and the journal Progress in Industrial Ecology (since 2004) give Industrial Ecology a strong and dynamic position in the international scientific community. Industrial ecology principles are also emerging in various policy realms such as the idea of the circular economy. Although the definition of the circular economy has yet to be formalized, generally the focus is on strategies such as creating a circular flow of materials, and cascading energy flows. An example of this would be using waste heat from one process to run another process that requires a lower temperature. The hope is that strategies such as this will create a more efficient economy with fewer pollutants and other unwanted by-products. == Examples == The Kalundborg industrial park is located in Denmark. This industrial park is special because companies reuse each other's waste (which then becomes by-products). For example, the Energy E2 Asnæs Power Station produces gypsum as a by-product of the electricity generation process; this gypsum becomes a resource for the BPB Gyproc A/S which produces plasterboards. This is one example of a system inspired by the biosphere-technosphere metaphor: in ecosystems, the waste from one organism is used as inputs to other organisms; in industrial systems, waste from a company is used as a resource by others. Apart from the direct benefit of incorporating waste into the loop, the use of an eco-industrial park can be a means of making renewable energy generating plants, like Solar PV, more economical and environmentally friendly. In essence, this assists the growth of the renewable energy industry and the environmental benefits that come with replacing fossil-fuels. Additional examples of industrial ecology include: Substituting the fly ash byproduct of coal burning practices for cement in concrete production Using second generation biofuels. An example of this is converting grease or cooking oil to biodiesels to fuel vehicles. South Africa's National Cleaner Production Center (NCPC) was created in order to make the region's industries more efficient in terms of materials. Results of the use of sustainable methods will include lowered energy costs and improved waste management. The program assesses existing companies to implement change. Onsite non-potable water reuse Biodegradable plastic created from polymerized chicken feathers, which are 90% keratin and account for over 6 million tons of waste in the EU and US annually. As agricultural waste, the chicken feathers are recycled into disposable plastic products which are then easily biodegraded into soil. Toyota Motor Company channels a portion of the greenhouse gases emitted back into their system as recovered thermal energy. Anheuser-Busch signed a memorandum of understanding with biochemical company Blue Marble to use brewing wastes as the basis for its "green" products. 
Enhanced oil recovery at Petra Nova. Reusing cork from wine bottles for use in shoe soles, flooring tiles, building insulation, automotive gaskets, craft materials, and soil conditioner. Darling Quarter Commonwealth Bank Place North building in Sydney, Australia recycles and reuses its wastewater. Plant based plastic packaging that is 100% recyclable and environmentally friendly. Food waste can be used for compost, which can be used as a natural fertilizer for future food production. Additionally, food waste that has not been contaminated can be used to feed those experiencing food insecurity. Hellisheiði geothermal power station uses ground water to produce electricity and hot water for the city of Reykjavik. Their carbon byproducts are then injected back into the Earth and calcified, leaving the station with a net zero carbon emission. == Future directions == The ecosystem metaphor popularized by Frosch and Gallopoulos has been a valuable creative tool for helping researchers look for novel solutions to difficult problems. Recently, it has been pointed out that this metaphor is based largely on a model of classical ecology, and that advancements in understanding ecology based on complexity science have been made by researchers such as C. S. Holling, James J. Kay, and further advanced in terms of contemporary ecology by others. For industrial ecology, this may mean a shift from a more mechanistic view of systems, to one where sustainability is viewed as an emergent property of a complex system. To explore this further, several researchers are working with agent based modeling techniques. Exergy analysis is performed in the field of industrial ecology to use energy more efficiently. The term exergy was coined by Zoran Rant in 1956, but the concept was developed by J. Willard Gibbs. In recent decades, utilization of exergy has spread outside physics and engineering to the fields of industrial ecology, ecological economics, systems ecology, and energetics. == See also == == References == == Further reading == The industrial green game: implications for environmental design and management, Deanna J Richards (Ed), National Academy Press, Washington DC, USA, 1997, ISBN 0-309-05294-7 'Handbook of Input-Output Economics in Industrial Ecology', Sangwon Suh (Ed), Springer, 2009, ISBN 978-1-4020-6154-7 Boons, Frank (2012). "Freedom Versus Coercion in Industrial Ecology: Mind the Gap!". Econ Journal Watch. 9 (2): 100–111. Desrochers, Pierre (2012). "Freedom Versus Coercion in Industrial Ecology: A Reply to Boons". Econ Journal Watch. 9 (2): 78–99. == External links == Industrial Ecology: An Introduction
Wikipedia/Industrial_ecology
Carnot's theorem, also called Carnot's rule or Carnot's law, is a principle of thermodynamics developed by Nicolas Léonard Sadi Carnot in 1824 that specifies limits on the maximum efficiency that any heat engine can obtain. Carnot's theorem states that all heat engines operating between the same two thermal or heat reservoirs cannot have efficiencies greater than a reversible heat engine operating between the same reservoirs. A corollary of this theorem is that every reversible heat engine operating between a pair of heat reservoirs is equally efficient, regardless of the working substance employed or the operation details. Since a Carnot heat engine is also a reversible engine, the efficiency of all the reversible heat engines is determined as the efficiency of the Carnot heat engine that depends solely on the temperatures of its hot and cold reservoirs. The maximum efficiency (i.e., the Carnot heat engine efficiency) of a heat engine operating between hot and cold reservoirs, denoted as H and C respectively, is the ratio of the temperature difference between the reservoirs to the hot reservoir temperature, expressed in the equation η max = T H − T C T H , {\displaystyle \eta _{\text{max}}={\frac {T_{\mathrm {H} }-T_{\mathrm {C} }}{T_{\mathrm {H} }}},} where ⁠ T H {\displaystyle T_{\mathrm {H} }} ⁠ and ⁠ T C {\displaystyle T_{\mathrm {C} }} ⁠ are the absolute temperatures of the hot and cold reservoirs, respectively, and the efficiency ⁠ η {\displaystyle \eta } ⁠ is the ratio of the work done by the engine (to the surroundings) to the heat drawn out of the hot reservoir (to the engine). ⁠ η max {\displaystyle \eta _{\text{max}}} ⁠ is greater than zero if and only if there is a temperature difference between the two thermal reservoirs. Since ⁠ η max {\displaystyle \eta _{\text{max}}} ⁠ is the upper limit of all reversible and irreversible heat engine efficiencies, it is concluded that work from a heat engine can be produced if and only if there is a temperature difference between two thermal reservoirs connecting to the engine. Carnot's theorem is a consequence of the second law of thermodynamics. Historically, it was based on contemporary caloric theory, and preceded the establishment of the second law. == Proof == The proof of the Carnot theorem is a proof by contradiction or reductio ad absurdum (a method to prove a statement by assuming its falsity and logically deriving a false or contradictory statement from this assumption), based on a situation like the right figure where two heat engines with different efficiencies are operating between two thermal reservoirs at different temperature. The relatively hotter reservoir is called the hot reservoir and the other reservoir is called the cold reservoir. A (not necessarily reversible) heat engine M {\displaystyle M} with a greater efficiency η M {\displaystyle \eta _{_{M}}} is driving a reversible heat engine L {\displaystyle L} with a less efficiency η L {\displaystyle \eta _{_{L}}} , causing the latter to act as a heat pump. The requirement for the engine L {\displaystyle L} to be reversible is necessary to explain work W {\displaystyle W} and heat Q {\displaystyle Q} associated with it by using its known efficiency. 
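The bound itself is easy to evaluate numerically. A minimal sketch, with illustrative reservoir temperatures (assumed values, not taken from the text):

```python
def carnot_efficiency(t_hot_kelvin: float, t_cold_kelvin: float) -> float:
    """Maximum (Carnot) efficiency between reservoirs at the given absolute temperatures."""
    if t_cold_kelvin <= 0 or t_hot_kelvin <= t_cold_kelvin:
        raise ValueError("need T_hot > T_cold > 0, in kelvin")
    return (t_hot_kelvin - t_cold_kelvin) / t_hot_kelvin

# Illustrative reservoir pairs (assumed values)
print(carnot_efficiency(850.0, 300.0))    # ~0.647, e.g. a high-temperature steam cycle
print(carnot_efficiency(373.15, 293.15))  # ~0.214, e.g. a low-grade heat source
```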
However, since η M > η L {\displaystyle \eta _{_{M}}>\eta _{_{L}}} , the net heat flow would be backwards, i.e., into the hot reservoir: Q h out = Q < η M η L Q = Q h in , {\displaystyle Q_{\text{h}}^{\text{out}}=Q<{\frac {\eta _{_{M}}}{\eta _{_{L}}}}Q=Q_{\text{h}}^{\text{in}},} where Q {\displaystyle Q} represents heat, in {\displaystyle {\text{in}}} denotes input to an object, out {\displaystyle {\text{out}}} for output from an object, and h {\displaystyle h} for the hot thermal reservoir. If heat Q h out {\displaystyle Q_{\text{h}}^{\text{out}}} flows from the hot reservoir then it has the sign of + while if Q h in {\displaystyle Q_{\text{h}}^{\text{in}}} flows from the hot reservoir then it has the sign of -. This expression can be easily derived by using the definition of the efficiency of a heat engine, η = W / Q h in {\displaystyle \eta =W/Q_{\text{h}}^{\text{in}}} , where work and heat in this expression are net quantities per engine cycle, and the conservation of energy for each engine as shown below. The sign convention of work W {\displaystyle W} , with which the sign of + for work done by an engine to its surroundings, is employed. The above expression means that heat into the hot reservoir from the engine pair (can be considered as a single engine) is greater than heat into the engine pair from the hot reservoir (i.e., the hot reservoir continuously gets energy). A reversible heat engine with a low efficiency delivers more heat (energy) to the hot reservoir for a given amount of work (energy) to this engine when it is being driven as a heat pump. All these mean that heat can transfer from cold to hot places without external work, and such a heat transfer is impossible by the second law of thermodynamics. It may seem odd that a hypothetical reversible heat pump with a low efficiency is used to violate the second law of thermodynamics, but the figure of merit for refrigerator units is not the efficiency, W / Q h out {\displaystyle W/Q_{\text{h}}^{\text{out}}} , but the coefficient of performance (COP), which is Q c out / W {\displaystyle Q_{\text{c}}^{\text{out}}/W} where this W {\displaystyle W} has the sign opposite to the above (+ for work done to the engine). Let's find the values of work W {\displaystyle W} and heat Q {\displaystyle Q} depicted in the right figure in which a reversible heat engine L {\displaystyle L} with a less efficiency η L {\displaystyle \eta _{_{L}}} is driven as a heat pump by a heat engine M {\displaystyle M} with a more efficiency η M {\displaystyle \eta _{_{M}}} . The definition of the efficiency is η = W / Q h out {\displaystyle \eta =W/Q_{\text{h}}^{\text{out}}} for each engine and the following expressions can be made: η M = W M Q h out , M = η M Q Q = η M , {\displaystyle \eta _{M}={\frac {W_{M}}{Q_{\text{h}}^{{\text{out}},M}}}={\frac {\eta _{M}Q}{Q}}=\eta _{M},} η L = W L Q h out , L = − η M Q − η M η L Q = η L . {\displaystyle \eta _{L}={\frac {W_{L}}{Q_{\text{h}}^{{\text{out}},L}}}={\frac {-\eta _{M}Q}{-{\frac {\eta _{M}}{\eta _{L}}}Q}}=\eta _{L}.} The denominator of the second expression, Q h out , L = − η M η L Q {\displaystyle Q_{\text{h}}^{{\text{out}},L}=-{\frac {\eta _{M}}{\eta _{L}}}Q} , is made to make the expression to be consistent, and it helps to fill the values of work and heat for the engine L {\displaystyle L} . 
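The bookkeeping above can be checked numerically. The following sketch uses assumed values (η_M = 0.5, η_L = 0.4, Q = 100 J, chosen only for illustration) and shows that the M-drives-L pair delivers more heat to the hot reservoir than it draws from it, with the balance taken from the cold reservoir and no net external work.

```python
# Numerical check of the two-engine construction described above.
# eta_M > eta_L and Q are assumed, illustrative values; signs follow the text
# (+ for work done by an engine on its surroundings).

eta_M, eta_L = 0.5, 0.4
Q = 100.0                      # heat into engine M from the hot reservoir [J]

W = eta_M * Q                  # work produced by M, which drives L as a heat pump
Q_c_out_M = (1 - eta_M) * Q    # heat rejected by M to the cold reservoir

Q_h_in = (eta_M / eta_L) * Q   # heat delivered by L to the hot reservoir
Q_c_in_L = W * (1 / eta_L - 1) # heat drawn by L from the cold reservoir

net_into_hot = Q_h_in - Q               # > 0: the hot reservoir gains energy
net_out_of_cold = Q_c_in_L - Q_c_out_M  # > 0: the cold reservoir loses the same amount
net_external_work = W - W               # M's output exactly drives L

print(net_into_hot, net_out_of_cold, net_external_work)  # 25.0 25.0 0.0
```

Heat is carried from the cold to the hot reservoir with no external work, which is the contradiction the proof exploits.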
For each engine, the absolute value of the energy entering the engine, E abs in {\displaystyle E_{\text{abs}}^{\text{in}}} , must be equal to the absolute value of the energy leaving from the engine, E abs out {\displaystyle E_{\text{abs}}^{\text{out}}} . Otherwise, energy is continuously accumulated in an engine or the conservation of energy is violated by taking more energy from an engine than input energy to the engine: E M,abs in = Q = ( 1 − η M ) Q + η M Q = E M,abs out , {\displaystyle E_{\text{M,abs}}^{\text{in}}=Q=(1-\eta _{M})Q+\eta _{M}Q=E_{\text{M,abs}}^{\text{out}},} E L,abs in = η M Q + η M Q ( 1 η L − 1 ) = η M η L Q = E L,abs out . {\displaystyle E_{\text{L,abs}}^{\text{in}}=\eta _{M}Q+\eta _{M}Q\left({\frac {1}{\eta _{L}}}-1\right)={\frac {\eta _{M}}{\eta _{L}}}Q=E_{\text{L,abs}}^{\text{out}}.} In the second expression, | Q h out , L | = | − η M η L Q | {\textstyle \left|Q_{\text{h}}^{{\text{out}},L}\right|=\left|-{\frac {\eta _{M}}{\eta _{L}}}Q\right|} is used to find the term η M Q ( 1 η L − 1 ) {\textstyle \eta _{M}Q\left({\frac {1}{\eta _{L}}}-1\right)} describing the amount of heat taken from the cold reservoir, completing the absolute value expressions of work and heat in the right figure. Having established that the right figure values are correct, Carnot's theorem may be proven for irreversible and the reversible heat engines as shown below. === Reversible engines === To see that every reversible engine operating between reservoirs at temperatures T 1 {\displaystyle T_{1}} and T 2 {\displaystyle T_{2}} must have the same efficiency, assume that two reversible heat engines have different efficiencies, and let the relatively more efficient engine M {\displaystyle M} drive the relatively less efficient engine L {\displaystyle L} as a heat pump. As the right figure shows, this will cause heat to flow from the cold to the hot reservoir without external work, which violates the second law of thermodynamics. Therefore, both (reversible) heat engines have the same efficiency, and we conclude that: All reversible heat engines that operate between the same two thermal (heat) reservoirs have the same efficiency. The reversible heat engine efficiency can be determined by analyzing a Carnot heat engine as one of reversible heat engine. This conclusion is an important result because it helps establish the Clausius theorem, which implies that the change in entropy S {\displaystyle S} is unique for all reversible processes: Δ S = ∫ a b d Q rev T {\displaystyle \Delta S=\int _{a}^{b}{\frac {dQ_{\text{rev}}}{T}}} as the entropy change, that is made during a transition from a thermodynamic equilibrium state a {\displaystyle a} to a state b {\displaystyle b} in a V-T (Volume-Temperature) space, is the same over all reversible process paths between these two states. If this integral were not path independent, then entropy would not be a state variable. === Irreversible engines === Consider two engines, M {\displaystyle M} and L {\displaystyle L} , which are irreversible and reversible respectively. We construct the machine shown in the right figure, with M {\displaystyle M} driving L {\displaystyle L} as a heat pump. Then if M {\displaystyle M} is more efficient than L {\displaystyle L} , the machine will violate the second law of thermodynamics. 
Since a Carnot heat engine is a reversible heat engine, and all reversible heat engines operate with the same efficiency between the same reservoirs, we have the first part of Carnot's theorem: No irreversible heat engine is more efficient than a Carnot heat engine operating between the same two thermal reservoirs. == Definition of thermodynamic temperature == The efficiency of a heat engine is the work done by the engine divided by the heat introduced to the engine per engine cycle, {\displaystyle \eta ={\frac {w_{\text{cy}}}{q_{H}}}={\frac {q_{H}-q_{C}}{q_{H}}}=1-{\frac {q_{C}}{q_{H}}},} where {\displaystyle w_{\text{cy}}} is the work done by the engine, {\displaystyle q_{C}} is the heat to the cold reservoir from the engine, and {\displaystyle q_{H}} is the heat to the engine from the hot reservoir, per cycle. Thus, the efficiency depends only on {\displaystyle {\frac {q_{C}}{q_{H}}}} . Because all reversible heat engines operating between temperatures {\displaystyle T_{1}} and {\displaystyle T_{2}} must have the same efficiency, the efficiency of a reversible heat engine is a function of only the two reservoir temperatures: {\displaystyle {\frac {q_{C}}{q_{H}}}=f(T_{H},T_{C}).} In addition, a reversible heat engine operating between temperatures {\displaystyle T_{1}} and {\displaystyle T_{3}} must have the same efficiency as one consisting of two cycles, one between {\displaystyle T_{1}} and another (intermediate) temperature {\displaystyle T_{2}} , and the second between {\displaystyle T_{2}} and {\displaystyle T_{3}} ( {\displaystyle T_{1}<T_{2}<T_{3}} ). This can only be the case if {\displaystyle f(T_{1},T_{3})=f(T_{1},T_{2})\,f(T_{2},T_{3}).} Specializing to the case that {\displaystyle T_{1}} is a fixed reference temperature: the temperature of the triple point of water, taken as 273.16. (Of course any reference temperature and any positive numerical value could be used — the choice here corresponds to the Kelvin scale.) Then for any {\displaystyle T_{2}} and {\displaystyle T_{3}} , {\displaystyle f(T_{2},T_{3})={\frac {f(T_{1},T_{3})}{f(T_{1},T_{2})}}={\frac {273.16\cdot f(T_{1},T_{3})}{273.16\cdot f(T_{1},T_{2})}}.} Therefore, if thermodynamic temperature is defined by {\displaystyle T'=273.16\cdot f(T_{1},T),} then the function f, viewed as a function of thermodynamic temperature, is {\displaystyle f(T_{2},T_{3})={\frac {T_{3}'}{T_{2}'}}.} It follows immediately that {\displaystyle f(T_{H},T_{C})={\frac {T_{C}'}{T_{H}'}}.} Substituting this back into the above equation {\displaystyle {\frac {q_{C}}{q_{H}}}=f(T_{H},T_{C})} gives a relationship for the efficiency in terms of thermodynamic temperatures: {\displaystyle \eta =1-{\frac {q_{C}}{q_{H}}}=1-{\frac {T_{C}'}{T_{H}'}}.} == Applicability to fuel cells == Since fuel cells can generate useful power when all components of the system are at the same temperature ( {\displaystyle T=T_{H}=T_{C}} ), they are clearly not limited by Carnot's theorem, which states that no power can be generated when {\displaystyle T_{H}=T_{C}} . This is because Carnot's theorem applies to engines converting thermal energy to work, whereas fuel cells instead convert chemical energy to work. Nevertheless, the second law of thermodynamics still provides restrictions on fuel cell energy conversion. A Carnot battery is a type of energy storage system that stores electricity in thermal energy storage and converts the stored heat back to electricity through thermodynamic cycles. == See also == Chambadal–Novikov efficiency Heating and cooling efficiency bounds == References ==
Wikipedia/Carnot's_theorem_(thermodynamics)
Equilibrium thermodynamics is the systematic study of transformations of matter and energy in systems in terms of a concept called thermodynamic equilibrium. The word equilibrium implies a state of balance. Equilibrium thermodynamics, in its origins, derives from analysis of the Carnot cycle. Here, typically a system, such as a cylinder of gas, initially in its own state of internal thermodynamic equilibrium, is set out of balance via heat input from a combustion reaction. Then, through a series of steps, as the system settles into its final equilibrium state, work is extracted. In an equilibrium state the potentials, or driving forces, within the system are in exact balance. A central aim in equilibrium thermodynamics is: given a system in a well-defined initial state of thermodynamic equilibrium, subject to accurately specified constraints, to calculate what the state of the system will be once it has reached a new equilibrium after the constraints are changed by an externally imposed intervention. An equilibrium state is mathematically ascertained by seeking the extrema of a thermodynamic potential function, whose nature depends on the constraints imposed on the system. For example, a chemical reaction at constant temperature and pressure will reach equilibrium at a minimum of the Gibbs free energy of its components; for an isolated system, the corresponding criterion is a maximum of the entropy. Equilibrium thermodynamics differs from non-equilibrium thermodynamics, in that, with the latter, the state of the system under investigation will typically not be uniform but will vary locally in quantities such as energy, entropy, and temperature, as gradients are imposed by dissipative thermodynamic fluxes. In equilibrium thermodynamics, by contrast, the state of the system will be considered uniform throughout, defined macroscopically by such quantities as temperature, pressure, or volume. Systems are studied in terms of change from one equilibrium state to another; such a change is called a thermodynamic process. Ruppeiner geometry is a type of information geometry used to study thermodynamics. It claims that thermodynamic systems can be represented by Riemannian geometry, and that statistical properties can be derived from the model. This geometrical model is based on the idea that there exist equilibrium states which can be represented by points on a two-dimensional surface, and that the distance between these equilibrium states is related to the fluctuation between them. == See also == Non-equilibrium thermodynamics Thermodynamics == References == Adkins, C.J. (1983). Equilibrium Thermodynamics, 3rd Ed. Cambridge: Cambridge University Press. Cengel, Y. & Boles, M. (2002). Thermodynamics – an Engineering Approach, 4th Ed. (textbook). New York: McGraw Hill. Kondepudi, D. & Prigogine, I. (2004). Modern Thermodynamics – From Heat Engines to Dissipative Structures (textbook). New York: John Wiley & Sons. Perrot, P. (1998). A to Z of Thermodynamics (dictionary). New York: Oxford University Press.
Wikipedia/Equilibrium_thermodynamics
The third law of thermodynamics states that the entropy of a closed system at thermodynamic equilibrium approaches a constant value when its temperature approaches absolute zero. This constant value cannot depend on any other parameters characterizing the system, such as pressure or applied magnetic field. At absolute zero (zero kelvins) the system must be in a state with the minimum possible energy. Entropy is related to the number of accessible microstates, and there is typically one unique state (called the ground state) with minimum energy. In such a case, the entropy at absolute zero will be exactly zero. If the system does not have a well-defined order (if its order is glassy, for example), then there may remain some finite entropy as the system is brought to very low temperatures, either because the system becomes locked into a configuration with non-minimal energy or because the minimum energy state is non-unique. The constant value is called the residual entropy of the system. == Formulations == The third law has many formulations, some more general than others, some equivalent, and some neither more general nor equivalent. The Planck statement applies only to perfect crystalline substances:As temperature falls to zero, the entropy of any pure crystalline substance tends to a universal constant. That is, lim T → 0 S = S 0 {\displaystyle \lim _{T\to 0}S=S_{0}} , where S 0 {\displaystyle S_{0}} is a universal constant that applies for all possible crystals, of all possible sizes, in all possible external constraints. So it can be taken as zero, giving lim T → 0 S = 0 {\displaystyle \lim _{T\to 0}S=0} . The Nernst statement concerns thermodynamic processes at a fixed, low temperature, for condensed systems, which are liquids and solids: The entropy change associated with any condensed system undergoing a reversible isothermal process approaches zero as the temperature at which it is performed approaches 0 K. That is, lim T → 0 S ( T , X 1 ) − S ( T , X 2 ) = 0 {\displaystyle \lim _{T\to 0}S(T,X_{1})-S(T,X_{2})=0} . Or equivalently, At absolute zero, the entropy change becomes independent of the process path. That is, ∀ x , lim T → 0 | S ( T , x ) − S ( T , x + Δ x ) | → 0 {\displaystyle \forall x,\lim _{T\to 0}|S(T,x)-S(T,x+\Delta x)|\to 0} where Δ x {\displaystyle \Delta x} represents a change in the state variable x {\displaystyle x} . The unattainability principle of Nernst: It is impossible for any process, no matter how idealized, to reduce the entropy of a system to its absolute-zero value in a finite number of operations. This principle implies that cooling a system to absolute zero would require an infinite number of steps or an infinite amount of time. The statement in adiabatic accessibility: It is impossible to start from a state of positive temperature, and adiabatically reach a state with zero temperature. The Einstein statement: The entropy of any substance approaches a finite value as the temperature approaches absolute zero. That is, ∀ x , lim T → 0 S ( T , x ) → S 0 ( x ) {\textstyle \forall x,\lim _{T\to 0}S(T,x)\rightarrow S_{0}(x)} where S {\displaystyle S} is the entropy, the zero-point entropy S 0 ( x ) {\displaystyle S_{0}(x)} is finite-valued, T {\displaystyle T} is the temperature, and x {\displaystyle x} represents other relevant state variables. 
This implies that the heat capacity C ( T , x ) {\displaystyle C(T,x)} of a substance must (uniformly) vanish at absolute zero, as otherwise the entropy S = ∫ 0 T 1 C ( T , x ) d T T {\displaystyle S=\int _{0}^{T_{1}}{\frac {C(T,x)dT}{T}}} would diverge. There is also a formulation as the impossibility of "perpetual motion machines of the third kind". == History == The third law was developed by chemist Walther Nernst during the years 1906 to 1912 and is therefore often referred to as the Nernst heat theorem, or sometimes the Nernst-Simon heat theorem to include the contribution of Nernst's doctoral student Francis Simon. The third law of thermodynamics states that the entropy of a system at absolute zero is a well-defined constant. This is because a system at zero temperature exists in its ground state, so that its entropy is determined only by the degeneracy of the ground state. In 1912 Nernst stated the law thus: "It is impossible for any procedure to lead to the isotherm T = 0 in a finite number of steps." An alternative version of the third law of thermodynamics was enunciated by Gilbert N. Lewis and Merle Randall in 1923: If the entropy of each element in some (perfect) crystalline state be taken as zero at the absolute zero of temperature, every substance has a finite positive entropy; but at the absolute zero of temperature the entropy may become zero, and does so become in the case of perfect crystalline substances. This version states not only Δ S {\displaystyle \Delta S} will reach zero at 0 K, but S {\displaystyle S} itself will also reach zero as long as the crystal has a ground state with only one configuration. Some crystals form defects which cause a residual entropy. This residual entropy disappears when the kinetic barriers to transitioning to one ground state are overcome. With the development of statistical mechanics, the third law of thermodynamics (like the other laws) changed from a fundamental law (justified by experiments) to a derived law (derived from even more basic laws). The basic law from which it is primarily derived is the statistical-mechanics definition of entropy for a large system: S − S 0 = k B ln Ω {\displaystyle S-S_{0}=k_{\text{B}}\ln \,\Omega } where S {\displaystyle S} is entropy, k B {\displaystyle k_{\mathrm {B} }} is the Boltzmann constant, and Ω {\displaystyle \Omega } is the number of microstates consistent with the macroscopic configuration. The counting of states is from the reference state of absolute zero, which corresponds to the entropy of S 0 {\displaystyle S_{0}} . == Explanation == In simple terms, the third law states that the entropy of a perfect crystal of a pure substance approaches zero as the temperature approaches zero. The alignment of a perfect crystal leaves no ambiguity as to the location and orientation of each part of the crystal. As the energy of the crystal is reduced, the vibrations of the individual atoms are reduced to nothing, and the crystal becomes the same everywhere. The third law provides an absolute reference point for the determination of entropy at any other temperature. The entropy of a closed system, determined relative to this zero point, is then the absolute entropy of that system. Mathematically, the absolute entropy of any system at zero temperature is the natural log of the number of ground states times the Boltzmann constant kB = 1.38×10−23 J K−1. The entropy of a perfect crystal lattice as defined by Nernst's theorem is zero provided that its ground state is unique, because ln(1) = 0. 
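The residual entropy mentioned above can be estimated for simple toy cases directly from S = kB ln Ω. The sketch below evaluates the molar residual entropy of a solid in which each of the NA molecules can freeze into one of g nearly degenerate configurations; g = 2 is a generic two-configuration defect, and g = 3/2 reproduces Pauling's classic estimate for the proton disorder of ice Ih discussed further below (an illustrative application, not a derivation of that configuration count).

```python
import math

R = 8.314          # molar gas constant, J/(mol K)

def molar_residual_entropy(g: float) -> float:
    """S = N_A * k_B * ln(g) = R ln(g) for g (effective) configurations per molecule."""
    return R * math.log(g)

print(molar_residual_entropy(2.0))   # ~5.76 J/(mol K): two-state defect per molecule
print(molar_residual_entropy(1.5))   # ~3.37 J/(mol K): Pauling's estimate for ice Ih
```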
If the system is composed of one-billion atoms that are all alike and lie within the matrix of a perfect crystal, the number of combinations of one billion identical things taken one billion at a time is Ω = 1. Hence: S − S 0 = k B ln ⁡ Ω = k B ln ⁡ 1 = 0 {\displaystyle S-S_{0}=k_{\text{B}}\ln \Omega =k_{\text{B}}\ln {1}=0} The difference is zero; hence the initial entropy S0 can be any selected value so long as all other such calculations include that as the initial entropy. As a result, the initial entropy value of zero is selected S0 = 0 is used for convenience. S − S 0 = S − 0 = 0 {\displaystyle S-S_{0}=S-0=0} S = 0 {\displaystyle S=0} === Example: Entropy change of a crystal lattice heated by an incoming photon === Suppose a system consisting of a crystal lattice with volume V of N identical atoms at T = 0 K, and an incoming photon of wavelength λ and energy ε. Initially, there is only one accessible microstate: S 0 = k B ln ⁡ Ω = k B ln ⁡ 1 = 0. {\displaystyle S_{0}=k_{\text{B}}\ln \Omega =k_{\text{B}}\ln {1}=0.} Let us assume the crystal lattice absorbs the incoming photon. There is a unique atom in the lattice that interacts and absorbs this photon. So after absorption, there are N possible microstates accessible by the system, each corresponding to one excited atom, while the other atoms remain at ground state. The entropy, energy, and temperature of the closed system rises and can be calculated. The entropy change is Δ S = S − S 0 = k B ln ⁡ Ω {\displaystyle \Delta S=S-S_{0}=k_{\text{B}}\ln {\Omega }} From the second law of thermodynamics: Δ S = S − S 0 = δ Q T {\displaystyle \Delta S=S-S_{0}={\frac {\delta Q}{T}}} Hence Δ S = S − S 0 = k B ln ⁡ ( Ω ) = δ Q T {\displaystyle \Delta S=S-S_{0}=k_{\text{B}}\ln(\Omega )={\frac {\delta Q}{T}}} Calculating entropy change: S − 0 = k B ln ⁡ N = 1.38 × 10 − 23 × ln ⁡ ( 3 × 10 22 ) = 70 × 10 − 23 J K − 1 {\displaystyle S-0=k_{\text{B}}\ln {N}=1.38\times 10^{-23}\times \ln {\left(3\times 10^{22}\right)}=70\times 10^{-23}\,\mathrm {J\,K^{-1}} } We assume N = 3 × 1022 and λ = 1 cm. The energy change of the system as a result of absorbing the single photon whose energy is ε: δ Q = ε = h c λ = 6.62 × 10 − 34 J ⋅ s × 3 × 10 8 m s − 1 0.01 m = 2 × 10 − 23 J {\displaystyle \delta Q=\varepsilon ={\frac {hc}{\lambda }}={\frac {6.62\times 10^{-34}\,\mathrm {J\cdot s} \times 3\times 10^{8}\,\mathrm {m\,s^{-1}} }{0.01\,\mathrm {m} }}=2\times 10^{-23}\,\mathrm {J} } The temperature of the closed system rises by T = ε Δ S = 2 × 10 − 23 J 70 × 10 − 23 J K − 1 = 0.02857 K {\displaystyle T={\frac {\varepsilon }{\Delta S}}={\frac {2\times 10^{-23}\,\mathrm {J} }{70\times 10^{-23}\,\mathrm {J\,K^{-1}} }}=0.02857\,\mathrm {K} } This can be interpreted as the average temperature of the system over the range from 0 < S < 70 × 10 − 23 J K − 1 {\displaystyle 0<S<70\times 10^{-23}\,\mathrm {J\,K^{-1}} } . A single atom is assumed to absorb the photon, but the temperature and entropy change characterizes the entire system. === Systems with non-zero entropy at absolute zero === An example of a system that does not have a unique ground state is one whose net spin is a half-integer, for which time-reversal symmetry gives two degenerate ground states. For such systems, the entropy at zero temperature is at least kB ln(2) (which is negligible on a macroscopic scale). Some crystalline systems exhibit geometrical frustration, where the structure of the crystal lattice prevents the emergence of a unique ground state. 
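The photon-absorption example worked above can be reproduced numerically; the sketch below simply re-evaluates the same numbers (N = 3 × 10^22 atoms, λ = 1 cm), with the physical constants rounded as in the text.

```python
import math

k_B = 1.38e-23        # Boltzmann constant, J/K (rounded as in the text)
h, c = 6.62e-34, 3e8  # Planck constant [J s] and speed of light [m/s], rounded

N = 3e22              # atoms in the crystal lattice
wavelength = 0.01     # m (1 cm)

delta_S = k_B * math.log(N)     # entropy change after absorption, S - S0
epsilon = h * c / wavelength    # photon energy delivered to the lattice
T = epsilon / delta_S           # average temperature over the entropy change

print(f"delta_S = {delta_S:.2e} J/K")   # ~7.1e-22 J/K (70 x 10^-23 in the text)
print(f"epsilon = {epsilon:.2e} J")     # ~2.0e-23 J
print(f"T       = {T:.4f} K")           # ~0.028 K
```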
Ground-state helium (unless under pressure) remains liquid. Glasses and solid solutions retain significant entropy at 0 K, because they are large collections of nearly degenerate states, in which they become trapped out of equilibrium. Another example of a solid with many nearly-degenerate ground states, trapped out of equilibrium, is ice Ih, which has "proton disorder". For the entropy at absolute zero to be zero, the magnetic moments of a perfectly ordered crystal must themselves be perfectly ordered; from an entropic perspective, this can be considered to be part of the definition of a "perfect crystal". Only ferromagnetic, antiferromagnetic, and diamagnetic materials can satisfy this condition. However, ferromagnetic materials do not, in fact, have zero entropy at zero temperature, because the spins of the unpaired electrons are all aligned and this gives a ground-state spin degeneracy. Materials that remain paramagnetic at 0 K, by contrast, may have many nearly degenerate ground states (for example, in a spin glass), or may retain dynamic disorder (a quantum spin liquid). == Consequences == === Absolute zero === The third law is equivalent to the statement that It is impossible by any procedure, no matter how idealized, to reduce the temperature of any closed system to zero temperature in a finite number of finite operations. The reason that T = 0 cannot be reached according to the third law is explained as follows: Suppose that the temperature of a substance can be reduced in an isentropic process by changing the parameter X from X2 to X1. One can think of a multistage nuclear demagnetization setup where a magnetic field is switched on and off in a controlled way. If there were an entropy difference at absolute zero, T = 0 could be reached in a finite number of steps. However, at T = 0 there is no entropy difference, so an infinite number of steps would be needed. The process is illustrated in Fig. 1. ==== Example: magnetic refrigeration ==== To be concrete, we imagine that we are refrigerating magnetic material. Suppose we have a large bulk of paramagnetic salt and an adjustable external magnetic field in the vertical direction. Let the parameter X {\displaystyle X} represent the external magnetic field. At the same temperature, if the external magnetic field is strong, then the internal atoms in the salt would strongly align with the field, so the disorder (entropy) would decrease. Therefore, in Fig. 1, the curve for X 1 {\displaystyle X_{1}} is the curve for lower magnetic field, and the curve for X 2 {\displaystyle X_{2}} is the curve for higher magnetic field. The refrigeration process repeats the following two steps: Isothermal process. Here, we have a chunk of salt in magnetic field X 1 {\displaystyle X_{1}} and temperature T {\displaystyle T} . We divide the chunk into two parts: a large part playing the role of "environment", and a small part playing the role of "system". We slowly increase the magnetic field on the system to X 2 {\displaystyle X_{2}} , but keep the magnetic field constant on the environment. The atoms in the system would lose directional degrees of freedom (DOF), and the energy in the directional DOF would be squeezed out into the vibrational DOF. This makes it slightly hotter, and then it would lose thermal energy to the environment, to remain in the same temperature T {\displaystyle T} . (The environment is now discarded.) Isentropic cooling. 
Here, the system is wrapped in an adiabatic (thermally insulating) covering, and the external magnetic field is slowly lowered to X1. This frees up the directional DOF, which absorb some energy from the vibrational DOF. The effect is that the system has the same entropy, but reaches a lower temperature T′ < T. With each two-step cycle of the process, the mass of the system decreases, as we discard more and more salt as the "environment". However, if the equation of state for this salt were as shown in Fig. 1 (left), then we could start with a large but finite amount of salt and end up with a small piece of salt that has T = 0. === Specific heat === A non-quantitative description of his third law that Nernst gave at the very beginning was simply that the specific heat of a material can always be made zero by cooling it down far enough. A modern, quantitative analysis follows. Suppose that the heat capacity of a sample in the low temperature region has the form of a power law C(T,X) = C0Tα asymptotically as T → 0, and we wish to find which values of α are compatible with the third law. We have {\displaystyle S(T,X)-S(T_{0},X)=\int _{T_{0}}^{T}{\frac {C(T',X)}{T'}}\,dT'.} By the discussion of the third law above, this integral must remain bounded as T0 → 0, which is only possible if α > 0. So the heat capacity must go to zero at absolute zero if it has the form of a power law. The same argument shows that it cannot be bounded below by a positive constant, even if we drop the power-law assumption. On the other hand, the molar specific heat at constant volume of a monatomic classical ideal gas, such as helium at room temperature, is given by CV = (3/2)R with R the molar ideal gas constant. But clearly a constant heat capacity does not satisfy Eq. (12). That is, a gas with a constant heat capacity all the way to absolute zero violates the third law of thermodynamics. We can verify this more fundamentally by substituting CV in Eq. (14), which yields S(T) − S(T0) = (3/2)R ln(T/T0). In the limit T0 → 0 this expression diverges, again contradicting the third law of thermodynamics. The conflict is resolved as follows: At a certain temperature the quantum nature of matter starts to dominate the behavior. Fermi particles follow Fermi–Dirac statistics and Bose particles follow Bose–Einstein statistics. In both cases the heat capacity at low temperatures is no longer temperature independent, even for ideal gases. For Fermi gases the low-temperature heat capacity is proportional to T/TF, with the Fermi temperature TF fixed by NA (the Avogadro constant), the molar volume Vm, and the molar mass M. For Bose gases the heat capacity is proportional to (T/TB)3/2, with the characteristic temperature TB likewise determined by the gas parameters. The specific heats given by Eq. (14) and (16) both satisfy Eq. (12). Indeed, they are power laws with α = 1 and α = 3/2 respectively. Even within a purely classical setting, the density of a classical ideal gas at fixed particle number and pressure becomes arbitrarily high as T goes to zero, so the interparticle spacing goes to zero. The assumption of non-interacting particles presumably breaks down when they are sufficiently close together, so the value of CV gets modified away from its ideal constant value. === Vapor pressure === The only liquids near absolute zero are 3He and 4He. Their heat of evaporation has a limiting value given by L = L0 + CpT, with L0 and Cp constant. If we consider a container partly filled with liquid and partly with gas, the entropy of the liquid–gas mixture is S(T,x) = Sl(T) + x(L0/T + Cp), where Sl(T) is the entropy of the liquid and x is the gas fraction. Clearly the entropy change during the liquid–gas transition (x from 0 to 1) diverges in the limit of T → 0. This violates Eq. (8).
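This divergence, and the contrast with the bounded entropy integral of a power-law heat capacity discussed under Specific heat above, can be checked numerically. The following Python sketch is illustrative only; the values of L0, Cp, C0 and α are arbitrary assumptions, not measured data for helium.

# Entropy of vaporization per mole, delta_S = L(T)/T with L(T) = L0 + Cp*T,
# diverges as T -> 0; this is the violation described above if a gas phase persisted.
L0, Cp = 80.0, 20.0                 # assumed constants, J/mol and J/(mol K)
for T in [1.0, 0.1, 0.01, 0.001]:
    print(f"T = {T:6.3f} K   delta_S_vap = {(L0 + Cp * T) / T:12.1f} J/(mol K)")

# By contrast, a power-law heat capacity C(T) = C0*T**alpha with alpha > 0 gives a
# bounded entropy integral of C(T)/T from T0 up to T_high as T0 -> 0.
C0, alpha, T_high = 1.0, 1.0, 1.0   # assumed constants
for T0 in [0.1, 0.01, 0.001]:
    n = 100000
    dT = (T_high - T0) / n
    S = sum(C0 * (T0 + (i + 0.5) * dT) ** (alpha - 1) * dT for i in range(n))
    print(f"T0 = {T0:.3f} K   entropy integral = {S:.4f}")  # approaches C0*T_high**alpha/alpha = 1.0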
Nature resolves this vapor-pressure paradox as follows: at temperatures below about 100 mK, the vapor pressure, about {\displaystyle 10^{-31}\mathrm {mmHg} }, is so low that the gas density is lower than that of the best vacuum in the universe. In other words, below 100 mK there is simply no gas above the liquid.: 91  === Miscibility === If liquid helium with mixed 3He and 4He were cooled to absolute zero, the liquid would have to have zero entropy. This means either that the atoms are perfectly ordered as a mixed liquid, which is impossible for a liquid, or that they fully separate into two layers of pure liquid. This is precisely what happens. For example, if a solution with three 3He atoms for every two 4He atoms were cooled, it would begin to separate at 0.9 K, each layer becoming purer and purer, until at absolute zero the upper layer is purely 3He and the lower layer purely 4He.: 129  === Surface tension === Let σ be the surface tension of the liquid; then the entropy per unit area is −dσ/dT. So if a liquid can exist down to absolute zero, then, since its entropy is the same no matter its shape at absolute zero, its entropy per unit area must converge to zero. That is, its surface tension becomes constant at low temperatures.: 87  In particular, the surface tension of 3He is well approximated by σ = σ0 − bT2 for some parameters σ0 and b. === Latent heat of melting === The melting curves of 3He and 4He both extend down to absolute zero at finite pressure. At the melting pressure, liquid and solid are in equilibrium. The third law demands that the entropies of the solid and liquid are equal at T = 0. As a result, the latent heat of melting is zero, and by the Clausius–Clapeyron equation the slope of the melting curve extrapolates to zero.: 140  === Thermal expansion coefficient === The thermal expansion coefficient is defined as αV = (1/Vm)(∂Vm/∂T)p. With the Maxwell relation (∂Vm/∂T)p = −(∂Sm/∂p)T and Eq. (8) with X = p, it follows that αV must vanish as T → 0. So the thermal expansion coefficient of all materials must go to zero at zero kelvin. == See also == Adiabatic process Ground state Laws of thermodynamics Quantum thermodynamics Residual entropy Thermodynamic entropy Timeline of thermodynamics, statistical mechanics, and random processes Quantum heat engines and refrigerators == References == == Further reading == Goldstein, Martin & Inge F. (1993) The Refrigerator and the Universe. Cambridge MA: Harvard University Press. ISBN 0-674-75324-0. Chpt. 14 is a nontechnical discussion of the Third Law, one including the requisite elementary quantum mechanics. Braun, S.; Ronzheimer, J. P.; Schreiber, M.; Hodgman, S. S.; Rom, T.; Bloch, I.; Schneider, U. (2013). "Negative Absolute Temperature for Motional Degrees of Freedom". Science. 339 (6115): 52–5. arXiv:1211.0545. Bibcode:2013Sci...339...52B. doi:10.1126/science.1227831. PMID 23288533. S2CID 8207974. Jacob Aron (3 January 2013). "Cloud of atoms goes beyond absolute zero". New Scientist. Levy, A.; Alicki, R.; Kosloff, R. (2012). "Quantum refrigerators and the third law of thermodynamics". Phys. Rev. E. 85 (6): 061126. arXiv:1205.1347. Bibcode:2012PhRvE..85f1126L. doi:10.1103/PhysRevE.85.061126. PMID 23005070. S2CID 24251763.
Wikipedia/Third_law_of_thermodynamics
The Air Movement and Control Association International, Inc. (AMCA) is an international trade body that sets standards for Heating, Ventilation and Air Conditioning (HVAC) equipment. It rates fans for balance and vibration, aerodynamic performance, air density, speed and efficiency. AMCA was formed in 1955 from several earlier trade associations, which can be traced back to the fan-testing requirements of the US Navy in 1923. It is a nonprofit organization that issues over 60 publications and standards, including testing methods, a Certified Ratings Program (CRP), application guides, educational texts, and safety guides. == Membership and Activities == AMCA membership is open to any company that manufactures or holds the design of a product that falls under the AMCA scope. AMCA publications and standards are developed when sufficient interest has been expressed by AMCA members. Publication and standard writing committees are composed of volunteers, who include both AMCA members and interested individuals with a technical background. All AMCA standards are proposed as American National Standards. AMCA lobbies code bodies on behalf of member companies to ensure that member company products are represented in local and national codes. AMCA hosts two educational seminars in alternating years. The AMCA inside Technical Seminar provides engineers with basic information regarding devices and engineering principles relevant to the air movement and air control industry. The Engineering Conference is a discussion forum for presentation of engineering papers written by engineers and experts in the air movement and control industry. U.S. licensed engineers attending either seminar are eligible for approximately 12 Professional Development Hours. The AMCA headquarters is located at 30 West University Drive, Arlington Heights, IL 60004 USA. == Certified Ratings Program == The AMCA Certified Ratings Program (CRP) is a program that allows all manufacturers of air movement and air control devices to obtain an AMCA Seal when their equipment has been tested and rated in accordance with recognized test standards. The goal of the AMCA CRP is to ensure that a manufacturer's product lines have been tested and rated in conformance with an approved test standard and rating requirement. Only after the product has been tested and the manufacturer's cataloged ratings have been submitted to and approved by AMCA International's staff can performance seals be displayed in literature and on equipment. Additionally, each certified/licensed product line is subject to continuing check tests every three years in AMCA International's laboratory or one of AMCA International's independent accredited laboratories. == Publications and Standards == AMCA International publishes over 64 publications and standards, including testing methods, a Certified Ratings Program (CRP), application guides, educational texts, and safety guides. AMCA is an accredited ANSI developer, and all AMCA standards are proposed as American National Standards. === Testing Standards === ANSI/AMCA Standard 204 - Balance Quality and Vibration Levels for Fans addresses the subjects of fan balance and vibration. It defines appropriate fan balance quality and operating vibration levels for those who specify, manufacture, use, and maintain fan equipment.
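For readers unfamiliar with balance quality grades, the arithmetic behind them is straightforward. The Python sketch below uses the generic ISO-style definition of a balance quality grade (grade value = permissible specific unbalance times angular velocity); it illustrates the concept only, with assumed numbers, and is not taken from AMCA 204 itself, which defines its own fan-specific balance and vibration categories.

import math

def permissible_unbalance_g_mm(grade_mm_per_s, rotor_mass_kg, speed_rpm):
    """Permissible residual unbalance for a rotor, using the generic ISO-style
    balance quality grade definition G = e_per * omega (mm/s)."""
    omega = 2.0 * math.pi * speed_rpm / 60.0   # angular velocity, rad/s
    e_per_mm = grade_mm_per_s / omega          # permissible specific unbalance, mm
    return e_per_mm * rotor_mass_kg * 1000.0   # kg*mm converted to g*mm

# Assumed example: a 50 kg fan impeller balanced to grade G 6.3 at 1800 rpm.
print(round(permissible_unbalance_g_mm(6.3, 50.0, 1800.0)), "g*mm")  # about 1671 g*mm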
ANSI/AMCA Standard 210 - Laboratory Methods of Testing Fans for Certified Aerodynamic Performance Rating establishes uniform test methods for a laboratory test of a fan or other air moving device to determine its aerodynamic performance in terms of airflow rate, pressure developed, power consumption, air density, speed of rotation, and efficiency for rating or guarantee purposes. It applies to a fan or other air moving device when air is used as the test gas with the following exceptions: (a) air circulating fans (ceiling fans, desk fans); (b) positive pressure ventilators; (c) compressors with inter-stage cooling; (d) positive displacement machines; (e) test procedures to be used for design, production, or field testing. ANSI/AMCA Standard 220 - Laboratory Methods of Testing Air Curtain Units For Aerodynamic Performance Rating establishes uniform methods for laboratory testing of air curtain units to determine aerodynamic performance in terms of airflow rate, outlet air velocity uniformity, power consumption and air velocity projection for rating or guarantee purposes. ANSI/AMCA Standard 230 - Laboratory Methods of Testing Air Circulating Fans for Rating and Certification establishes uniform methods of laboratory testing for air circulating fans in order to determine performance in terms of thrust for rating, certification or guarantee purposes. The 1999 version, in general, described a method to determine the thrust developed and used a simple equation to convert the measured thrust to airflow. During the periodic review process it was determined that the calculated airflow was too high, therefore this version no longer artificially calculates airflow, but leaves the measured performance in units of thrust. ANSI/AMCA Standard 240 - Laboratory Methods of Testing Positive Pressure Ventilators for Aerodynamic Performance Rating establishes a uniform method of laboratory testing of positive pressure ventilators (PPVs) in order to determine the aerodynamic performance in terms of airflow rate, pressure, air density, and speed of rotation for rating or guarantee purposes. PPVs are portable fans that can be positioned relative to an opening of an enclosure and cause it to be positively pressurized by discharge air. ANSI/AMCA Standard 250 - Laboratory Methods of Testing Jet Tunnel Fans for Performance was developed in response to the need for a standard method of testing jet tunnel fans, sometimes called impulse fans, which have seen increasing use in the United States. The test procedures outlined in this standard are in harmony with those found in ISO 13350. This standard deals with the determination of those technical characteristics needed to describe all aspects of the performance of jet tunnel fans. It does not cover those fans designed for ducted applications or those designed solely for air circulation, e.g., ceiling fans and table fans. The test procedures described relate to laboratory conditions. The measurement of performance under in-situ conditions is not included. ANSI/AMCA Standard 260 - Laboratory Methods of Testing Induced Flow Fans for Rating establishes a uniform method of laboratory testing of induced flow fans in order to determine their aerodynamic performance in terms of inlet and outlet airflow rate, pressure developed, power consumption, air density, speed of rotation, and efficiency. This standard is an adjunct to AMCA 210 in order to accommodate the induced flow fan's unique characteristics. 
Induced flow fans are housed fans whose outlet airflow is greater than their inlet airflow due to induced airflow. They are generally used in laboratory or hazardous atmosphere exhaust applications. ANSI/AMCA Standard 300 - Reverberant Room Method for Sound Testing of Fans applies to fans of all types and sizes. It is limited to the determination of airborne sound emission for the specified setups. Vibration is not measured, nor is the sensitivity of airborne sound emission to vibration effects determined. The test setup requirements in this standard establish the laboratory conditions necessary for a successful test. Rarely will it be possible to meet these requirements in a field situation. This standard is not intended for field measurements. ANSI/AMCA Standard 301 - Methods of Calculating Fan Sound Ratings from Laboratory Test Data establishes standard methods for calculating consistent fan sound ratings from laboratory test data. ANSI/AMCA Standard 320 - Laboratory Methods of Sound Testing of Fans Using Sound Intensity establishes a method of determining the octave band sound power levels of a fan. The sound power levels are determined using sound intensity measurements on a measurement surface that encloses the sound source. Guidelines are provided on suitable test environment acoustical characteristics, the measurement surface, and the number of intensity measurements. Test setups are designated generally to represent the physical orientation of fans as installed following ANSI/AMCA 210 and also used in ANSI/AMCA 300. The method is reproducible when all requirements of the method are met. This standard applies to fans of all types and sizes. It is limited to the determination of airborne sound emission for the specified setups. ANSI/AMCA Standard 500-D - Laboratory Methods of Testing Dampers for Rating establishes uniform laboratory test methods for dampers including air leakage, pressure drop, dynamic closure, operational torque and elevated temperature testing. ANSI/AMCA Standard 500-L - Laboratory Methods of Testing Louvers for Rating establishes uniform test methods for louvers including air leakage, pressure drop, water penetration, wind driven rain and operational torque. ANSI/AMCA Standard 510 - Methods of Testing Heavy Duty Dampers for Rating establishes testing methods to be used in measuring the performance of dampers generally described as “custom design,” “heavy duty” or “severe service” normally used in applications where elevated temperature, erosion and/or corrosion conditions exist, including dampers which are used to control the flow of gas, or to isolate one section of a duct system from another section of the system. ANSI/AMCA Standard 520 - Laboratory Methods for Testing Actuators establishes an industry standard for minimum rating and testing of actuators used on fire/smoke dampers. Requirements cover torque or force rating, long term holding, operational life, elevated temperature performance, periodic maintenance, production, and sound testing for both pneumatic and electric operators. AMCA Standard 540 - Test Method for Louvers Impacted by Wind Borne Debris establishes uniform methods for laboratory testing of louvers that are impact tested with the large missile described in ASTM E 1996-04 and E 1886–05. ANSI/AMCA Standard 610 - Laboratory Methods of Testing Airflow Measurement Stations for Performance Rating establishes a laboratory test method under which a permanently installed airflow measurement station may be tested for rating. 
Includes descriptions of the test facility requirements, reference airflow sources and presentation of test results. AMCA Standard 803 - Industrial Process/Power Generation Fans: Site Performance Test Standard establishes uniform methods to determine the aerodynamic performance of large fans in-situ. Includes procedures for determining whether the airflow pattern is such that a test can be conducted with confidence. === Certified Ratings Program === The following publications provide specifications and guidelines for participants in the Certified Ratings Program. AMCA Publication 11 - Certified Ratings Program Operating Manual contains requirements common to all AMCA International's Certified Ratings Programs. Specific requirements that only pertain to a category of product will be found in a Product Rating Manual for that category. Publication 11 is effective on the same date as a Product Rating Manual that references it. AMCA Publication 111 - Laboratory Accreditation Program outlines procedure for obtaining AMCA International recognition of a laboratory as qualified to perform tests in accordance with AMCA International test methods. Registration qualifications may be applied to individual test results. The latest revision adds AMCA 260–07, Laboratory Methods of Testing Induced Flow Fans for Rating, to the list of standard test methods in which a laboratory can be accredited to perform. AMCA Publication 211 - Certified Ratings Program - Product Rating Manual for Fan Air Performance describes in detail the certification and check test procedures used in implementing the program under which the AMCA International Certified Ratings Seal for air performance is granted. AMCA Publication 212 - Certified Ratings Program - Product Rating Manual for Smoke Management Fan Performance describes in detail the certification and check test procedures used in implementing the program under which the AMCA International Certified Ratings Seal for Smoke Management Fans is granted. AMCA Publication 311 - Certified Ratings Program - Product Rating Manual for Fan Sound Performance explains in detail the certification procedures for both ducted and non-ducted fans under which the use of the AMCA International Certified Ratings Program Seal for sound performance is granted. AMCA Publication 511 - Certified Ratings Program - Product Rating Manual for Air Control Devices describes the certification procedures under which the AMCA International Certified Ratings Seal is granted for sound, air performance, air leakage, water penetration and wind driven rain for air control products. AMCA Publication 611 - Certified Ratings Program - Product Rating Manual for Airflow Measurement Devices describes in detail the certification procedure for airflow measurement stations under which the use of the AMCA International Certified Ratings Seal for Airflow Measurement Performance is granted. AMCA Publication 1011 - Certified Ratings Program - Product Rating Manual for Acoustical Duct Silencers provides a detailed description of the procedure under which the AMCA International Certified Ratings Seal is granted for acoustical duct silencers. === Application Guides === AMCA Publication 501 - Application Manual for Louvers provides general information and comments on factors to be considered when designing or specifying installations requiring louvers. It also serves as a guide to understanding the various types of louvers available and includes items to be considered to ensure their proper use. 
AMCA Publication 502 - Damper Application Manual for Heating, Ventilating, and Air Conditioning is a guide to understanding the various types of dampers available and items to be considered for their proper use. Dampers classified as fire dampers, heat dampers, and smoke dampers are not included. Includes much of the information not found in the companion guide, AMCA Publication 503. AMCA Publication 503 - Fire, Ceiling (Radiation), Smoke, and Fire/Smoke Damper Application Manual details information for individuals that design, purchase, or specify systems in which fire and/or smoke is a factor. Includes much of the information not found in the companion guide, AMCA Publication 502. AMCA Publication 600 - Application Manual for Airflow Measurement Stations is intended to assist designers and users with the proper application, performance considerations, selection and limitations of airflow measurement stations. === Educational Texts === AMCA Publication 99 - Standards Handbook is a compilation of important AMCA standards that include the Fan Laws, common industry terminology and symbols, classifications for spark resistant construction, and various other useful data. AMCA Publication 200 - Air Systems. Part 1 of the Fan Application Manual, this publication provides basic information necessary for the design of energy efficient air systems. This edition includes examples in both the Inch-Pound and SI systems as the reader is provided with basic information on air systems. AMCA Publication 201 - Fans and Systems. Part 2 of the Fan Application Manual, discusses the effect of inlet and outlet connections on fan performance. It includes separate axial fan factors and is aimed primarily at the designer of the air moving system. AMCA Publication 202 - Troubleshooting. Part 3 of the Fan Application Manual, helps to identify and correct problems with the performance and operation of the air moving system after fan installation. AMCA Publication 203 - Field Performance Measurement of Fan Systems. Part 4 of the Fan Application Manual, reviews the methods of making field measurements for calculating the actual performance of the fan and system. AMCA Publication 801 - Industrial Process/Power Generation Fans: Specification Guidelines provides information on testing and rating power plant fans and covers construction features and related accessories. Sample equipment specifications are included which outline the information a fan manufacturer requires to select the best fan for an application. Common fan industry practices are also defined and explained. AMCA Publication 802 - Establishing Performance Using Laboratory Models outlines methods used to determine the performance of full size power plant fans from tests of models. Provides information on variables that affect fan ratings and establishes rules and limitations in converting the performance of geometrically similar fans. AMCA Publication 850 - Heavy Duty Dampers for Isolation and Control provides basic pertinent information in order to simplify communications between damper manufacturers and designers, specifiers, and users of such equipment. External Shading Devices in Commercial Buildings - The Impact on Energy Use, Peak Demand, and Glare Control by John Carmody details the several advantages that contribute to a more sustainable building such as reducing solar gain, peak electricity demand and glare conditions. 
It is intended to help the designer quickly narrow the range of possibilities and understand the impact of shading devices in commercial office buildings during the early stages of design. Fan Acoustics - Noise Generation and Control Methods by Alain Guédel discusses the sources of noise generation in fan construction and system installation. == Testing Laboratory == The AMCA testing laboratory is an A2LA accredited laboratory that tests air control and air movement devices for members of the air control and air movement industry. The AMCA lab comprises the following: Four Reverberant Sound Rooms ranging in size from 6,300 cu.ft. to 61,700 cu.ft. Two Water Test Facilities with chambers capable of simulating eight inches (200 mm) of rain-fall per hour and wind speeds of 50 mph (80 km/h). Multi Nozzle Chambers that are capable of measuring airflow up to 88,000 cfm. Circulator Fan Facility capable of testing 96 inch fans. Acoustic Duct Silencer Facility ISO/IEC 17025 accredited AMCA International also oversees 40 accredited laboratories and two independent, accredited laboratories located in Taiwan and Singapore. Additional independent AMCA accredited laboratories are under construction in Korea and China. == History == The Air Movement and Control Association, International was founded in 1955 when the National Association of Fan Manufacturers (NAFM) combined with the Power Fan Manufacturers Association (PFMA) and the Industrial Unit Heater Association (IUHA). Originally known as the Air Moving and Conditioning Association, AMCA was retitled in 1960 to its current name. In 1996, the AMCA Board of Directors added the term 'International' to AMCA's name in order to better indicate the global scope of AMCA's membership. In 1923, the first edition of the Fan Test Codes was developed as a result of problems encountered by the U.S. Navy in regards to performance ratings of fans being procured during World War 1. To resolve the issue of variations in testing methods and performance ratings, a joint committee of NAFM and the American Society of Heating and Ventilating Engineers (ASHVE) was formed to develop a standard test code for fans. When NAFM combined with PFMA and IUHA, the organization's major concern was the accuracy and practicality of the pitot traverse method of testing, and a committee was formed to study various test methods and develop a new test code. To aid in the study, AMCA sponsored research by the Battelle Memorial Institute to compare the test results using the pitot tube test methods and nozzle test methods. The result of this effort was a new revision of the test code, which was published in 1960 as AMCA Standard Test Code for Air Moving Devices, Bulletin 210. Standard 210 became widely accepted and known as virtually the only standard used in the United States and Canada. In 1985, AMCA expanded its scope to include air control devices, such as louvers, dampers, and airflow measurement stations. In 1996, AMCA's first accredited laboratory, ITRI, began testing in Taiwan. In 2008, AMCA's second independent accredited laboratory, AFMA, began testing in Singapore. == See also == Sheet Metal and Air Conditioning Contractors' National Association ACCA ASHRAE ASTM Centrifugal fan == References == == External links == official website
Wikipedia/Air_Movement_and_Control_Association
Heat recovery ventilation (HRV), also known as mechanical ventilation heat recovery (MVHR), is a ventilation system that recovers energy by operating between two air sources at different temperatures. It is used to reduce the heating and cooling demands of buildings. By recovering the residual heat in the exhaust air, the fresh air introduced into the air conditioning system is preheated (or pre-cooled) before it enters the room or before the air cooler of the air conditioning unit carries out further heat and moisture treatment. A typical heat recovery system in buildings comprises a core unit, channels for fresh and exhaust air, and blower fans. Building exhaust air is used as either a heat source or heat sink, depending on the climate conditions, time of year, and requirements of the building. Heat recovery systems typically recover about 60–95% of the heat in the exhaust air and have significantly improved the energy efficiency of buildings. Energy recovery ventilation (ERV) is the energy recovery process in residential and commercial HVAC systems that exchanges the energy contained in normally exhausted air of a building or conditioned space, using it to treat (precondition) the incoming outdoor ventilation air. The specific equipment involved may be called an Energy Recovery Ventilator, also commonly referred to simply as an ERV. An ERV is a type of air-to-air heat exchanger that transfers latent heat as well as sensible heat. Because both temperature and moisture are transferred, ERVs are described as total enthalpic devices. In contrast, a heat recovery ventilator (HRV) can transfer only sensible heat, and HRVs are therefore described as sensible-only devices. In other words, all ERVs are HRVs, but not all HRVs are ERVs. It is incorrect to use the terms HRV, AAHX (air-to-air heat exchanger), and ERV interchangeably. During the warmer seasons, an ERV system pre-cools and dehumidifies; during cooler seasons the system humidifies and pre-heats. An ERV system helps HVAC design meet ventilation and energy standards (e.g., ASHRAE), improves indoor air quality and reduces total HVAC equipment capacity, thereby reducing energy consumption. ERV systems enable an HVAC system to maintain an indoor relative humidity of 40–50% in essentially all conditions. ERVs must use power for a blower to overcome the pressure drop in the system, and hence incur a slight energy demand. == Working principle == A heat recovery system is designed to supply conditioned air to the occupied space to maintain a certain temperature. It keeps a house ventilated while recovering the heat emitted from the indoor environment. The purpose of a heat recovery system is to transfer thermal energy between two fluids, between a fluid and a solid, or between a solid surface and a fluid that are at different temperatures and in thermal contact. In most heat recovery systems there is no direct mixing of the two fluids. In some heat recovery systems, fluid leakage is observed due to pressure differences between the fluids, resulting in a mixture of the two fluids. The purpose of an energy recovery system is to reduce the energy required for heating, cooling, or ventilating the space by repurposing the exhaust air's energy. == Types == === Thermal wheel === === Fixed plate heat exchanger === Fixed plate heat exchangers have no moving parts, and consist of alternating layers of plates that are separated and sealed.
Typical flow is cross-current, and since the majority of plates are solid and non-permeable, sensible-only transfer is the result. The tempering of incoming fresh air is done by a heat or energy recovery core. In this case, the core is made of aluminum or plastic plates. Humidity levels are adjusted through the transfer of water vapor. This is done either with a rotating wheel containing a desiccant material or with permeable plates. Enthalpy plates were introduced in 2006 by Paul, a company specializing in ventilation systems for passive houses: a cross-counterflow air-to-air heat exchanger built with a humidity-permeable material. Polymer fixed-plate countercurrent energy recovery ventilators were introduced in 1998 by Building Performance Equipment (BPE), a residential, commercial, and industrial air-to-air energy recovery manufacturer. These heat exchangers can be introduced both as a retrofit, for increased energy savings and fresh air, and in new construction. In new construction situations, energy recovery will effectively reduce the required heating/cooling capacity of the system. The percentage of the total energy saved will depend on the efficiency of the device (up to 90% sensible) and the latitude of the building. Due to the need to use multiple sections, fixed plate energy exchangers are often associated with high pressure drop and larger footprints. Due to their inability to offer a high amount of latent energy transfer, these systems also have a high chance of frosting in colder climates. The technology patented by the Finnish company RecyclingEnergy Int. Corp. is based on a regenerative plate heat exchanger that takes advantage of the humidity of the air by cyclical condensation and evaporation, i.e. latent heat, enabling not only high annual thermal efficiency but also microbe-free plates thanks to its self-cleaning/washing method. Therefore, the unit is called an enthalpy recovery ventilator rather than a heat or energy recovery ventilator. The company's patented LatentHeatPump is based on its enthalpy recovery ventilator, with a COP of 33 in the summer and 15 in the winter. Fixed plate heat exchangers are the most commonly used type of heat exchanger and have been under development for 40 years. Thin metal plates are stacked with a small spacing between plates. Two different air streams pass through these spaces, adjacent to each other. Heat transfer occurs as heat conducts through the plate from one air stream to the other. The efficiency of these devices has reached 90% in transferring sensible heat from one air stream to another. The high levels of efficiency are attributed to the high heat transfer coefficients of the materials used, as well as the operational pressure and temperature range. === Heat pipes === Heat pipes are a heat recovery device that uses a multi-phase process to transfer heat from one air stream to another. Heat is transferred using an evaporator and condenser within a wicked, sealed pipe containing a fluid which undergoes a continuous phase change to transfer heat. The working fluid within the pipes changes from a liquid to a gas in the evaporator section, absorbing thermal energy from the warm air stream. The gas condenses back to a liquid in the condenser section, where the thermal energy is released into the cooler air stream, raising its temperature. The fluid/gas is transported from one side of the heat pipe to the other through pressure, wick forces or gravity, depending on the arrangement of the heat pipe.
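As a concrete illustration of the sensible-only recovery described above, the sketch below estimates the supply air temperature and the recovered heating power for a fixed-plate exchanger using the standard effectiveness relation. The effectiveness, airflow and temperatures are assumed example values, not data for any particular product.

def sensible_recovery(t_outdoor_c, t_exhaust_c, effectiveness, airflow_m3_s,
                      rho=1.2, cp=1005.0):
    """Supply air temperature and recovered power for a sensible-only exchanger.
    rho and cp are approximate values for air (kg/m^3, J/(kg K))."""
    t_supply = t_outdoor_c + effectiveness * (t_exhaust_c - t_outdoor_c)
    power_w = rho * airflow_m3_s * cp * (t_supply - t_outdoor_c)
    return t_supply, power_w

# Assumed winter example: -5 C outdoors, 21 C exhaust air, 80% effectiveness, 50 L/s airflow.
t_supply, power = sensible_recovery(-5.0, 21.0, 0.80, 0.05)
print(f"supply air about {t_supply:.1f} C, recovered about {power:.0f} W")  # 15.8 C, ~1250 W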
=== Run-around === A run-around system is a hybrid heat recovery system that incorporates characteristics from other heat recovery technologies to form a single device, capable of recovering heat from one air stream and delivering it to another a significant distance away. In the general case of run-around heat recovery, two fixed plate heat exchangers are located in two separate air streams and are linked by a closed loop containing a fluid that is continually pumped between the two heat exchangers. The fluid is heated and cooled constantly as it flows around the loop, providing heat recovery. The constant flow of the fluid through the loop requires pumps to move it between the two heat exchangers. Though this is an additional energy demand, using pumps to circulate fluid is less energy intensive than using fans to circulate air. === Phase change materials === Phase change materials, or PCMs, are a technology that is used to store sensible and latent heat within a building structure at a higher storage capacity than standard building materials. PCMs have been studied extensively due to their ability to store heat and transfer heating and cooling demands from conventional peak times to off-peak times. The concept of using the thermal mass of a building for heat storage (the physical structure of the building absorbing heat to help cool the air) has long been understood and investigated. A study of PCMs in comparison to traditional building materials has shown that the thermal storage capacity of PCMs is twelve times higher than that of standard building materials over the same temperature range. The pressure drop across PCMs has not been investigated, so it is not possible to comment on the effect that the material may have on air streams. However, since a PCM can be incorporated directly into the building structure and so does not obstruct the flow in the way other heat exchanger technologies do, it can be suggested that no pressure loss is created by the inclusion of PCMs in the building fabric. == Applications == === Fixed plate heat exchangers === Mardiana et al. integrated a fixed plate heat exchanger into a commercial wind tower, highlighting the advantages of this type of system as a means of zero-energy ventilation that can be simply modified. Full scale laboratory testing was undertaken in order to determine the effects and efficiency of the combined system. A wind tower was integrated with a fixed plate heat exchanger and was mounted centrally in a sealed test room. The results from this study indicate that the combination of a wind tower passive ventilation system and a fixed plate heat recovery device could provide an effective combined technology to recover waste heat from exhaust air and cool incoming warm air with zero energy demand. Though no quantitative data for the ventilation rates within the test room were provided, it can be assumed that, due to the high pressure loss across the heat exchanger, these were significantly reduced compared with the standard operation of a wind tower. Further investigation of this combined technology is essential in understanding the air flow characteristics of the system. === Heat pipes === Due to the low pressure loss of heat pipe systems, more research has been conducted into integrating this technology into passive ventilation than for other heat recovery systems. Commercial wind towers were again used as the passive ventilation system for integrating this heat recovery technology.
This further enhances the suggestion that commercial wind towers provide a worthwhile alternative to mechanical ventilation, capable of supplying and exhausting air at the same time. === Run-around systems === Flaga-Maryanczyk et al. conducted a study in Sweden that examined a passive ventilation system integrating a run-around system, with a ground source heat pump as the heat source to warm incoming air. Experimental measurements and weather data were taken from the passive house used in the study. A CFD model of the passive house was created, with the measurements taken from the sensors and weather station used as input data. The model was run to calculate the effectiveness of the run-around system and the capabilities of the ground source heat pump. Ground source heat pumps provide a reliable source of consistent thermal energy when buried 10–20 m below the ground surface. The ground temperature is warmer than the ambient air in winter and cooler than the ambient air in summer, providing both a heat source and a heat sink. It was found that in February, the coldest month in the climate, the ground source heat pump was capable of delivering almost 25% of the heating needs of the house and occupants. === Phase change materials === The majority of research interest in PCMs concerns the integration of phase change materials into traditional porous building materials such as concrete and wall boards. Kosny et al. analyzed the thermal performance of buildings that have PCM-enhanced construction materials within the structure. Analysis showed that the addition of PCMs is beneficial in terms of improving thermal performance. A significant drawback of PCMs used in a passive ventilation system for heat recovery is the lack of instantaneous heat transfer across different airstreams. Phase change materials are a heat storage technology: heat is stored within the PCM until the air temperature has fallen to a level at which it can be released back into the air stream. No research has been conducted into the use of PCMs between two airstreams of different temperatures where continuous, instantaneous heat transfer can occur. An investigation into this area would be beneficial for passive ventilation heat recovery research. == Advantages and disadvantages == === Types of energy recovery devices === Total energy exchange is available only on hygroscopic units and condensate return units. == Environmental impacts == Energy saving is one of the key issues both for reducing fossil fuel consumption and for protecting the global environment. The rising cost of energy and global warming have underlined the need to develop improved energy systems that increase energy efficiency while reducing greenhouse gas emissions. One of the most effective ways to reduce energy demand is to use energy more efficiently, and waste heat recovery has therefore become popular in recent years. About 26% of industrial energy is still wasted as hot gas or fluid in many countries. Over the last two decades, however, considerable attention has been paid to recovering waste heat from various industries and to optimizing the units used to absorb heat from waste gases. These efforts help to reduce both global warming and energy demand. === Energy consumption === In most industrialized countries, HVAC is responsible for one-third of the total energy consumption.
Moreover, cooling and dehumidifying fresh ventilation air accounts for 20–40% of the total energy load for HVAC in hot and humid climatic regions. However, that percentage can be higher where 100% fresh air ventilation is required. This means more energy is needed to meet the fresh air requirements of the occupants. Heat recovery is becoming more necessary due to the increased energy cost of treating fresh air. The main purpose of heat recovery systems is to mitigate the energy consumption of buildings for heating, cooling, and ventilation by recovering waste heat. In this regard, stand-alone or combined heat recovery systems can be incorporated into residential or commercial buildings for energy saving. Reducing energy consumption also contributes notably to reducing greenhouse gas emissions. == Energy recovery ventilation == === Importance === Nearly half of global energy is used in buildings, and half of the heating/cooling cost is caused by ventilation when it is done by the "open window" method to meet the regulations. Secondly, energy generation and the grid are sized to meet the peak demand for power. Proper ventilation with heat recovery is a cost-efficient, sustainable and quick way to reduce global energy consumption, provide better indoor air quality (IAQ), and protect buildings and the environment. === Methods of transfer === During the cooling season, the system works to cool and dehumidify the incoming, outside air. To do this, the system takes the rejected heat and sends it into the exhaust airstream. Subsequently, this air cools the condenser coil at a lower temperature than if the rejected heat had not entered the exhaust airstream. During the heating seasons, the system works in reverse. Instead of discharging the heat into the exhaust airstream, the system draws heat from the exhaust airstream in order to pre-heat the incoming air. At this stage, the air passes through a primary unit and then into the space being conditioned. With this type of system, it is normal during the cooling seasons for the exhaust air to be cooler than the ventilation air and, during the heating seasons, warmer than the ventilation air. It is for this reason that the system works efficiently and effectively. The coefficient of performance (COP) will increase as the conditions become more extreme (i.e., hotter and more humid for cooling and colder for heating). === Efficiency === The efficiency of an ERV system is the ratio of energy transferred between the two air streams compared with the total energy transported through the heat exchanger. With the variety of products on the market, efficiency will vary as well. Some of these systems have been known to have heat exchange efficiencies as high as 70–80%, while others are as low as 50%. Even though this lower figure is preferable to the basic HVAC system, it is not up to par with the rest of its class. Studies are being done to increase the heat transfer efficiency to 90%. The use of modern low-cost gas-phase heat exchanger technology will allow for significant improvements in efficiency. The use of high conductivity porous material is believed to produce an exchange effectiveness in excess of 90%, producing a fivefold improvement in energy recovery. The Home Ventilating Institute (HVI) has developed a standard test for any and all units manufactured within the United States. Nevertheless, not all units have been tested. It is imperative to investigate efficiency claims, comparing data produced by HVI with that produced by the manufacturer.
(Note: all units sold in Canada are placed through the R-2000 program, a standard test equivalent to the HVI test). == Exhaust air heat pump == An exhaust air heat pump (EAHP) extracts heat from the exhaust air of a building and transfers the heat to the supply air, hot tap water and/or hydronic heating system (underfloor heating, radiators). This requires at least mechanical exhaust, but mechanical supply is optional; see mechanical ventilation. This type of heat pump requires a certain air exchange rate to maintain its output power. Since the inside air is approximately 20–22 degrees Celsius all year round, the maximum output power of the heat pump does not vary with the seasons or the outdoor temperature. Air leaving the building when the heat pump's compressor is running is usually at around −1 °C in most versions. Thus, the unit is extracting heat from the air that needs to be changed (at a rate of around half an air change per hour). Air entering the house is of course generally warmer than the air processed through the unit, so there is a net 'gain'. Care must be taken that these are used only in the correct type of house. Exhaust air heat pumps have minimum flow rates, so that when installed in a small flat the airflow chronically over-ventilates the flat and increases heat loss by drawing in large amounts of unwanted outside air. Some models, though, can take in additional outdoor air to counter this; this air is also fed to the compressor to avoid over-ventilation. Most earlier exhaust air heat pumps deliver a low heat output to the hot water and heating of just around 1.8 kW from the compressor/heat pump process, but if that falls short of the building's requirements, additional heat will be automatically triggered in the form of immersion heaters or an external gas boiler. The immersion heater top-up can be substantial (if the wrong unit is selected), and when a unit with a 6 kW immersion heater operates at full output it will cost £1 per hour to run. == Issues == Between 2009 and 2013, some 15,000 brand-new social homes were built in the UK with NIBE EAHPs used as primary heating. Owners and housing association tenants reported crippling electric bills. High running costs are usual with exhaust air heat pumps and should be expected, due to the very small heat recovery with these units. Typically the ventilation air stream is around 31 litres per second and the heat recovery is no more than about 750 W. All additional heat necessary to provide heating and hot water comes from electricity, either as compressor electrical input or from the immersion heater. At outside temperatures below 0 degrees Celsius, this type of heat pump removes more heat from a home than it supplies. Over a year around 60% of the energy input to a property with an exhaust air heat pump will be from electricity. Many families are still battling with developers to have their EAHP systems replaced with more reliable and efficient heating, noting the success of residents in Coventry. == See also == == References == == External links == Animation explaining simply how HRV works Heat recovery in Industry Energy and Heat Recovery Ventilators (ERV/HRV) Write-up of Single Room MHRV (SRMHRV) in UK home Builder Insight Bulletin - Heat Recovery Ventilation http://www.engineeringtoolbox.com/heat-recovery-efficiency-d_201.html
Wikipedia/Energy_recovery_ventilation
Entropy is one of the few quantities in the physical sciences that require a particular direction for time, sometimes called an arrow of time. As one goes "forward" in time, the second law of thermodynamics says, the entropy of an isolated system can increase, but not decrease. Thus, entropy measurement is a way of distinguishing the past from the future. In thermodynamic systems that are not isolated, local entropy can decrease over time, accompanied by a compensating entropy increase in the surroundings; examples include objects undergoing cooling, living systems, and the formation of typical crystals. Much like temperature, entropy is an abstract concept, yet everyone has an intuitive sense of its effects. For example, it is often very easy to tell whether a video is being played forwards or backwards. A video may depict a wood fire that melts a nearby ice block; played in reverse, it would show a puddle of water turning a cloud of smoke into unburnt wood and freezing itself in the process. Surprisingly, in either case the vast majority of the laws of physics are not broken by these processes; the second law of thermodynamics is one of the only exceptions. When a law of physics applies equally well when time is reversed, it is said to show T-symmetry; in this case, entropy is what allows one to decide whether the video described above is playing forwards or in reverse, as we intuitively identify that the entropy of the scene increases only when the video is played forwards. Because of the second law of thermodynamics, entropy prevents macroscopic processes from showing T-symmetry. At a microscopic scale, the above judgements cannot be made. Watching a single smoke particle buffeted by air, it would not be clear whether a video was playing forwards or in reverse; in fact, it would not be possible to tell, as the laws which apply show T-symmetry. As it drifts left or right, qualitatively it looks no different; it is only when the gas is studied at a macroscopic scale that the effects of entropy become noticeable (see Loschmidt's paradox). On average it would be expected that the smoke particles around a struck match would drift away from each other, diffusing throughout the available space. It would be an astronomically improbable event for all the particles to cluster together, yet the movement of any one smoke particle cannot be predicted. By contrast, certain subatomic interactions involving the weak nuclear force violate the conservation of parity, but only very rarely. According to the CPT theorem, this means they should also be time irreversible, and so establish an arrow of time. This, however, is neither linked to the thermodynamic arrow of time nor related to the daily experience of time irreversibility. == Overview == The second law of thermodynamics allows for the entropy to remain the same regardless of the direction of time. If the entropy were constant in either direction of time, there would be no preferred direction. However, the entropy can only be a constant if the system is in the highest possible state of disorder, such as a gas that always was, and always will be, uniformly spread out in its container. The existence of a thermodynamic arrow of time implies that the system is highly ordered in one time direction only, which would by definition be the "past". Thus this law is about the boundary conditions rather than the equations of motion.
The second law of thermodynamics is statistical in nature, and therefore its reliability arises from the huge number of particles present in macroscopic systems. It is not impossible, in principle, for all 6 × 1023 atoms in a mole of a gas to spontaneously migrate to one half of a container; it is only fantastically unlikely—so unlikely that no macroscopic violation of the Second Law has ever been observed. The thermodynamic arrow is often linked to the cosmological arrow of time, because it is ultimately about the boundary conditions of the early universe. According to the Big Bang theory, the Universe was initially very hot with energy distributed uniformly. For a system in which gravity is important, such as the universe, this is a low-entropy state (compared to a high-entropy state of having all matter collapsed into black holes, a state to which the system may eventually evolve). As the Universe grows, its temperature drops, which leaves less energy [per unit volume of space] available to perform work in the future than was available in the past. Additionally, perturbations in the energy density grow (eventually forming galaxies and stars). Thus the Universe itself has a well-defined thermodynamic arrow of time. But this does not address the question of why the initial state of the universe was that of low entropy. If cosmic expansion were to halt and reverse due to gravity, the temperature of the Universe would once again grow hotter, but its entropy would also continue to increase due to the continued growth of perturbations and the eventual black hole formation, until the latter stages of the Big Crunch, when entropy would be much higher than now. == An example of apparent irreversibility == Consider the situation in which a large container is filled with two separated liquids, for example a dye on one side and water on the other. With no barrier between the two liquids, the random jostling of their molecules will result in them becoming more mixed as time passes. However, if the dye and water are mixed then one does not expect them to separate out again when left to themselves. A movie of the mixing would seem realistic when played forwards, but unrealistic when played backwards. If the large container is observed early on in the mixing process, it might be found only partially mixed. It would be reasonable to conclude that, without outside intervention, the liquid reached this state because it was more ordered in the past, when there was greater separation, and will be more disordered, or mixed, in the future. Now imagine that the experiment is repeated, this time with only a few molecules, perhaps ten, in a very small container. One can easily imagine that by watching the random jostling of the molecules it might occur—by chance alone—that the molecules became neatly segregated, with all dye molecules on one side and all water molecules on the other. That this can be expected to occur from time to time can be concluded from the fluctuation theorem; thus it is not impossible for the molecules to segregate themselves. However, for a large number of molecules it is so unlikely that one would have to wait, on average, many times longer than the current age of the universe for it to occur. Thus a movie that showed a large number of molecules segregating themselves as described above would appear unrealistic and one would be inclined to say that the movie was being played in reverse. See Boltzmann's second law as a law of disorder.
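The "fantastically unlikely" scale of such fluctuations is easy to quantify. Assuming each molecule independently has a probability of 1/2 of being in a given half of the container, the Python sketch below compares the ten-molecule case with a mole of gas; the numbers are illustrative of the argument above, not the result of any detailed simulation.

import math

def log10_prob_all_in_one_half(n_molecules):
    """log10 of the probability that all n molecules sit in one chosen half of the box,
    assuming each molecule is independently in either half with probability 1/2."""
    return -n_molecules * math.log10(2)

for n in (10, 6.022e23):
    print(f"N = {n:.3g}:  probability = 10^{log10_prob_all_in_one_half(n):.3g}")

# N = 10 gives about 10^-3 (roughly once per thousand observations), while a mole of gas
# gives about 10^(-1.8e23), which will never be observed on any realistic timescale.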
== Mathematics of the arrow == The mathematics behind the arrow of time, entropy, and basis of the second law of thermodynamics derive from the following set-up, as detailed by Carnot (1824), Clapeyron (1832), and Clausius (1854): Here, as common experience demonstrates, when a hot body T1, such as a furnace, is put into physical contact, such as being connected via a body of fluid (working body), with a cold body T2, such as a stream of cold water, energy will invariably flow from hot to cold in the form of heat Q, and given time the system will reach equilibrium. Entropy, defined as Q/T, was conceived by Rudolf Clausius as a function to measure the molecular irreversibility of this process, i.e. the dissipative work the atoms and molecules do on each other during the transformation. In this diagram, one can calculate the entropy change ΔS for the passage of the quantity of heat Q from the temperature T1, through the "working body" of fluid (see heat engine), which was typically a body of steam, to the temperature T2. Moreover, one could assume, for the sake of argument, that the working body contains only two molecules of water. Next, if we make the assignment, as originally done by Clausius: S = Q T {\displaystyle S={\frac {Q}{T}}} Then the entropy change or "equivalence-value" for this transformation is: Δ S = S f i n a l − S i n i t i a l {\displaystyle \Delta S=S_{\mathit {final}}-S_{\mathit {initial}}\,} which equals: Δ S = ( Q T 2 − Q T 1 ) {\displaystyle \Delta S=\left({\frac {Q}{T_{2}}}-{\frac {Q}{T_{1}}}\right)} and by factoring out Q, we have the following form, as was derived by Clausius: Δ S = Q ( 1 T 2 − 1 T 1 ) {\displaystyle \Delta S=Q\left({\frac {1}{T_{2}}}-{\frac {1}{T_{1}}}\right)} Thus, for example, if Q was 50 units, T1 was initially 100 degrees, and T2 was 1 degree, then the entropy change for this process would be 49.5. Hence, entropy increased for this process, the process took a certain amount of "time", and one can correlate entropy increase with the passage of time. For this system configuration, subsequently, it is an "absolute rule". This rule is based on the fact that all natural processes are irreversible by virtue of the fact that molecules of a system, for example two molecules in a tank, not only do external work (such as to push a piston), but also do internal work on each other, in proportion to the heat used to do work (see: Mechanical equivalent of heat) during the process. Entropy accounts for the fact that internal inter-molecular friction exists. == Correlations == An important difference between the past and the future is that in any system (such as a gas of particles) its initial conditions are usually such that its different parts are uncorrelated, but as the system evolves and its different parts interact with each other, they become correlated. For example, whenever dealing with a gas of particles, it is always assumed that its initial conditions are such that there is no correlation between the states of different particles (i.e. the speeds and locations of the different particles are completely random, up to the need to conform with the macrostate of the system). This is closely related to the second law of thermodynamics: For example, in a finite system interacting with finite heat reservoirs, entropy is equivalent to system-reservoir correlations, and thus both increase together. Take for example (experiment A) a closed box that is, at the beginning, half-filled with ideal gas. 
As time passes, the gas obviously expands to fill the whole box, so that the final state is a box full of gas. This is an irreversible process, since if the box is full at the beginning (experiment B), it does not become only half-full later, except for the very unlikely situation where the gas particles have very special locations and speeds. But this is precisely because we always assume that the initial conditions in experiment B are such that the particles have random locations and speeds. This is not correct for the final conditions of the system in experiment A, because the particles have interacted between themselves, so that their locations and speeds have become dependent on each other, i.e. correlated. This can be understood if we look at experiment A backwards in time, which we'll call experiment C: now we begin with a box full of gas, but the particles do not have random locations and speeds; rather, their locations and speeds are so particular that after some time they all move to one half of the box, which is the final state of the system (this is the initial state of experiment A, because now we're looking at the same experiment backwards!). The interactions between particles now do not create correlations between the particles, but in fact make them (at least seemingly) random, "canceling" the pre-existing correlations. The only difference between experiment C (which defies the Second Law of Thermodynamics) and experiment B (which obeys the Second Law of Thermodynamics) is that in the former the particles are uncorrelated at the end, while in the latter the particles are uncorrelated at the beginning. In fact, if all the microscopic physical processes are reversible (see discussion below), then the Second Law of Thermodynamics can be proven for any isolated system of particles with initial conditions in which the particles' states are uncorrelated. To do this, one must acknowledge the difference between the measured entropy of a system—which depends only on its macrostate (its volume, temperature etc.)—and its information entropy, which is the amount of information (number of computer bits) needed to describe the exact microstate of the system. The measured entropy is independent of correlations between particles in the system, because they do not affect its macrostate, but the information entropy does depend on them, because correlations lower the randomness of the system and thus lower the amount of information needed to describe it. Therefore, in the absence of such correlations the two entropies are identical, but otherwise the information entropy is smaller than the measured entropy, and the difference can be used as a measure of the amount of correlations. Now, by Liouville's theorem, time-reversal of all microscopic processes implies that the amount of information needed to describe the exact microstate of an isolated system (its information-theoretic joint entropy) is constant in time. This joint entropy is equal to the marginal entropy (the entropy assuming no correlations) plus the entropy of correlation (the mutual entropy, which is the negative of the mutual information). If we assume no correlations between the particles initially, then this joint entropy is just the marginal entropy, which is just the initial thermodynamic entropy of the system, divided by the Boltzmann constant. However, if these are indeed the initial conditions (and this is a crucial assumption), then such correlations form with time.
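The bookkeeping described above, in which the joint entropy equals the sum of the marginal entropies minus the mutual information, can be checked with a toy example. The sketch below is an added illustration using two correlated binary "particles"; it is not part of the original text. It shows that correlations leave the marginal entropies unchanged while lowering the joint entropy.

```python
import math

def entropy(p):
    """Shannon entropy (in bits) of a discrete distribution."""
    return -sum(x * math.log2(x) for x in p if x > 0)

def analyse(joint):
    """joint[(x, y)] -> probability, for two binary variables X and Y."""
    px = [sum(v for (x, _), v in joint.items() if x == i) for i in (0, 1)]
    py = [sum(v for (_, y), v in joint.items() if y == i) for i in (0, 1)]
    h_joint = entropy(list(joint.values()))
    h_marginal = entropy(px) + entropy(py)      # "marginal entropy"
    mutual_information = h_marginal - h_joint   # always >= 0
    return h_marginal, h_joint, mutual_information

# Uncorrelated particles: joint entropy equals the sum of the marginals.
uncorrelated = {(0, 0): 0.25, (0, 1): 0.25, (1, 0): 0.25, (1, 1): 0.25}
print(analyse(uncorrelated))   # (2.0, 2.0, 0.0)

# Correlated particles: same marginals, but the joint entropy is smaller;
# the difference is the mutual information carried by the correlations.
correlated = {(0, 0): 0.4, (0, 1): 0.1, (1, 0): 0.1, (1, 1): 0.4}
print(analyse(correlated))     # marginal 2.0, joint ~1.72, MI ~0.28
```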
As these correlations form, the mutual entropy decreases (equivalently, the mutual information increases), and for times that are not too long the correlations (mutual information) between particles only increase with time. Therefore, the thermodynamic entropy, which is proportional to the marginal entropy, must also increase with time (note that "not too long" in this context is relative to the time needed, in a classical version of the system, for it to pass through all its possible microstates, a time that can be roughly estimated as τ·e^S, where τ is the time between particle collisions and S is the system's entropy; in any practical case this time is huge compared to everything else). Note that the correlation between particles is not a fully objective quantity. One cannot measure the mutual entropy itself, only its change, assuming one can measure a microstate. Thermodynamics is restricted to the case where microstates cannot be distinguished, which means that only the marginal entropy, proportional to the thermodynamic entropy, can be measured, and, in a practical sense, always increases. == Arrow of time in various phenomena == Phenomena that occur differently according to their time direction can ultimately be linked to the second law of thermodynamics; for example, ice cubes melt in hot coffee rather than assembling themselves out of the coffee, and a block sliding on a rough surface slows down rather than speeds up. The idea that we can remember the past and not the future is called the "psychological arrow of time" and it has deep connections with Maxwell's demon and the physics of information; memory is linked to the second law of thermodynamics if one views it as correlation between brain cells (or computer bits) and the outer world: since such correlations increase with time, memory is linked to past events rather than to future events. == Current research == Current research focuses mainly on describing the thermodynamic arrow of time mathematically, either in classical or quantum systems, and on understanding its origin from the point of view of cosmological boundary conditions. === Dynamical systems === Some current research in dynamical systems indicates a possible "explanation" for the arrow of time. There are several ways to describe the time evolution of a dynamical system. In the classical framework, one considers an ordinary differential equation, where the parameter is explicitly time. By the very nature of differential equations, the solutions to such systems are inherently time-reversible. However, many of the interesting cases are either ergodic or mixing, and it is strongly suspected that mixing and ergodicity somehow underlie the fundamental mechanism of the arrow of time. While the strong suspicion may be but a fleeting sense of intuition, it cannot be denied that, when there are multiple parameters, the field of partial differential equations comes into play. In such systems the Feynman–Kac formula is in play, which assures, for specific cases, a one-to-one correspondence between a specific linear stochastic differential equation and a partial differential equation. Therefore, any partial differential equation system is tantamount to a random system of a single parameter, which is not reversible, due to the aforementioned correspondence. Mixing and ergodic systems do not have exact solutions, and thus proving time irreversibility in a mathematical sense is (as of 2006) impossible.
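As a minimal illustration of the time-reversibility of such differential equations, the sketch below (added here; it assumes a simple pendulum and a leapfrog integrator, neither of which is mentioned in the article) runs a trajectory forward, flips the momentum, and runs the same integrator again, recovering the initial state to round-off accuracy.

```python
from math import sin

def simulate(q, p, dt, n_steps):
    """Leapfrog (velocity Verlet) integration of a pendulum-like system
    dq/dt = p, dp/dt = -sin(q).  The scheme is time-reversible: running it
    again with the momentum negated retraces the trajectory."""
    for _ in range(n_steps):
        p -= 0.5 * dt * sin(q)   # half kick
        q += dt * p              # drift
        p -= 0.5 * dt * sin(q)   # half kick
    return q, p

q0, p0 = 1.0, 0.0
q1, p1 = simulate(q0, p0, dt=0.01, n_steps=10_000)   # run forward
q2, p2 = simulate(q1, -p1, dt=0.01, n_steps=10_000)  # flip momentum, run "back"
print(q2 - q0, -p2 - p0)  # both differences are at the round-off level
```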
The concept of "exact" solutions is an anthropic one. Does "exact" mean the same as closed form in terms of already know expressions, or does it mean simply a single finite sequence of strokes of a/the writing utensil/human finger? There are myriad of systems known to humanity that are abstract and have recursive definitions but no non-self-referential notation currently exists. As a result of this complexity, it is natural to look elsewhere for different examples and perspectives. Some progress can be made by studying discrete-time models or difference equations. Many discrete-time models, such as the iterated functions considered in popular fractal-drawing programs, are explicitly not time-reversible, as any given point "in the present" may have several different "pasts" associated with it: indeed, the set of all pasts is known as the Julia set. Since such systems have a built-in irreversibility, it is inappropriate to use them to explain why time is not reversible. There are other systems that are chaotic, and are also explicitly time-reversible: among these is the baker's map, which is also exactly solvable. An interesting avenue of study is to examine solutions to such systems not by iterating the dynamical system over time, but instead, to study the corresponding Frobenius-Perron operator or transfer operator for the system. For some of these systems, it can be explicitly, mathematically shown that the transfer operators are not trace-class. This means that these operators do not have a unique eigenvalue spectrum that is independent of the choice of basis. In the case of the baker's map, it can be shown that several unique and inequivalent diagonalizations or bases exist, each with a different set of eigenvalues. It is this phenomenon that can be offered as an "explanation" for the arrow of time. That is, although the iterated, discrete-time system is explicitly time-symmetric, the transfer operator is not. Furthermore, the transfer operator can be diagonalized in one of two inequivalent ways: one that describes the forward-time evolution of the system, and one that describes the backwards-time evolution. As of 2006, this type of time-symmetry breaking has been demonstrated for only a very small number of exactly-solvable, discrete-time systems. The transfer operator for more complex systems has not been consistently formulated, and its precise definition is mired in a variety of subtle difficulties. In particular, it has not been shown that it has a broken symmetry for the simplest exactly-solvable continuous-time ergodic systems, such as Hadamard's billiards, or the Anosov flow on the tangent space of PSL(2,R). === Quantum mechanics === Research on irreversibility in quantum mechanics takes several different directions. One avenue is the study of rigged Hilbert spaces, and in particular, how discrete and continuous eigenvalue spectra intermingle. For example, the rational numbers are completely intermingled with the real numbers, and yet have a unique, distinct set of properties. It is hoped that the study of Hilbert spaces with a similar inter-mingling will provide insight into the arrow of time. Another distinct approach is through the study of quantum chaos by which attempts are made to quantize systems as classically chaotic, ergodic or mixing. The results obtained are not dissimilar from those that come from the transfer operator method. 
For example, the quantization of the Boltzmann gas, that is, a gas of hard (elastic) point particles in a rectangular box, reveals that the eigenfunctions are space-filling fractals that occupy the entire box, and that the energy eigenvalues are very closely spaced and have an "almost continuous" spectrum (for a finite number of particles in a box, the spectrum must be, of necessity, discrete). If the initial conditions are such that all of the particles are confined to one side of the box, the system very quickly evolves into one where the particles fill the entire box. Even when all of the particles are initially on one side of the box, their wave functions do, in fact, permeate the entire box: they constructively interfere on one side, and destructively interfere on the other. Irreversibility is then argued by noting that it is "nearly impossible" for the wave functions to be "accidentally" arranged in some unlikely state: such arrangements are a set of zero measure. Because the eigenfunctions are fractals, much of the language and machinery of entropy and statistical mechanics can be imported to discuss and argue the quantum case. === Cosmology === Some processes that involve high energy particles and are governed by the weak force (such as K-meson decay) defy the symmetry between time directions. However, all known physical processes do preserve a more complicated symmetry (CPT symmetry), and are therefore unrelated to the second law of thermodynamics, or to the day-to-day experience of the arrow of time. A notable exception is the wave function collapse in quantum mechanics, an irreversible process which is considered either real (by the Copenhagen interpretation) or apparent only (by the many-worlds interpretation of quantum mechanics). In either case, the wave function collapse always follows quantum decoherence, a process which is understood to be a result of the second law of thermodynamics. The universe was in a uniform, high density state at its very early stages, shortly after the Big Bang. The hot gas in the early universe was near thermodynamic equilibrium (see Horizon problem); in systems where gravitation plays a major role, this is a state of low entropy, due to the negative heat capacity of such systems (in contrast to non-gravitational systems, where thermodynamic equilibrium is a state of maximum entropy). Moreover, because its volume was small compared to future epochs, the entropy was even lower, since the expansion of a gas increases its entropy. Thus the early universe can be considered to be highly ordered. Note that the uniformity of this early near-equilibrium state has been explained by the theory of cosmic inflation. According to this theory the universe (or, rather, its accessible part, a radius of 46 billion light years around Earth) evolved from a tiny, totally uniform volume (a portion of a much bigger universe), which expanded greatly; hence it was highly ordered. Fluctuations were then created by quantum processes related to its expansion, in a manner supposed to be such that these fluctuations went through quantum decoherence, so that they became uncorrelated for any practical purpose. This is supposed to give the desired initial conditions needed for the Second Law of Thermodynamics; different decoherent states ultimately evolved to different specific arrangements of galaxies and stars.
The universe is apparently an open universe, so that its expansion will never terminate, but it is an interesting thought experiment to imagine what would have happened had the universe been closed. In such a case, its expansion would stop at a certain time in the distant future, and then begin to shrink. Moreover, a closed universe is finite. It is unclear what would happen to the second law of thermodynamics in such a case. One could imagine at least two different scenarios, though in fact only the first one is plausible, as the other requires a highly smooth cosmic evolution, contrary to what is observed: The broad consensus among the scientific community today is that smooth initial conditions lead to a highly non-smooth final state, and that this is in fact the source of the thermodynamic arrow of time. Gravitational systems tend to gravitationally collapse to compact bodies such as black holes (a phenomenon unrelated to wavefunction collapse), so the universe would end in a Big Crunch that is very different from a Big Bang run in reverse, since the distribution of the matter would be highly non-smooth; as the universe shrinks, such compact bodies merge into larger and larger black holes. It may even be that it is impossible for the universe to have both a smooth beginning and a smooth ending. Note that in this scenario the energy density of the universe in the final stages of its shrinkage is much larger than in the corresponding initial stages of its expansion (there is no destructive interference, unlike in the second scenario described below), and the matter consists mostly of black holes rather than free particles. A highly controversial view is that instead, the arrow of time will reverse. The quantum fluctuations—which in the meantime have evolved into galaxies and stars—will be in superposition in such a way that the whole process described above is reversed—i.e., the fluctuations are erased by destructive interference and total uniformity is achieved once again. Thus the universe ends in a Big Crunch, which is similar to its beginning in the Big Bang. Because the two are totally symmetric, and the final state is very highly ordered, entropy must decrease close to the end of the universe, so that the second law of thermodynamics reverses when the universe shrinks. This can be understood as follows: in the very early universe, interactions between fluctuations created entanglement (quantum correlations) between particles spread all over the universe; during the expansion, these particles became so distant that these correlations became negligible (see quantum decoherence). At the time the expansion halts and the universe starts to shrink, such correlated particles arrive once again at contact (after circling around the universe), and the entropy starts to decrease—because highly correlated initial conditions may lead to a decrease in entropy. Another way of putting it is that as distant particles arrive, more and more order is revealed, because these particles are highly correlated with particles that arrived earlier. In this scenario, the cosmological arrow of time is the reason for both the thermodynamic arrow of time and the quantum arrow of time. Both will slowly disappear as the universe comes to a halt, and will later be reversed. In the first and more widely accepted scenario, it is the difference between the initial state and the final state of the universe that is responsible for the thermodynamic arrow of time. This is independent of the cosmological arrow of time.
== See also == Arrow of time Cosmic inflation Entropy H-theorem History of entropy Loschmidt's paradox == References == == Further reading == Kardar, Mehran (2007). Statistical Physics of Particles. Cambridge University Press. ISBN 978-0-521-87342-0. OCLC 860391091. Halliwell, J.J.; et al. (1994). Halliwell, Jonathan J. (ed.). Physical origins of time asymmetry (1st paperback ed.). Cambridge: Cambridge Univ. Press. ISBN 978-0-521-56837-1. (technical). Mackey, Michael C. (1992). Time's arrow: The origins of thermodynamic behavior. Springer study edition (1st ed.). New York Berlin: Springer. ISBN 978-3-540-94093-7. OCLC 28585247. ... it is shown that for there to be a global evolution of the entropy to its maximal value ... it is necessary and sufficient that the system have a property known as exactness. ... these criteria suggest that all currently formulated physical laws may not be at the foundation of the thermodynamic behavior we observe every day of our lives. (page xi) Dover has reprinted the monograph in 2003 (ISBN 0486432432). For a short paper listing "the essential points of that argument, correcting presentation points that were confusing ... and emphasizing conclusions more forcefully than previously" see Mackey, Michael C. (2001). "Microscopic Dynamics and the Second Law of Thermodynamics" (PDF). In Mugnai, D.; Ranfagni, A.; Schulman, L. S.; Istituto italiano per gli studi filosofici; Istituto di ricerca sulle onde elettromagnetiche (Italy) (eds.). Time's arrows, quantum measurement, and superluminal behavior: International Conference TAQMSB: Palazzo Serra di Cassano, Via Monte di Dio 14, 80132 Napoli, October 3-5, 2000. Monografie scientifiche. Roma: Consiglio nazionale delle ricerche. pp. 49–65. ISBN 978-88-8080-024-8. Archived from the original (PDF) on 2011-07-25. Carroll, Sean M. (2010). From eternity to here: the quest for the ultimate theory of time (1st ed.). New York, NY: Dutton. ISBN 978-0-525-95133-9. == External links == Thermodynamic Asymmetry in Time at the online Stanford Encyclopedia of Philosophy
Wikipedia/Entropy_(arrow_of_time)
Building science is the science and technology-driven collection of knowledge used to provide better indoor environmental quality (IEQ), energy-efficient built environments, and occupant comfort and satisfaction. Building physics, architectural science, and applied physics are terms used for the knowledge domain that overlaps with building science. In building science, the methods used in the natural and hard sciences are widely applied; these may include controlled and quasi-experiments, randomized controlled trials, physical measurements, remote sensing, and simulations. On the other hand, methods from the social and soft sciences, such as case studies, interviews and focus groups, observational methods, surveys, and experience sampling, are also widely used in building science to understand occupant satisfaction, comfort, and experiences by acquiring qualitative data. One of the recent trends in building science is the combination of these two kinds of methods. For instance, it is widely known that occupants' thermal sensation and comfort may vary depending on their sex, age, emotion, experiences, etc., even in the same indoor environment. Despite the advancement of data extraction and collection technology in building science, objective measurements alone can hardly represent occupants' states of mind, such as comfort and preference. Therefore, researchers try to measure the physical context and understand human responses together in order to untangle these complex interrelationships. Building science traditionally includes the study of the indoor thermal environment, indoor acoustic environment, indoor light environment, indoor air quality, and building resource use, including energy and building material use. These areas are studied in terms of physical principles, their relationship to building occupant health, comfort, and productivity, and how they can be controlled by the building envelope and electrical and mechanical systems. The National Institute of Building Sciences (NIBS) additionally includes the areas of building information modeling, building commissioning, fire protection engineering, seismic design and resilient design within its scope. One of the applications of building science is to provide predictive capability to optimize the building performance and sustainability of new and existing buildings, understand or prevent building failures, and guide the design of new techniques and technologies. == Applications == During the architectural design process, building science knowledge is used to inform design decisions to optimize building performance. Design decisions can be made based on knowledge of building science principles and established guidelines, such as the NIBS Whole Building Design Guide (WBDG) and the collection of ASHRAE Standards related to building science. Computational tools can be used during design to simulate building performance based on input information about the designed building envelope, lighting system, and mechanical system. Models can be used to predict operational energy use, solar heat and radiation distribution, air flow, and other physical phenomena within the building. These tools are valuable for evaluating a design and ensuring it will perform within an acceptable range before construction begins. Many of the available computational tools analyze building performance goals and perform design optimization. The accuracy of the models is influenced by the modeler's knowledge of building science principles and by the amount of validation performed for the specific program.
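As a minimal sketch of the kind of calculation such tools build on, the following example estimates steady-state conductive heat loss through an envelope as Q = U × A × ΔT summed over components. It is illustrative only: the areas and U-values are assumed placeholder numbers, and real simulation engines model transient, solar, airflow, and internal-gain effects in far more detail.

```python
# Hypothetical envelope for illustration: component -> (area in m^2, U-value in W/(m^2 K))
envelope = {
    "walls":   (120.0, 0.30),
    "roof":    ( 80.0, 0.20),
    "windows": ( 25.0, 1.40),
    "floor":   ( 80.0, 0.25),
}

def steady_state_heat_loss(components, t_inside_c, t_outside_c):
    """Total conductive heat loss, Q = sum(U * A * dT), in watts."""
    d_t = t_inside_c - t_outside_c
    return sum(area * u_value * d_t for area, u_value in components.values())

# Design-condition check: 20 degC inside, -5 degC outside.
print(f"{steady_state_heat_loss(envelope, 20.0, -5.0):.0f} W")
```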
When existing buildings are being evaluated, measurements and computational tools can be used to evaluate performance based on measured existing conditions. An array of in-field testing equipment can be used to measure temperature, moisture, sound levels, air pollutants, or other criteria. Standardized procedures for taking these measurements are provided in the Performance Measurement Protocols for Commercial Buildings. For example, thermal infrared (IR) imaging devices can be used to measure temperatures of building components while the building is in use. These measurements can be used to evaluate how the mechanical system is operating and whether there are areas of anomalous heat gain or heat loss through the building envelope. Measurements of conditions in existing buildings are used as part of post-occupancy evaluations. Post-occupancy evaluations may also include surveys of building occupants to gather data on occupant satisfaction and well-being and to gather qualitative data on building performance that may not have been captured by measurement devices. Many aspects of building science are the responsibility of the architect (in Canada, many architectural firms employ an architectural technologist for this purpose), often in collaboration with the engineering disciplines that have evolved to handle 'non-building envelope' building science concerns: civil engineering, structural engineering, earthquake engineering, geotechnical engineering, mechanical engineering, electrical engineering, acoustic engineering, and fire code engineering. Even the interior designer will inevitably generate a few building science issues. == Topics == === Daylighting and visual comfort === Daylighting is the controlled admission of natural light, direct sunlight, and diffused skylight into a building to reduce electric lighting and save energy. A daylighting system does not consist only of daylight apertures, such as skylights and windows, but is coupled with a daylight-responsive lighting control system. Daylight positively impacts the psychological and physiological health of a human being by stimulating the human circadian rhythm, which can lower depression, improve sleep quality, reduce lethargy, and prevent illness. However, studies do not always find a positive correlation between maximizing daylight availability and human comfort and health. When buildings have large windows, the quantity and quality of the visual environment need to be controlled. A lack of attention to visual comfort issues often makes the best daylighting intentions ineffective, because excessive brightness and high-contrast luminance ratios within the space result in glare. The Illuminating Engineering Society (IES) Lighting Handbook defines glare as the sensation produced by luminance levels in the visual field sufficiently greater than those to which our eyes can adapt, causing discomfort or loss in visual performance or visibility. Glare interferes with visual perception and is caused by an uncomfortably bright light source or reflection. If occupants experience visual discomfort from excessive sunlight penetration through the windows of a building, they may wish to close the shading devices, which decreases daylight availability and increases electric lighting energy consumption. Daylighting and visual comfort are therefore extensively studied topics in building science, with the aim of harvesting daylight successfully while saving energy.
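A very simple screening check based on the luminance-ratio idea discussed above might look like the sketch below. It is illustrative only: the 1:10 limit is a commonly cited rule of thumb assumed here as a threshold, the luminance values are made up, and formal glare metrics are considerably more involved.

```python
def luminance_ratio_check(task_luminance, source_luminance, max_ratio=10.0):
    """Flag a potential glare condition when a bright surface (for example a
    window) exceeds an assumed maximum luminance ratio relative to the task.
    Luminances are in cd/m^2; the ratio limit is only an illustrative value."""
    ratio = source_luminance / task_luminance
    return ratio, ratio > max_ratio

# A paper task at ~100 cd/m^2 seen next to a bright window at ~2500 cd/m^2.
ratio, glare_risk = luminance_ratio_check(100.0, 2500.0)
print(f"ratio {ratio:.0f}:1, potential glare: {glare_risk}")
```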
It is critical that architects, engineers, and building owners use daylight and glare metrics to evaluate lighting conditions in daylit spaces for occupant health and comfort. === Indoor environmental quality (IEQ) === Indoor environmental quality (IEQ) refers to the quality of a building's environment in relation to the health and wellbeing of those who occupy space within it. IEQ is determined by many factors, including lighting, air quality, and temperature. Workers are often concerned that they have symptoms or health conditions from exposures to contaminants in the buildings where they work. One reason for this concern is that their symptoms often get better when they are not in the building. While research has shown that some respiratory symptoms and illnesses can be associated with damp buildings, it is still unclear what measurements of indoor contaminants show that workers are at risk for disease. In most instances where a worker and his or her physician suspect that the building environment is causing a specific health condition, the information available from medical tests and tests of the environment is not sufficient to establish which contaminants are responsible. Despite uncertainty about what to measure and how to interpret what is measured, research shows that building-related symptoms are associated with building characteristics, including dampness, cleanliness, and ventilation characteristics. Indoor environments are highly complex, and building occupants may be exposed to a variety of contaminants (in the form of gases and particles) from office machines, cleaning products, construction activities, carpets and furnishings, perfumes, cigarette smoke, water-damaged building materials, microbial growth (fungal, mold, and bacterial), insects, and outdoor pollutants. Other factors, such as indoor temperatures, relative humidity, and ventilation levels, can also affect how individuals respond to the indoor environment. Understanding the sources of indoor environmental contaminants and controlling them can often help prevent or resolve building-related worker symptoms. Practical guidance for improving and maintaining the indoor environment is available. The building indoor environment covers the environmental aspects of the design, analysis, and operation of energy-efficient, healthy, and comfortable buildings. Fields of specialization include architecture, HVAC design, thermal comfort, indoor air quality (IAQ), lighting, acoustics, and control systems. === HVAC systems === The mechanical systems, usually a sub-set of the broader building services, used to control the temperature, humidity, pressure and other select aspects of the indoor environment are often described as the heating, ventilating, and air-conditioning (HVAC) systems. These systems have grown in complexity and importance (often consuming around 20% of the total budget in commercial buildings) as occupants demand tighter control of conditions, buildings become larger, and enclosures and passive measures become less important as a means of providing comfort. Building science includes the analysis of HVAC systems both for physical impacts (heat distribution, air velocities, relative humidities, etc.) and for their effect on the comfort of the building's occupants. Because occupants' perceived comfort depends on factors such as the current weather and the type of climate the building is located in, the need for HVAC systems to provide comfortable conditions will vary across projects.
In addition, various HVAC control strategies have been implemented and studied to better contribute to occupants' comfort. In the U.S., ASHRAE has published standards to help building managers and engineers design and operate the systems. In the UK, a similar guideline was published by CIBSE. Apart from industry practice, advanced control strategies are widely discussed in research as well. For example, closed-loop feedback control can compare the air temperature set-point with sensor measurements; demand response control can help prevent peak loads on the electric power grid by reducing or shifting usage in response to time-varying rates. With improvements in computational performance and machine learning algorithms, model-based prediction of cooling and heating loads, combined with optimal control, can further improve occupants' comfort by pre-operating the HVAC system. Implementation of advanced control strategies is recognized as falling under the scope of building automation systems (BAS) with integrated smart communication technologies, such as the Internet of Things (IoT). However, one of the major obstacles identified by practitioners is the scalability of control logic and building data mapping, due to the unique nature of building designs. It has been estimated that, due to inadequate interoperability, the building industry loses $15.8 billion annually in the U.S. Recent research projects such as Haystack and Brick intend to address the problem by using metadata schemas, which could provide more accurate and convenient ways of capturing data points and connection hierarchies in building mechanical systems. With the support of semantic models, automated configuration can further benefit HVAC control commissioning and software upgrades. === Enclosure (envelope) systems === The building enclosure is the part of the building that separates the indoors from the outdoors. This includes the walls, roof, windows, slabs on grade, and the joints between all of these. The comfort, productivity, and even health of building occupants in areas near the building enclosure (i.e., perimeter zones) are affected by outdoor influences such as noise, temperature, and solar radiation, and by their ability to control these influences. As part of its function, the enclosure must control (not necessarily block or stop) the flow of moisture, heat, air, vapor, solar radiation, insects, or noise, while resisting the loads imposed on the structure (wind, seismic). Daylight transmittance through glazed components of the facade can be analyzed to evaluate the reduced need for electric lighting. === Building sustainability === Building sustainability, often referred to as sustainable design, integrates strategies to lower building environmental impacts, including lowering both operational carbon, which is the emissions from energy use during a building's life, and embodied carbon, which accounts for the emissions from material production and construction. Building sustainability practices aim to design with consideration for future resources and environmental realities. Buildings are responsible for approximately 40% of global energy consumption and 13% of carbon emissions, primarily related to the operation of building HVAC systems. Reducing operational carbon is critical to mitigating climate change. To address these emissions, renewable energy sources, such as solar and wind energy, are being adopted by the building industry to support electricity generation.
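A minimal sketch of how the operational and embodied carbon defined above are tallied is shown below. Every quantity and emission factor in it is an assumed placeholder, not a value drawn from this article or from any standard database, and a full life cycle assessment would cover many more stages and materials.

```python
# Illustrative only: all quantities and emission factors below are assumed.

# Operational carbon: annual energy use multiplied by a grid emission factor.
annual_electricity_kwh = 150_000
grid_factor_kgco2_per_kwh = 0.4
operational_kgco2_per_year = annual_electricity_kwh * grid_factor_kgco2_per_kwh

# Embodied carbon: material quantities multiplied by embodied-carbon factors
# (cradle-to-gate), as a life-cycle assessment would tabulate them.
materials = {                      # material: (mass in kg, kgCO2e per kg)
    "concrete": (500_000, 0.10),
    "steel":    ( 40_000, 1.50),
    "glass":    (  5_000, 1.40),
}
embodied_kgco2 = sum(mass * factor for mass, factor in materials.values())

print(f"operational: {operational_kgco2_per_year:,.0f} kgCO2e/year")
print(f"embodied (one-off): {embodied_kgco2:,.0f} kgCO2e")
```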
However, with substantial renewable generation the electricity demand profile shows an imbalance between supply and demand, known as the 'duck curve', which can make it harder to maintain grid stability. Therefore, other strategies, such as thermal energy storage systems, are being developed to achieve higher levels of sustainability by reducing peak grid power. A push towards zero-energy buildings, also known as net-zero energy buildings, has been present in the building science field. The qualifications for Net Zero Energy Building Certification can be found on the Living Building Challenge website. ==== Embodied Carbon and Decarbonization ==== Embodied carbon refers to the total carbon emissions associated with the entire life cycle of a building material (i.e. material extraction, manufacturing and production, transportation, construction, and end of life). As building performance research has decreased operational carbon, there has been an increase in embodied carbon within the building sector, partly due to the higher material demands of energy-efficient designs. This shift has underscored the need to address embodied carbon alongside operational emissions to achieve holistic decarbonization. Building decarbonization is most impactful during early-stage design, where materials, systems, and structural choices can be optimized to reduce embodied carbon and improve operational efficiency before moving forward in development stages. Structural materials, such as steel and concrete, contribute significantly to a building's embodied carbon footprint. Strategies to mitigate these impacts include material substitution, incorporating recycled and reused materials, and adopting low-carbon manufacturing processes. Challenges in addressing embodied carbon include insufficient data, lack of standardization, cost considerations, and regulatory barriers. Reliable databases are often limited, region-specific, and inconsistent, making them difficult to apply universally. Existing standards are often voluntary and vary in scope, making comparisons and benchmarking difficult. Life cycle assessment standards for evaluating building embodied carbon include ISO 14040, ISO 14044, EN 15978, PAS 2050, and ReCiPe. These frameworks provide structured approaches to evaluate and quantify life cycle environmental impacts, such as embodied carbon. Addressing embodied carbon is a growing aspect of building science, becoming critical for advancing building sustainability efforts and reducing the environmental impact of the built environment. === Post-Occupancy Evaluation (POE) === POE is a survey-based method of measuring building performance after the built environment has been occupied. Occupant responses are collected through structured or open-ended inquiries. Statistical methods and data visualization are often used to suggest which aspects (features) of the building are supportive of, or problematic for, the occupants. The results may become design knowledge that helps architects design new buildings, or provide an evidence base for improving the current environment. == Certification == Although there are no direct or integrated professional architecture or engineering certifications for building science, there are independent professional credentials associated with the disciplines. Building science is typically a specialization within the broad areas of architecture or engineering practice. However, there are professional organizations offering individual professional credentials in specialized areas.
Some of the most prominent green building rating systems are: BREEAM (Building Research Establishment Environmental Assessment Method), which is the world's longest established sustainable building assessment system, developed by the Building Research Establishment; LEED (Leadership in Energy and Environmental Design), developed by the U.S. Green Building Council; Green Star (Australia), which is the main green building rating system in Australia, developed by the Green Building Council of Australia; WELL which is delivered by the International WELL Building Institute and administered by the Green Business Certification Inc.; CASBEE (Comprehensive Assessment System for Built Environment Efficiency), which is the main green building rating system in Japan; Living Building Challenge, developed by the International Living Future Institute; Passivhaus (Passive House), developed by the Passive House Institute, which is an internationally recognized, performance-based energy standard in construction. EcoChef, a sustainability certification designed specifically for commercial kitchens, evaluating energy efficiency, waste reduction, and operational performance in the foodservice industry, developed by Forward Dining Solutions. There are other building sustainability accreditation and certification institutions as well. Also in the US, contractors certified by the Building Performance Institute, an independent organization, advertise that they operate businesses as Building Scientists. This is questionable due to their lack of scientific background and credentials. On the other hand, more formal building science experience is true in Canada for most of the Certified Energy Advisors. Many of these trades and technologists require and receive some training in very specific areas of building science (e.g., air tightness, or thermal insulation). == List of principal building science journals == Building and Environment: This international journal publishes original research papers and review articles related to building science, urban physics, and human interaction with the indoor and outdoor built environment. The journal's most cited articles cover topics such as occupant behavior in buildings, green building certification systems, and tunnel ventilation systems. Publisher: Elsevier. Impact Factor (2019): 4.971 Energy and Buildings: This international journal publishes articles with explicit links to energy use in buildings. The aim is to present new research results, and new proven practice aimed at reducing the energy needs of a building and improving indoor air quality. The journal's most cited articles cover topics such as prediction models for building energy consumption, optimization models of HVAC systems, and life cycle assessment. Publisher: Elsevier. Impact Factor (2019): 4.867 Indoor Air: This international journal publishes papers reflecting the broad categories of interest in the field of indoor environment of non-industrial buildings, including health effects, thermal comfort, monitoring and modelling, source characterization, and ventilation (architecture) and other environmental control techniques. The journal's most cited articles cover topics such as the impact of indoor air pollutants and thermal conditions on occupant performance, the movement of droplets in indoor environments, and the effects of ventilation rates on occupant health. Publisher: John Wiley & Sons. 
Impact Factor (2019): 4.739 Architectural Science Review: Founded at the University of Sydney, Australia in 1958, this journal aims to promote the development, accumulation, and application of scientific knowledge on a wide range of environmental topics. According to the journal description, the topics may include but not limited to building science and technology, environmental sustainability, structures and materials, audio and acoustics, illumination, thermal systems, building physics, building services, building climatology, building economics, ergonomics, history and theory of architectural science, the social sciences of architecture. Publisher: Taylor & Francis Group Building Research and Information: This journal focuses on buildings, building stocks and their supporting systems. Unique to BRI is a holistic and transdisciplinary approach to buildings, which acknowledges the complexity of the built environment and other systems over their life. Published articles utilize conceptual and evidence-based approaches which reflect the complexity and linkages between culture, environment, economy, society, organizations, quality of life, health, well-being, design and engineering of the built environment. The journal's most cited articles cover topics such as the gap between performance and actual energy consumption, barriers and drivers for sustainable building, and the politics of resilient cities. Publisher: Taylor & Francis Group. Impact Factor (2019): 3.887 Journal of Building Performance Simulation: This international, peer-reviewed journal publishes high quality research and state of the art “integrated” papers to promote scientifically thorough advancement of all the areas of non-structural performance of a building and particularly in heat transfer, air, moisture transfer. The journal's most cited articles cover topics such as co-simulation of building energy and control systems, the Buildings library, and the impact of occupant's behavior on building energy demand. Publisher: Taylor & Francis Group. Impact Factor (2019): 3.458 LEUKOS: This journal publishes engineering developments, scientific discoveries, and experimental results related to light applications. Topics of interest include optical radiation, light generation, light control, light measurement, lighting design, daylighting, energy management, energy economics, and sustainability. The journal's most cited articles cover topics such as lighting design metrics, psychological processes influencing lighting quality, and the effects of lighting quality and energy-efficiency on task performance, mood, health, satisfaction, and comfort. Publisher: Taylor & Francis Group. Impact Factor (2019): 2.667 Building Simulation: This international journal publishes original, high quality, peer-reviewed research papers and review articles dealing with modeling and simulation of buildings including their systems. The goal is to promote the field of building science and technology to such a level that modeling will eventually be used in every aspect of building construction as a routine instead of an exception. Of particular interest are papers that reflect recent developments and applications of modeling tools and their impact on advances of building science and technology. Publisher: Springer Nature. Impact Factor (2019): 2.472 Applied Acoustics: This journal covers research findings related to practical applications of acoustics in engineering and science. 
The journal's most cited articles related to building science cover topics such as the prediction of the sound absorption of natural materials, the implementation of low-cost urban acoustic monitoring devices, and sound absorption of natural kenaf fibers. Publisher: Elsevier. Impact Factor (2019): 2.440 Lighting Research & Technology: This journal covers all aspects of light and lighting, including the human response to light, light generation, light control, light measurement, lighting design equipment, daylighting, energy efficiency of lighting design, and sustainability. The journal's most cited articles cover topics such as light as a circadian stimulus for architectural lighting, human perceptions of color rendition, and the influence of color gamut size and shape on color preference. Publisher: SAGE Publishing. Impact Factor (2019): 2.226 == See also == Architectural engineering Architectural Institute of Japan Architecture ASHRAE Building enclosure commissioning Central Building Research Institute, India Galvanic corrosion Indoor air quality Kansas Building Science Institute National Institute of Building Sciences Passive House Seismic analysis Sustainable refurbishment Vapor barrier == References ==
Wikipedia/Building_science
Infrared thermography (IRT), also known as thermal video or thermal imaging, is a process in which a thermal camera captures and creates an image of an object using the infrared radiation emitted from the object; it is an example of infrared imaging science. Thermographic cameras usually detect radiation in the long-infrared range of the electromagnetic spectrum (roughly 9,000–14,000 nanometers or 9–14 μm) and produce images of that radiation, called thermograms. Since infrared radiation is emitted by all objects with a temperature above absolute zero, according to the black body radiation law, thermography makes it possible to see one's environment with or without visible illumination. The amount of radiation emitted by an object increases with temperature; therefore, thermography allows one to see variations in temperature. When viewed through a thermal imaging camera, warm objects stand out well against cooler backgrounds; humans and other warm-blooded animals become easily visible against the environment, day or night. As a result, thermography is particularly useful to the military and other users of surveillance cameras. Some physiological changes in human beings and other warm-blooded animals can also be monitored with thermal imaging during clinical diagnostics. Thermography is used in allergy detection and veterinary medicine. Some alternative medicine practitioners promote its use for breast screening, despite the FDA warning that "those who opt for this method instead of mammography may miss the chance to detect cancer at its earliest stage". Government and airport personnel used thermography to detect suspected swine flu cases during the 2009 pandemic. Thermography has a long history, although its use has increased dramatically with the commercial and industrial applications of the past fifty years. Firefighters use thermography to see through smoke, to find persons, and to localize the base of a fire. Maintenance technicians use thermography to locate overheating joints and sections of power lines, which are a sign of impending failure. Building construction technicians can see thermal signatures that indicate heat leaks in faulty thermal insulation and can use the results to improve the efficiency of heating and air-conditioning units. The appearance and operation of a modern thermographic camera are often similar to those of a camcorder. Often the live thermogram reveals temperature variations so clearly that a photograph is not necessary for analysis; a recording module is therefore not always built in. Specialized thermal imaging cameras use focal plane arrays (FPAs) that respond to longer wavelengths (mid- and long-wavelength infrared). The most common types are InSb, InGaAs, HgCdTe and QWIP FPA. The newest technologies use low-cost, uncooled microbolometers as FPA sensors. Their resolution is considerably lower than that of optical cameras, mostly 160×120 or 320×240 pixels, up to 1280×1024 for the most expensive models. Thermal imaging cameras are much more expensive than their visible-spectrum counterparts, and higher-end models are often export-restricted due to the military uses for this technology. Older bolometers or more sensitive models such as InSb require cryogenic cooling, usually by a miniature Stirling cycle refrigerator or liquid nitrogen. == Thermal energy == Thermal images, or thermograms, are visual displays of the total infrared energy emitted, transmitted, and reflected by an object.
Because there are multiple sources of the infrared energy, it is sometimes difficult to get an accurate temperature of an object using this method. A thermal imaging camera uses processing algorithms to reconstruct a temperature image. Note that the image shows an approximation of the temperature of an object, as the camera integrates multiple sources of data in the areas surrounding the object to estimate its temperature. This phenomenon may become clearer upon consideration of the formula: Incident Radiant Power = Emitted Radiant Power + Transmitted Radiant Power + Reflected Radiant Power, where incident radiant power is the radiant power profile when viewed through a thermal imaging camera. Emitted radiant power is generally what is intended to be measured; transmitted radiant power is the radiant power that passes through the subject from a remote thermal source; and reflected radiant power is the amount of radiant power that reflects off the surface of the object from a remote thermal source. This phenomenon occurs everywhere, all the time. It is a process known as radiant heat exchange, since radiant power × time equals radiant energy. However, in the case of infrared thermography, the above equation is used to describe the radiant power within the spectral wavelength passband of the thermal imaging camera in use. The radiant heat exchange requirements described in the equation apply equally at every wavelength in the electromagnetic spectrum. If the object is radiating at a higher temperature than its surroundings, then net power transfer takes place from the warm object to the cold surroundings, following the principle stated in the second law of thermodynamics. So if there is a cool area in the thermogram, that object will be absorbing radiation emitted by surrounding warm objects. The ability of an object to emit radiation is called emissivity; its ability to absorb radiation is called absorptivity. Under outdoor conditions, convective cooling from wind may also need to be considered when trying to get an accurate temperature reading. == Emissivity == Emissivity (or the emissivity coefficient) represents a material's ability to emit thermal radiation and is an optical property of matter. A material's emissivity can theoretically range from 0 (completely non-emitting) to 1 (completely emitting). An example of a substance with low emissivity is silver, with an emissivity coefficient of 0.02. An example of a substance with high emissivity is asphalt, with an emissivity coefficient of 0.98. A black body is a theoretical object with an emissivity of 1 that radiates thermal radiation characteristic of its contact temperature. That is, if the contact temperature of a thermally uniform black body radiator were 50 °C (122 °F), it would emit the characteristic black-body radiation of 50 °C (122 °F). An ordinary object emits less infrared radiation than a theoretical black body; in other words, the ratio of the actual emission to the maximum theoretical emission is the object's emissivity. Each material has a different emissivity, which may vary with temperature and infrared wavelength. For example, clean metal surfaces have an emissivity that decreases at longer wavelengths; many dielectric materials, such as quartz (SiO2), sapphire (Al2O3), and calcium fluoride (CaF2), have an emissivity that increases at longer wavelengths; and simple oxides, such as iron oxide (Fe2O3), display relatively flat emissivity across the infrared spectrum.
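A simplified way to see how emissivity and reflected radiation enter a temperature reading is the wide-band (Stefan–Boltzmann) model sketched below. This is an added illustration under strong assumptions (a grey, opaque surface and no atmospheric losses); real imagers work within a spectral passband and rely on calibration curves rather than this closed-form inversion.

```python
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def apparent_exitance(t_obj_k, t_reflected_k, emissivity):
    """Radiation reaching an ideal wide-band detector from an opaque surface:
    the emitted component plus the reflected ambient component."""
    return (emissivity * SIGMA * t_obj_k**4
            + (1.0 - emissivity) * SIGMA * t_reflected_k**4)

def corrected_temperature(measured_exitance, t_reflected_k, emissivity):
    """Invert the balance above to recover the object temperature."""
    emitted = measured_exitance - (1.0 - emissivity) * SIGMA * t_reflected_k**4
    return (emitted / (emissivity * SIGMA)) ** 0.25

# A 330 K surface with emissivity 0.6 in 293 K surroundings:
m = apparent_exitance(330.0, 293.0, 0.6)
print(corrected_temperature(m, 293.0, emissivity=0.6))  # recovers 330 K
print(corrected_temperature(m, 293.0, emissivity=1.0))  # too low: reads cold
```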
== Measurement == A thermal imaging camera requires a series of mathematical algorithms to build a visible image, since the camera is only able to see electromagnetic radiation invisible to the human eye. The output image can be in JPG or any other image format. The spectrum and amount of thermal radiation depend strongly on an object's surface temperature. This enables thermal imaging of an object's temperature. However, other factors also influence the received radiation, which limits the accuracy of this technique: for example, the emissivity of the object. For a non-contact temperature measurement, the emissivity setting needs to be set properly. An object of low emissivity could otherwise have its temperature underestimated, since the detector can only infer temperature from the radiation it receives. For a quick estimate, a thermographer may refer to an emissivity table for a given type of object and enter that value into the imager. The imager then calculates the object's contact temperature based on the entered emissivity and the infrared radiation it detects. For a more accurate measurement, a thermographer may apply a standard material of known, high emissivity to the surface of the object. The standard material might be an industrial emissivity spray produced specifically for the purpose, or something as simple as standard black insulation tape, with an emissivity of about 0.97. The object's temperature can then be measured through the standard material, using its known emissivity. If desired, the object's actual emissivity (on a part of the object not covered by the standard material) can be determined by adjusting the imager's emissivity setting until it reads the known temperature. There are situations, however, when such an emissivity test is not possible due to dangerous or inaccessible conditions; in such cases the thermographer must rely on tables. Other variables can affect the measurement, including the absorption and ambient temperature of the transmitting medium (usually air). Also, surrounding infrared radiation can be reflected off the object. All these factors affect the calculated temperature of the object being viewed. === Color scale === Images from infrared cameras tend to be monochrome because the cameras generally use an image sensor that does not distinguish different wavelengths of infrared radiation. Color image sensors require a complex construction to differentiate wavelengths, and color has less meaning outside of the normal visible spectrum because the differing wavelengths do not map uniformly into the color vision system used by humans. Sometimes these monochromatic images are displayed in pseudo-color, where changes in color are used rather than changes in intensity to display changes in the signal. This technique, called density slicing, is useful because although humans have much greater dynamic range in intensity detection than color overall, the ability to see fine intensity differences in bright areas is fairly limited. In temperature measurement, the brightest (warmest) parts of the image are customarily colored white, intermediate temperatures reds and yellows, and the dimmest (coolest) parts black. A scale should be shown next to a false color image to relate colors to temperatures. == Cameras == A thermographic camera (also called an infrared camera, thermal imaging camera, thermal camera or thermal imager) is a device that creates an image using infrared (IR) radiation, similar to a normal camera that forms an image using visible light.
Instead of the 400–700 nanometre (nm) range of the visible light camera, infrared cameras are sensitive to wavelengths from about 1,000 nm (1 micrometre or μm) to about 14,000 nm (14 μm). The practice of capturing and analyzing the data they provide is called thermography. Thermal cameras convert the energy in the far infrared wavelengths into a visible light display. All objects above absolute zero emit thermal infrared energy, so thermal cameras can passively see all objects, regardless of ambient light. However, most thermal cameras are sensitive to objects warmer than −50 °C (−58 °F). Some specification parameters of an infrared camera system are number of pixels, frame rate, responsivity, noise-equivalent power, noise-equivalent temperature difference (NETD), spectral band, distance-to-spot ratio (D:S), minimum focus distance, sensor lifetime, minimum resolvable temperature difference (MRTD), field of view, dynamic range, input power, and mass and volume. Their resolution is considerably lower than that of optical cameras, often around 160×120 or 320×240 pixels, although more expensive ones can achieve a resolution of 1280×1024 pixels. Thermographic cameras are much more expensive than their visible-spectrum counterparts, though low-performance add-on thermal cameras for smartphones became available for hundreds of US dollars in 2014. === Types === Thermographic cameras can be broadly divided into two types: those with cooled infrared image detectors and those with uncooled detectors. ==== Cooled infrared detectors ==== Cooled detectors are typically contained in a vacuum-sealed case or Dewar and cryogenically cooled. Cooling is necessary for the operation of the semiconductor materials used. Typical operating temperatures range from 4 K (−269 °C) to just below room temperature, depending on the detector technology. Most modern cooled detectors operate in the 60 K to 100 K range (−213 to −173 °C), depending on type and performance level. Without cooling, these sensors (which detect and convert light in much the same way as common digital cameras, but are made of different materials) would be 'blinded' or flooded by their own radiation. The drawbacks of cooled infrared cameras are that they are expensive both to produce and to run. Cooling is both energy-intensive and time-consuming. The camera may need several minutes to cool down before it can begin working. The most commonly used cooling systems are Peltier coolers, which, although inefficient and limited in cooling capacity, are relatively simple and compact. To obtain better image quality or to image low-temperature objects, Stirling cryocoolers are needed. Although the cooling apparatus may be comparatively bulky and expensive, cooled infrared cameras provide greatly superior image quality compared to uncooled ones, particularly of objects near or below room temperature. Additionally, the greater sensitivity of cooled cameras also allows the use of higher F-number lenses, making high-performance, long-focal-length lenses both smaller and cheaper for cooled detectors. An alternative to Stirling coolers is to use gases bottled at high pressure, nitrogen being a common choice. The pressurised gas is expanded via a micro-sized orifice and passed over a miniature heat exchanger, resulting in regenerative cooling via the Joule–Thomson effect. For such systems the supply of pressurized gas is a logistical concern for field use.
Materials used for cooled infrared detection include photodetectors based on a wide range of narrow gap semiconductors including indium antimonide (3-5 μm), indium arsenide, mercury cadmium telluride (MCT) (1-2 μm, 3-5 μm, 8-12 μm), lead sulfide, and lead selenide. Infrared photodetectors can also be created with structures of high bandgap semiconductors such as in quantum well infrared photodetectors. Cooled bolometer technologies can be superconducting or non-superconducting. Superconducting detectors offer extreme sensitivity, with some able to register individual photons; one example is ESA's Superconducting Camera (SCAM). However, they are not in regular use outside of scientific research. In principle, superconducting tunneling junction devices could be used as infrared sensors because of their very narrow gap. Small arrays have been demonstrated, but they have not been broadly adopted for use because their high sensitivity requires careful shielding from background radiation. ==== Uncooled infrared detectors ==== Uncooled thermal cameras use a sensor operating at ambient temperature, or a sensor stabilized at a temperature close to ambient using small temperature control elements. Modern uncooled detectors all use sensors that work by the change of resistance, voltage or current when heated by infrared radiation. These changes are then measured and compared to the values at the operating temperature of the sensor. In uncooled detectors the temperature differences at the sensor pixels are minute; a 1 °C difference at the scene induces just a 0.03 °C difference at the sensor. The pixel response time is also fairly slow, in the range of tens of milliseconds. Uncooled infrared sensors can be stabilized to an operating temperature to reduce image noise, but they are not cooled to low temperatures and do not require bulky, expensive, energy consuming cryogenic coolers. This makes infrared cameras smaller and less costly. However, their resolution and image quality tend to be lower than those of cooled detectors. This is due to differences in their fabrication processes, limited by currently available technology. An uncooled thermal camera also needs to deal with its own heat signature. Uncooled detectors are mostly based on pyroelectric and ferroelectric materials or microbolometer technology. These materials are used to form pixels with highly temperature-dependent properties, which are thermally insulated from the environment and read electronically. Ferroelectric detectors operate close to the phase transition temperature of the sensor material; the pixel temperature is read as the highly temperature-dependent polarization charge. The achieved NETD of ferroelectric detectors with f/1 optics and 320×240 sensors is 70-80 mK. A possible sensor assembly consists of barium strontium titanate bump-bonded by a polyimide thermally insulated connection. Silicon microbolometers can reach NETD down to 20 mK. They consist of a layer of amorphous silicon, or a thin film vanadium(V) oxide sensing element suspended on a silicon nitride bridge above the silicon-based scanning electronics. The electric resistance of the sensing element is measured once per frame. Current improvements of uncooled focal plane arrays (UFPA) are focused primarily on higher sensitivity and pixel density. In 2013 DARPA announced a five-micron LWIR camera that uses a 1280 × 720 focal plane array (FPA).
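To put the figure quoted above in perspective, the following sketch converts scene temperature contrasts into the tiny temperature changes induced at an uncooled sensor pixel, using the 0.03 °C-per-1 °C factor from the text; the function name and example contrasts are illustrative assumptions.

```python
# Minimal sketch of the scene-to-sensor attenuation quoted above for uncooled
# detectors: a 1 degC contrast at the scene produces only ~0.03 degC at the pixel.
# The 0.03 factor is taken directly from the text; everything else is illustrative.

SCENE_TO_SENSOR = 0.03  # degC at the pixel per degC at the scene (from the text)

def sensor_delta_t(scene_delta_t_c):
    """Temperature change induced at an uncooled sensor pixel for a given scene contrast."""
    return SCENE_TO_SENSOR * scene_delta_t_c

for scene_dt in (0.1, 1.0, 10.0):
    print(f"scene contrast {scene_dt:5.1f} degC -> pixel change {sensor_delta_t(scene_dt)*1000:6.1f} mK")
```

For sub-degree scene contrasts the pixel-level changes amount to only a few millikelvins, which is why the readout electronics must resolve extremely small signals.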
Some of the materials used for the sensor arrays are amorphous silicon (a-Si), vanadium(V) oxide (VOx), lanthanum barium manganite (LBMO), lead zirconate titanate (PZT), lanthanum doped lead zirconate titanate (PLZT), lead scandium tantalate (PST), lead lanthanum titanate (PLT), lead titanate (PT), lead zinc niobate (PZN), lead strontium titanate (PSrT), barium strontium titanate (BST), barium titanate (BT), antimony sulfoiodide (SbSI), and polyvinylidene difluoride (PVDF). === CCD and CMOS thermography === Non-specialized charge-coupled device (CCD) and CMOS sensors have most of their spectral sensitivity in the visible light wavelength range. However, by utilizing the "trailing" area of their spectral sensitivity, namely the part of the infrared spectrum called near-infrared (NIR), and by using off-the-shelf CCTV camera it is possible under certain circumstances to obtain true thermal images of objects with temperatures at about 280 °C (536 °F) and higher. At temperatures of 600 °C and above, inexpensive cameras with CCD and CMOS sensors have also been used for pyrometry in the visible spectrum. They have been used for soot in flames, burning coal particles, heated materials, SiC filaments, and smoldering embers. This pyrometry has been performed using external filters or only the sensor's Bayer filters. It has been performed using color ratios, grayscales, and/or a hybrid of both. === Infrared films === Infrared (IR) film is sensitive to black-body radiation in the 250 to 500 °C (482 to 932 °F) range, while the range of thermography is approximately −50 to 2,000 °C (−58 to 3,632 °F). So, for an IR film to work thermographically, the measured object must be over 250 °C (482 °F) or be reflecting infrared radiation from something that is at least that hot. === Comparison with night-vision devices === Starlight-type night-vision devices generally only magnify ambient light and are not thermal imagers. Some infrared cameras marketed as night vision are sensitive to near-infrared just beyond the visual spectrum, and can see emitted or reflected near-infrared in complete visual darkness. However, these are not usually used for thermography due to the high equivalent black-body temperature required, but are instead used with active near-IR illumination sources. == Passive vs. active thermography == All objects above the absolute zero temperature (0 K) emit infrared radiation. Hence, an excellent way to measure thermal variations is to use an infrared sensing device, usually a focal plane array (FPA) infrared camera capable of detecting radiation in the mid (3 to 5 μm) and long (7 to 14 μm) wave infrared bands, denoted as MWIR and LWIR, corresponding to two of the high transmittance infrared windows. Abnormal temperature profiles at the surface of an object are an indication of a potential problem. In passive thermography, the features of interest are naturally at a higher or lower temperature than the background. Passive thermography has many applications such as surveillance of people on a scene and medical diagnosis (specifically thermology). In active thermography, an energy source is required to produce a thermal contrast between the feature of interest and the background. The active approach is necessary in many cases given that the inspected parts are usually in equilibrium with the surroundings. 
Given the superlinear dependence of black-body radiation on temperature, active thermography can also be used to enhance the resolution of imaging systems beyond their diffraction limit or to achieve super-resolution microscopy. == Advantages == Thermography shows a visual picture so temperatures over a large area can be compared. It is capable of catching moving targets in real time. It is able to find deterioration, i.e., higher temperature components prior to their failure. It can be used to measure or observe in areas inaccessible or hazardous for other methods. It is a non-destructive test method. It can be used to find defects in shafts, pipes, and other metal or plastic parts. It can be used to detect objects in dark areas. It has some medical applications, notably in physiotherapy. == Limitations and disadvantages == Quality thermography cameras often have a high price (often US$3,000 or more) due to the expense of the larger pixel array (state of the art 1280×1024), although less expensive models (with pixel arrays of 40×40 up to 160×120 pixels) are also available. Fewer pixels compared to traditional cameras reduce the image quality, making it more difficult to distinguish proximate targets within the same field of view. There is also a difference in refresh rate. Some cameras may have a refresh rate of only 5–15 Hz, while others (e.g. the FLIR X8500sc) reach 180 Hz or more when reading out a reduced window. The lens may or may not be integrated into the camera. Many models do not provide the irradiance measurements used to construct the output image; the loss of this information without a correct calibration for emissivity, distance, and ambient temperature and relative humidity entails that the resultant images are inherently incorrect measurements of temperature. Images can be difficult to interpret accurately when based upon certain objects, specifically objects with erratic temperatures, although this problem is reduced in active thermal imaging. Thermographic cameras create thermal images based on the radiant heat energy they receive. Because the radiation received is influenced by the emissivity of the surface being measured and by reflected radiation such as sunlight, errors are introduced into the measurements. Most cameras have ±2% accuracy or worse in measurement of temperature and are not as accurate as contact methods. Methods and instruments are limited to directly detecting surface temperatures. == Applications == Thermography finds many uses, and thermal imaging cameras are excellent tools for the maintenance of electrical and mechanical systems in industry and commerce. For example, firefighters use it to see through smoke, find people, and localize hotspots of fires. Power line maintenance technicians locate overheating joints and parts, a telltale sign of their failure, to eliminate potential hazards. Where thermal insulation becomes faulty, building construction technicians can see heat leaks to improve the efficiency of heating and cooling systems. By using proper camera settings, electrical systems can be scanned and problems can be found. Faults with steam traps in steam heating systems are easy to locate. In the energy savings area, thermal imaging cameras can see the effective radiation temperature of an object as well as what that object is radiating towards, which can help locate sources of thermal leaks and overheated regions. Cooled infrared cameras can be found at major astronomy research telescopes, even those that are not infrared telescopes.
Examples include telescopes such as UKIRT, the Spitzer Space Telescope, WISE and the James Webb Space Telescope. For automotive night vision, thermal imaging cameras are also installed in some luxury cars to aid the driver, the first being the 2000 Cadillac DeVille. In smartphones, a thermal camera was first integrated into the Cat S60 in 2016. === Industry === In manufacturing, engineering and research, thermography can be used for: Process control Research and development of new products Condition monitoring Electrical distribution equipment diagnosis and maintenance, such as transformer yards and distribution panels Nondestructive testing Fault diagnosis and troubleshooting Program process monitoring Quality control in production environments Predictive maintenance (early failure warning) on mechanical and electrical equipment Data center monitoring Inspecting photovoltaic power plants In building inspection, thermography can be used in: Roof inspection, such as for low slope and flat roofing Building diagnostics, including building envelope inspections, and energy losses in buildings Locating pest infestations Energy auditing of building insulation and detection of refrigerant leaks Home performance Moisture detection in walls and roofs (and thus in turn often part of mold remediation) Masonry wall structural analysis === Health === Some physiological activities, particularly responses such as fever, in human beings and other warm-blooded animals can also be monitored with non-contact thermography. This can be compared to contact thermography such as with traditional thermometers. Healthcare-related uses include: Dynamic angiothermography Peripheral vascular disease screening. Medical imaging in infrared Thermography (medical) - Medical testing for diagnosis Carotid artery stenosis (CAS) screening through skin thermal maps. Active Dynamic Thermography (ADT) for medical applications. Neuromusculoskeletal disorders. Extracranial cerebral and facial vascular disease. Facial emotion recognition. Thyroid gland abnormalities. Various other neoplastic, metabolic, and inflammatory conditions. === Security and defence === Thermography is often used in surveillance, security, firefighting, law enforcement, and anti-terrorism: Quarantine monitoring of visitors to a country Technical surveillance counter-measures Search and rescue operations Firefighting operations UAV surveillance In weapons systems, thermography can be used in military and police target detection and acquisition: Forward-looking infrared Infrared search and track Night vision Infrared targeting Thermal weapon sight In computer hacking, a thermal attack is an approach that exploits heat traces left after interacting with interfaces, such as touchscreens or keyboards, to uncover the user's input. === Other applications === Other areas in which these techniques are used: Thermal mapping Archaeological kite aerial thermography Thermology Veterinary thermal imaging Thermal imaging in ornithology and other wildlife monitoring Nighttime wildlife photography Stereo vision Chemical imaging Volcanology Agriculture, e.g., Seed-counting machine Baby monitoring systems Pollution effluent detection Aerial archaeology Flame detector Meteorology (thermal images from weather satellites are used to determine cloud temperature/height and water vapor concentrations, depending on the wavelength) Cricket Umpire Decision Review System.
To detect faint contact of the ball with the bat (and hence a heat patch signature on the bat after contact). Autonomous navigation == Standards == ASTM International (ASTM) ASTM C1060, Standard Practice for Thermographic Inspection of Insulation Installations in Envelope Cavities of Frame Buildings ASTM C1153, Standard Practice for the Location of Wet Insulation in Roofing Systems Using Infrared Imaging ASTM D4788, Standard Test Method for Detecting Delamination in Bridge Decks Using Infrared Thermography ASTM E1186, Standard Practices for Air Leakage Site Detection in Building Envelopes and Air Barrier Systems ASTM E1934, Standard Guide for Examining Electrical and Mechanical Equipment with Infrared Thermography International Organization for Standardization (ISO) ISO 6781, Thermal insulation – Qualitative detection of thermal irregularities in building envelopes – Infrared method ISO 18434-1, Condition monitoring and diagnostics of machines – Thermography – Part 1: General procedures ISO 18436-7, Condition monitoring and diagnostics of machines – Requirements for qualification and assessment of personnel – Part 7: Thermography === Regulation === Higher-end thermographic cameras are often deemed dual-use military grade equipment, and are export-restricted, particularly if the resolution is 640×480 or greater, unless the refresh rate is 9 Hz or less. The export from the USA of specific thermal cameras is regulated by International Traffic in Arms Regulations. == In biology == Thermography, by strict definition, is a measurement using an instrument, but some living creatures have natural organs that function as counterparts to bolometers, and thus possess a crude type of thermal imaging capability. This is called thermoception. One of the best known examples is infrared sensing in snakes. == History == === Discovery and research of infrared radiation === Infrared was discovered in 1800 by Sir William Herschel as a form of radiation beyond red light. These "infrared rays" (infra is the Latin prefix for "below") were used mainly for thermal measurement. There are four basic laws of IR radiation: Kirchhoff's law of thermal radiation, the Stefan–Boltzmann law, Planck's law, and Wien's displacement law. The development of detectors was mainly focused on the use of thermometers and bolometers until World War I. A significant step in the development of detectors occurred in 1829, when Leopoldo Nobili, using the Seebeck effect, created the first known thermocouple, fabricating an improved thermometer, a crude thermopile. He described this instrument to Macedonio Melloni. Initially, they jointly developed a greatly improved instrument. Subsequently, Melloni worked alone, creating an instrument in 1833 (a multielement thermopile) that could detect a person 10 metres away. The next significant step in improving detectors was the bolometer, invented in 1880 by Samuel Pierpont Langley. Langley and his assistant Charles Greeley Abbot continued to make improvements in this instrument. By 1901, it could detect radiation from a cow from 400 metres away and was sensitive to temperature differences of one hundred-thousandth of a degree Celsius (0.00001 °C). The first commercial thermal imaging camera was sold in 1965 for high voltage power line inspections. The first civil sector application of IR technology may have been a device to detect the presence of icebergs and steamships using a mirror and thermopile, patented in 1913.
This was soon outdone by the first accurate IR iceberg detector, which did not use thermopiles, patented in 1914 by R.D. Parker. This was followed by G.A. Barker's proposal to use the IR system to detect forest fires in 1934. The technique was not genuinely industrialized until it was used to analyze heating uniformity in hot steel strips in 1935. === First thermographic camera === In 1929, Hungarian physicist Kálmán Tihanyi invented the infrared-sensitive (night vision) electronic television camera for anti-aircraft defense in Britain. The first American thermographic camera developed was an infrared line scanner. This was created by the US military and Texas Instruments in 1947 and took one hour to produce a single image. While several approaches were investigated to improve the speed and accuracy of the technology, one of the most crucial factors dealt with scanning an image, which the AGA company was able to commercialize using a cooled photoconductor. The first British infrared linescan system was Yellow Duckling of the mid-1950s. This used a continuously rotating mirror and detector, with Y-axis scanning by the motion of the carrier aircraft. Although unsuccessful in its intended application of submarine tracking by wake detection, it was applied to land-based surveillance and became the foundation of military IR linescan. This work was further developed at the Royal Signals and Radar Establishment in the UK when they discovered that mercury cadmium telluride was a photoconductor that required much less cooling. Honeywell in the United States also developed arrays of detectors that could cool at a lower temperature, but they scanned mechanically. This method had several disadvantages which could be overcome using an electronic scanning system. In 1969 Michael Francis Tompsett at English Electric Valve Company in the UK patented a camera that scanned pyro-electronically and which reached a high level of performance after several other breakthroughs during the 1970s. Tompsett also proposed an idea for solid-state thermal-imaging arrays, which eventually led to modern hybridized single-crystal-slice imaging devices. By using video camera tubes such as vidicons with a pyroelectric material such as triglycine sulfate (TGS) as their targets, a vidicon sensitive over a broad portion of the infrared spectrum is possible. This technology was a precursor to modern microbolometer technology, and mainly used in firefighting thermal cameras. === Smart sensors === One of the essential areas of development for security systems was for the ability to intelligently evaluate a signal, as well as warning of a threat's presence. Under the encouragement of the US Strategic Defense Initiative, "smart sensors" began to appear. These are sensors that could integrate sensing, signal extraction, processing, and comprehension. There are two main types of smart sensors. One, similar to what is called a "vision chip" when used in the visible range, allow for preprocessing using smart sensing techniques due to the increase in growth of integrated microcircuitry. The other technology is more oriented to specific use and fulfills its preprocessing goal through its design and structure. Towards the end of the 1990s, the use of infrared was moving towards civilian use. There was a dramatic lowering of costs for uncooled arrays, which along with the significant increase in developments, led to a dual-use market encompassing both civilian and military uses. 
These uses include environmental control, building/art analysis, functional medical diagnostics, and car guidance and collision avoidance systems. == See also == == References == == External links == Infrared Tube, infrared imaging science demonstrations Compix, Some uses of thermographic images in electronics Thermographic Images, Infrared pictures Uncooled thermal imaging works round the clock by Lawrence Mayes Archaeological aerial thermography IR Thermometry & Thermography Applications Repository
Wikipedia/Thermographic_camera
In thermodynamics, the internal energy of a system is expressed in terms of pairs of conjugate variables such as temperature and entropy, pressure and volume, or chemical potential and particle number. In fact, all thermodynamic potentials are expressed in terms of conjugate pairs. The product of two quantities that are conjugate has units of energy or sometimes power. For a mechanical system, a small increment of energy is the product of a force times a small displacement. A similar situation exists in thermodynamics. An increment in the energy of a thermodynamic system can be expressed as the sum of the products of certain generalized "forces" that, when unbalanced, cause certain generalized "displacements", and the product of the two is the energy transferred as a result. These forces and their associated displacements are called conjugate variables. The thermodynamic force is always an intensive variable and the displacement is always an extensive variable, yielding an extensive energy transfer. The intensive (force) variable is the derivative of the internal energy with respect to the extensive (displacement) variable, while all other extensive variables are held constant. The thermodynamic square can be used as a tool to recall and derive some of the thermodynamic potentials based on conjugate variables. In the above description, the product of two conjugate variables yields an energy. In other words, the conjugate pairs are conjugate with respect to energy. In general, conjugate pairs can be defined with respect to any thermodynamic state function. Conjugate pairs with respect to entropy are often used, in which the product of the conjugate pairs yields an entropy. Such conjugate pairs are particularly useful in the analysis of irreversible processes, as exemplified in the derivation of the Onsager reciprocal relations. == Overview == Just as a small increment of energy in a mechanical system is the product of a force times a small displacement, so an increment in the energy of a thermodynamic system can be expressed as the sum of the products of certain generalized "forces" which, when unbalanced, cause certain generalized "displacements" to occur, with their product being the energy transferred as a result. These forces and their associated displacements are called conjugate variables. For example, consider the p V {\displaystyle pV} conjugate pair. The pressure p {\displaystyle p} acts as a generalized force: Pressure differences force a change in volume d V {\displaystyle \mathrm {d} V} , and their product is the energy lost by the system due to work. Here, pressure is the driving force, volume is the associated displacement, and the two form a pair of conjugate variables. In a similar way, temperature differences drive changes in entropy, and their product is the energy transferred by heat transfer. The thermodynamic force is always an intensive variable and the displacement is always an extensive variable, yielding an extensive energy. The intensive (force) variable is the derivative of the (extensive) internal energy with respect to the extensive (displacement) variable, with all other extensive variables held constant. The theory of thermodynamic potentials is not complete until one considers the number of particles in a system as a variable on par with the other extensive quantities such as volume and entropy. The number of particles is, like volume and entropy, the displacement variable in a conjugate pair. 
The generalized force component of this pair is the chemical potential. The chemical potential may be thought of as a force which, when imbalanced, pushes an exchange of particles, either with the surroundings, or between phases inside the system. In cases where there is a mixture of chemicals and phases, this is a useful concept. For example, if a container holds liquid water and water vapor, there will be a chemical potential (which is negative) for the liquid which pushes the water molecules into the vapor (evaporation) and a chemical potential for the vapor, pushing vapor molecules into the liquid (condensation). Only when these "forces" equilibrate, and the chemical potential of each phase is equal, is equilibrium obtained. The most commonly considered conjugate thermodynamic variables are (with corresponding SI units): Thermal parameters: Temperature: T (K) Entropy: S (J K−1) Mechanical parameters: Pressure: p (Pa = J m−3) Volume: V (m3 = J Pa−1) or, more generally, Stress: σij (Pa = J m−3) Volume × Strain: V × εij (m3 = J Pa−1) Material parameters: Chemical potential: μ (J) Particle number: N (particles or moles) For a system with different types i of particles, a small change in the internal energy is given by: {\displaystyle \mathrm {d} U=T\,\mathrm {d} S-p\,\mathrm {d} V+\sum _{i}\mu _{i}\,\mathrm {d} N_{i}\,,} where U is internal energy, T is temperature, S is entropy, p is pressure, V is volume, μi is the chemical potential of the i-th particle type, and Ni is the number of i-type particles in the system. Here, the temperature, pressure, and chemical potential are the generalized forces, which drive the generalized changes in entropy, volume, and particle number respectively. These parameters all affect the internal energy of a thermodynamic system. A small change dU in the internal energy of the system is given by the sum of the flow of energy across the boundaries of the system due to the corresponding conjugate pair. These concepts will be expanded upon in the following sections. While dealing with processes in which systems exchange matter or energy, classical thermodynamics is not concerned with the rate at which such processes take place, termed kinetics. For this reason, the term thermodynamics is usually used synonymously with equilibrium thermodynamics. A central notion for this connection is that of quasistatic processes, namely idealized, "infinitely slow" processes. Time-dependent thermodynamic processes far away from equilibrium are studied by non-equilibrium thermodynamics. This can be done through linear or non-linear analysis of irreversible processes, allowing systems near and far away from equilibrium to be studied, respectively. == Pressure/volume and stress/strain pairs == As an example, consider the pV conjugate pair. The pressure acts as a generalized force – pressure differences force a change in volume, and their product is the energy lost by the system due to mechanical work. Pressure is the driving force, volume is the associated displacement, and the two form a pair of conjugate variables.
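As a minimal numerical sketch of the fundamental relation above, the following Python snippet adds up the conjugate-pair contributions to a small change in internal energy; the function name and numerical inputs are arbitrary illustrative assumptions, not data from the article.

```python
# Minimal sketch of dU = T dS - p dV + sum_i mu_i dN_i for small changes.
# Each term is an intensive "force" multiplied by a small extensive "displacement".

def internal_energy_change(T, dS, p, dV, chemical_potentials, dN):
    """Sum the conjugate-pair contributions to a small change in internal energy."""
    dU = T * dS - p * dV
    dU += sum(mu_i * dN_i for mu_i, dN_i in zip(chemical_potentials, dN))
    return dU

# One particle species: heat the system slightly while it expands against pressure.
dU = internal_energy_change(
    T=300.0,                          # K
    dS=0.01,                          # J/K
    p=101325.0,                       # Pa
    dV=1e-5,                          # m^3
    chemical_potentials=[-2.0e-20],   # J per particle (illustrative value)
    dN=[1e18],                        # particles added
)
print(f"dU = {dU:.3f} J")
```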
The above holds true only for non-viscous fluids. In the case of viscous fluids and plastic and elastic solids, the pressure force is generalized to the stress tensor, and changes in volume are generalized to the volume multiplied by the strain tensor. These then form a conjugate pair. If σ i j {\displaystyle \sigma _{ij}} is the ij component of the stress tensor, and ε i j {\displaystyle \varepsilon _{ij}} is the ij component of the strain tensor, then the mechanical work done as the result of a stress-induced infinitesimal strain ε i j {\displaystyle \mathrm {\varepsilon } _{ij}} is: δ w = V ∑ i j σ i j d ε i j {\displaystyle \delta w=V\sum _{ij}\sigma _{ij}\,\mathrm {d} \varepsilon _{ij}} or, using Einstein notation for the tensors, in which repeated indices are assumed to be summed: δ w = V σ i j d ε i j {\displaystyle \delta w=V\sigma _{ij}\,\mathrm {d} \varepsilon _{ij}} In the case of pure compression (i.e. no shearing forces), the stress tensor is simply the negative of the pressure times the unit tensor so that δ w = V ( − p δ i j ) d ε i j = − ∑ k p V d ε k k {\displaystyle \delta w=V\,(-p\delta _{ij})\,\mathrm {d} \varepsilon _{ij}=-\sum _{k}pV\,\mathrm {d} \varepsilon _{kk}} The trace of the strain tensor ( ε k k {\displaystyle \varepsilon _{kk}} ) is the fractional change in volume so that the above reduces to δ w = − p d V {\displaystyle \delta w=-p\mathrm {d} V} as it should. == Temperature/entropy pair == In a similar way, temperature differences drive changes in entropy, and their product is the energy transferred by heating. Temperature is the driving force, entropy is the associated displacement, and the two form a pair of conjugate variables. The temperature/entropy pair of conjugate variables is the only heat term; the other terms are essentially all various forms of work. == Chemical potential/particle number pair == The chemical potential is like a force which pushes an increase in particle number. In cases where there are a mixture of chemicals and phases, this is a useful concept. For example, if a container holds water and water vapor, there will be a chemical potential (which is negative) for the liquid, pushing water molecules into the vapor (evaporation) and a chemical potential for the vapor, pushing vapor molecules into the liquid (condensation). Only when these "forces" equilibrate is equilibrium obtained. == See also == Generalized coordinate and generalized force: analogous conjugate variable pairs found in classical mechanics. Intensive and extensive properties Bond graph == References == == Further reading == Lewis, Gilbert Newton; Randall, Merle (1961). Thermodynamics. Revised by Kenneth S. Pitzer and Leo Brewer (2nd ed.). New York City: McGraw-Hill Book. ISBN 9780071138093. {{cite book}}: ISBN / Date incompatibility (help) Callen, Herbert B. (1998). Thermodynamics and an Introduction to Thermostatistics (2nd ed.). New York: John Wiley & Sons. ISBN 978-0-471-86256-7.
Wikipedia/Conjugate_variables_(thermodynamics)
Biological thermodynamics (Thermodynamics of biological systems) is a science that explains the nature and general laws of thermodynamic processes occurring in living organisms as nonequilibrium thermodynamic systems that convert the energy of the Sun and food into other types of energy. The nonequilibrium thermodynamic state of living organisms is ensured by the continuous alternation of cycles of controlled biochemical reactions, accompanied by the release and absorption of energy, which provides them with the properties of phenotypic adaptation and a number of others. == History == In 1935, the first scientific work devoted to the thermodynamics of biological systems was published - the book of the Hungarian-Russian theoretical biologist Erwin S. Bauer (1890-1938) "Theoretical Biology". E. Bauer formulated the "Universal Law of Biology" in the following edition: "All and only living systems are never in equilibrium and perform constant work at the expense of their free energy against the equilibrium required by the laws of physics and chemistry under existing external conditions". This law can be considered the 1st law of thermodynamics of biological systems. In 1957, German-British physician and biochemist Hans Krebs and British-American biochemist Hans Kornberg in the book "Energy Transformations in Living Matter" first described the thermodynamics of biochemical reactions. In their works, H. Krebs and Hans Kornberg showed how in living cells, as a result of biochemical reactions, adenosine triphosphate (ATP) is synthesized from food, which is the main source of energy of living organisms (the Krebs–Kornberg cycle). In 2006, the Israeli-Russian scientist Boris Dobroborsky (1945) published the book "Thermodynamics of Biological Systems", in which the general principles of functioning of living organisms from the perspective of nonequilibrium thermodynamics were formulated for the first time and the nature and properties of their basic physiological functions were explained. == The main provisions of the theory of thermodynamics of biological systems == A living organism is a thermodynamic system of an active type (in which energy transformations occur), striving for a stable nonequilibrium thermodynamic state. The nonequilibrium thermodynamic state in plants is achieved by continuous alternation of phases of solar energy consumption as a result of photosynthesis and subsequent biochemical reactions, as a result of which adenosine triphosphate (ATP) is synthesized in the daytime, and the subsequent release of energy during the splitting of ATP mainly in the dark. Thus, one of the conditions for the existence of life on Earth is the alternation of light and dark time of day. In animals, the processes of alternating cycles of biochemical reactions of ATP synthesis and cleavage occur automatically. Moreover, the processes of alternating cycles of biochemical reactions at the levels of organs, systems and the whole organism, for example, respiration, heart contractions and others occur with different periods and externally manifest themselves in the form of biorhythms. At the same time, the stability of the nonequilibrium thermodynamic state, optimal under certain conditions of vital activity, is provided by feedback systems through the regulation of biochemical reactions in accordance with the Lyapunov stability theory. This principle of vital activity was formulated by B. 
Dobroborsky in the form of the 2nd law of thermodynamics of biological systems in the following wording: The stability of the nonequilibrium thermodynamic state of biological systems is ensured by the continuous alternation of phases of energy consumption and release through controlled reactions of synthesis and cleavage of ATP. The following consequences follow from this law: 1. In living organisms, no process can occur continuously, but must alternate with the opposite direction: inhalation with exhalation, work with rest, wakefulness with sleep, synthesis with cleavage, etc. 2. The state of a living organism is never static, and all its physiological and energy parameters are always in a state of continuous fluctuations relative to the average values both in frequency and amplitude. This principle of functioning of living organisms provides them with the properties of phenotypic adaptation and a number of others. === See also === Bioenergetics Ecological energetics Harris-Benedict Equations Stress (biology) == References == == Further reading == Haynie, D. (2001). Biological Thermodynamics (textbook). Cambridge: Cambridge University Press. Lehninger, A., Nelson, D., & Cox, M. (1993). Principles of Biochemistry, 2nd Ed (textbook). New York: Worth Publishers. Alberty, Robert, A. (2006). Biochemical Thermodynamics: Applications of Mathematica (Methods of Biochemical Analysis), Wiley-Interscience. == External links == Cellular Thermodynamics - Wolfe, J. (2002), Encyclopedia of Life Sciences. Bioenergetics
Wikipedia/Biological_thermodynamics
Atmospheric thermodynamics is the study of heat-to-work transformations (and their reverse) that take place in the Earth's atmosphere and manifest as weather or climate. Atmospheric thermodynamics uses the laws of classical thermodynamics to describe and explain such phenomena as the properties of moist air, the formation of clouds, atmospheric convection, boundary layer meteorology, and vertical instabilities in the atmosphere. Atmospheric thermodynamic diagrams are used as tools in the forecasting of storm development. Atmospheric thermodynamics forms a basis for cloud microphysics and convection parameterizations used in numerical weather models and is used in many climate considerations, including convective-equilibrium climate models. == Overview == The atmosphere is an example of a non-equilibrium system. Atmospheric thermodynamics describes the effect of buoyant forces that cause the rise of less dense (warmer) air, the descent of more dense air, and the transformation of water from liquid to vapor (evaporation) and its condensation. Those dynamics are modified by the force of the pressure gradient, and that motion is modified by the Coriolis force. The tools used include the law of energy conservation, the ideal gas law, specific heat capacities, the assumption of isentropic processes (in which entropy is a constant), and moist adiabatic processes (during which no energy is transferred as heat). Most tropospheric gases are treated as ideal gases, and water vapor, with its ability to change phase from vapor to liquid to solid and back, is considered one of the most important trace components of air. Advanced topics include the phase transitions of water, homogeneous and inhomogeneous nucleation, the effect of dissolved substances on cloud condensation, and the role of supersaturation in the formation of ice crystals and cloud droplets. Considerations of moist air and cloud theories typically involve various temperatures, such as equivalent potential temperature, wet-bulb and virtual temperatures. Connected areas are energy, momentum, and mass transfer, turbulent interaction between air particles in clouds, convection, dynamics of tropical cyclones, and large scale dynamics of the atmosphere. The major role of atmospheric thermodynamics is expressed in terms of adiabatic and diabatic forces acting on air parcels, included in the primitive equations of air motion either as grid-resolved processes or as subgrid parameterizations. These equations form a basis for numerical weather and climate prediction. == History == In the early 19th century thermodynamicists such as Sadi Carnot, Rudolf Clausius, and Émile Clapeyron developed mathematical models of the dynamics of fluid bodies and vapors related to the combustion and pressure cycles of atmospheric steam engines; one example is the Clausius–Clapeyron equation. In 1873, thermodynamicist Willard Gibbs published "Graphical Methods in the Thermodynamics of Fluids." These sorts of foundations naturally began to be applied towards the development of theoretical models of atmospheric thermodynamics which drew the attention of the best minds. Papers on atmospheric thermodynamics appeared in the 1860s that treated such topics as dry and moist adiabatic processes. In 1884 Heinrich Hertz devised the first atmospheric thermodynamic diagram (emagram). The term pseudo-adiabatic process was coined by von Bezold to describe air as it is lifted, expands, cools, and eventually precipitates its water vapor; in 1888 he published a voluminous work entitled "On the thermodynamics of the atmosphere".
In 1911 von Alfred Wegener published a book "Thermodynamik der Atmosphäre", Leipzig, J. A. Barth. From here the development of atmospheric thermodynamics as a branch of science began to take root. The term "atmospheric thermodynamics", itself, can be traced to Frank W. Very's 1919 publication: "The radiant properties of the earth from the standpoint of atmospheric thermodynamics" (Occasional scientific papers of the Westwood Astrophysical Observatory). By the late 1970s various textbooks on the subject began to appear. Today, atmospheric thermodynamics is an integral part of weather forecasting. === Chronology === 1751 Charles Le Roy recognized dew point temperature as point of saturation of air 1782 Jacques Charles made hydrogen balloon flight measuring temperature and pressure in Paris 1784 Concept of variation of temperature with height was suggested 1801–1803 John Dalton developed his laws of pressures of vapours 1804 Joseph Louis Gay-Lussac made balloon ascent to study weather 1805 Pierre Simon Laplace developed his law of pressure variation with height 1841 James Pollard Espy publishes paper on convection theory of cyclone energy 1856 William Ferrel presents dynamics causing westerlies 1889 Hermann von Helmholtz and John William von Bezold used the concept of potential temperature, von Bezold used adiabatic lapse rate and pseudoadiabat 1893 Richard Asman constructs first aerological sonde (pressure-temperature-humidity) 1894 John Wilhelm von Bezold used concept of equivalent temperature 1926 Sir Napier Shaw introduced tephigram 1933 Tor Bergeron published paper on "Physics of Clouds and Precipitation" describing precipitation from supercooled (due to condensational growth of ice crystals in presence of water drops) 1946 Vincent J. Schaeffer and Irving Langmuir performed the first cloud seeding experiment 1986 K. Emanuel conceptualizes tropical cyclone as Carnot heat engine == Applications == === Hadley Circulation === The Hadley Circulation can be considered as a heat engine. The Hadley circulation is identified with rising of warm and moist air in the equatorial region with the descent of colder air in the subtropics corresponding to a thermally driven direct circulation, with consequent net production of kinetic energy. The thermodynamic efficiency of the Hadley system, considered as a heat engine, has been relatively constant over the 1979~2010 period, averaging 2.6%. Over the same interval, the power generated by the Hadley regime has risen at an average rate of about 0.54 TW per yr; this reflects an increase in energy input to the system consistent with the observed trend in the tropical sea surface temperatures. === Tropical cyclone Carnot cycle === The thermodynamic behavior of a hurricane can be modelled as a heat engine that operates between the heat reservoir of the sea at a temperature of about 300K (27 °C) and the heat sink of the tropopause at a temperature of about 200K (−72 °C) and in the process converts heat energy into mechanical energy of winds. Parcels of air traveling close to the sea surface take up heat and water vapor, the warmed air rises and expands and cools as it does so causes condensation and precipitation. The rising air, and condensation, produces circulatory winds that are propelled by the Coriolis force, which whip up waves and increase the amount of warm moist air that powers the cyclone. 
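A minimal sketch of the idealized heat-engine picture above: treating the hurricane as a Carnot engine between the quoted sea-surface (~300 K) and tropopause (~200 K) temperatures gives the maximum fraction of absorbed heat that can be converted into the mechanical energy of winds. The function name and the second pair of temperatures are illustrative assumptions.

```python
# Minimal sketch: Carnot efficiency of the idealized hurricane heat engine described
# above, operating between the sea surface (~300 K) and the tropopause (~200 K).
# The reservoir temperatures come from the text; the second example is illustrative.

def carnot_efficiency(t_hot_k, t_cold_k):
    """Maximum fraction of absorbed heat that an ideal engine can convert to work."""
    return 1.0 - t_cold_k / t_hot_k

print(f"sea 300 K, tropopause 200 K: {carnot_efficiency(300.0, 200.0):.2f}")
# A warmer sea surface or a colder outflow layer raises the ceiling on intensity:
print(f"sea 302 K, tropopause 195 K: {carnot_efficiency(302.0, 195.0):.2f}")
```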
Both a decreasing temperature in the upper troposphere or an increasing temperature of the atmosphere close to the surface will increase the maximum winds observed in hurricanes. When applied to hurricane dynamics it defines a Carnot heat engine cycle and predicts maximum hurricane intensity. === Water vapor and global climate change === The Clausius–Clapeyron relation shows how the water-holding capacity of the atmosphere increases by about 8% per Celsius increase in temperature. (It does not directly depend on other parameters like the pressure or density.) This water-holding capacity, or "equilibrium vapor pressure", can be approximated using the August-Roche-Magnus formula e s ( T ) = 6.1094 exp ⁡ ( 17.625 T T + 243.04 ) {\displaystyle e_{s}(T)=6.1094\exp \left({\frac {17.625T}{T+243.04}}\right)} (where e s ( T ) {\displaystyle e_{s}(T)} is the equilibrium or saturation vapor pressure in hPa, and T {\displaystyle T} is temperature in degrees Celsius). This shows that when atmospheric temperature increases (e.g., due to greenhouse gases) the absolute humidity should also increase exponentially (assuming a constant relative humidity). However, this purely thermodynamic argument is subject of considerable debate because convective processes might cause extensive drying due to increased areas of subsidence, efficiency of precipitation could be influenced by the intensity of convection, and because cloud formation is related to relative humidity. == See also == Atmospheric convection Atmospheric temperature Atmospheric wave Chemical thermodynamics Cloud physics Equilibrium thermodynamics Fluid dynamics Non-equilibrium thermodynamics Thermodynamics == Special topics == Lorenz, E. N., 1955, Available potential energy and the maintenance of the general circulation, Tellus, 7, 157–167. Emanuel, K, 1986, Part I. An air-sea interaction theory for tropical cyclones, J. Atmos. Sci. 43, 585, (energy cycle of the mature hurricane has been idealized here as Carnot engine that converts heat energy extracted from the ocean to mechanical energy). == References == == Further reading ==
Wikipedia/Atmospheric_thermodynamics
A forced-air central heating system is one which uses air as its heat transfer medium. These systems rely on ductwork, vents, and plenums as means of air distribution, separate from the actual heating and air conditioning systems. The return plenum carries the air from several large return grills (vents) to a central air handler for re-heating. The supply plenum directs air from the central unit to the rooms which the system is designed to heat. Regardless of type, all air handlers consist of an air filter, blower, heat exchanger/element/coil, and various controls. Like any other kind of central heating system, thermostats are used to control forced air heating systems. Forced air heating is the type of central heating most commonly installed in North America. It is much less common in Europe, where hydronic heating predominates, especially in the form of hot-water radiators. == Types == === Natural gas/propane/oil/coal/wood === Heat is produced via combustion of fuel. A heat exchanger keeps the combustion byproducts from entering the air stream. A ribbon style (long with holes), inshot (torch-like), or oil type burner is located in the heat exchanger. Ignition is provided by an electric spark, standing pilot, or hot surface igniter. Safety devices ensure that combustion gases and/or unburned fuel do not accumulate in the event of an ignition failure or venting failure. === Electric === A simple electric heating element warms the air. When the thermostat calls for heat, blower and element come on at the same time. When thermostat is "satisfied", blower and element shut off. Requires very little maintenance. Usually more expensive to operate than a natural gas furnace. === Heat pump === Extracts heat from the environment, using either the ground or air as the source, via the refrigeration cycle. Requires less energy than electric resistance heating and possibly more efficient than fossil fuel fired furnaces (gas/oil/coal). Air source types may not be suitable for cold climates unless used with backup (secondary) source of heat. Newer models may still provide heat when coping with temperatures below 0 °C (32 °F). A refrigerant coil is located in the air handler instead of a burner/heat exchanger. The system can also be used for cooling, just as any central air-conditioning system. See Heat pumps === Hydronic coil === Combines hydronic (hot water) heating with a forced air delivery Heat is produced via combustion of fuel (gas/propane/oil) in a boiler A heat exchanger (hydronic coil) is placed in the air handler similar to the refrigerant coil in a Heat Pump system or a Central AC. Copper is often specified in supply and return manifolds and in tube coils. Heated water is pumped through the heat exchanger then back to the boiler to be reheated ==== Sequence of operation ==== Thermostat calls for heat Source of ignition is provided at the boiler Circulator initiates water flow to the hydronic coil (heat exchanger) Once the heat exchanger warms up, the main blower is activated When call for heat ceases, the boiler and circulator turn off Blower shuts off after period of time (depending on the particular equipment involved this may be a fixed or programmable amount of time) == Self-balancing mechanism == The basis of any CAV regulator is the self-balancing mechanism. It is the design of this mechanism that determines the accuracy of maintaining the set flow rate, noise level, minimum regulator resistance, flow range and other parameters. 
There are different designs of the self-balancing mechanism that largely determine the technical characteristics of CAV regulators: Self-balancing mechanism based on a silicone adjustment diaphragm that changes its volume depending on the air pressure in the duct, thereby increasing or decreasing the air flow area. Self-balancing mechanism with overlapping section. The self-balancing damper with spring automatically closes the remaining part of the cross-section depending on the duct pressures. Self-balancing mechanism with connector for flow adjustment. Typically, the regulator damper is made of lightweight aluminum, and the self-balancing mechanism consists of plastic levers and a transmission, a steel spring and silicone vibration dampers, which are necessary to prevent auto-oscillation. == CAV and VAV == An alternative to a constant air volume system is a variable air volume (VAV) system. Variable air volume systems are generally more complex than their CAV counterparts because they must utilize temperature control and control the actual volume of air blown into each room. Although more difficult to design and implement, a VAV system is more energy efficient than a CAV system because the components of a variable airflow design operate only as needed. == Advantages and disadvantages == Compared to water, air masses have a lower heat capacity. This means that they cool down faster, but they also raise the room temperature in a short time. Low thermal inertia allows buildings of widely different volumes to be heated in just a few minutes. At the same time, all the heat goes directly into heating the rooms. === Systems with air-heating units === Disadvantages: high noise level, dispersal of dust, each unit requires a supply of heat-transfer fluid and electricity, and a high vertical gradient of air temperature. Advantages: does not require ducts of large cross-section, and provides a long air-jet throw. === Air heating systems combined with supply ventilation === Disadvantages: require ducts with large cross-sections, a backup supply unit and pump are needed in the piping assembly, a high vertical gradient of air temperature, and a short air-jet throw. Advantages: visually unobtrusive (only the grilles are visible) and inexpensive when combined with the ventilation system. == See also == Forced-air gas Copper in heat exchangers == References ==
Wikipedia/Forced-air
A home energy monitor is a device that provides information about personal electrical energy usage to a consumer of electricity. Devices may display the amount of electricity used, plus the cost of energy used and estimates of greenhouse gas emissions. The purpose of such devices is to assist in the management of power consumption. Several initiatives have been launched to increase the usage of home energy monitors. Studies have shown a reduction in home energy use when the devices are used. == Description == A home energy monitor provides information about electrical energy usage to electricity consumers (e.g., homeowners). In addition to the amount of electrical usage, devices may display other information, including the cost of energy used and estimates of greenhouse gas emissions. The purpose of such devices is to assist in the management of power consumption. Home energy monitors consist of a measuring component and a display component. Electricity use is measured with an inductive clamp placed around the electric main, through an optical port on the electric meter, by sensing the meter's actions, communicating with a smart meter, or via a direct connection to the electrical system. Some plug-in units can store their readings when not connected. The display portion may be remote from the measurement component, communicating with the sensor via cable, network, power line communications, or radio. Online displays are also available, allowing the user to view near real-time consumption on an internet-connected display. == Initiatives == === Australia === In January 2009 the government of the state of Queensland, Australia began offering wireless energy monitors as part of its ClimateSmart Home Service program. By August 2009, nearly 100,000 homes had signed up for the service, and by August 2010, the number had increased to 200,000 homes. In mid-2013 the government of the state of Victoria, Australia enabled Zigbee-based In-Home Displays to be connected to Victorian smart meters. From September 2019, Victorian households have been eligible for rebates for home energy monitor installation under the Victorian Energy Upgrades Program. === Google PowerMeter === Google PowerMeter was a software project of Google's philanthropic arm, Google.org, to help consumers track their home electricity usage that ran from October 5, 2009 to September 16, 2011. == Studies == Various studies have shown a reduction in home energy use of 4-15% through use of a home energy display. A study by Hydro One using the PowerCost Monitor deployed in 500 Ontario homes showed an average 6.5% drop in total electricity use when compared with a similarly sized control group. Based on these results, Hydro One subsequently offered power monitors to 30,000 customers for $8.99 shipping and handling. A study in the city of Sabadell, Spain in 2009 using the Efergy e2 in 29 households during a six-month period found a drop of 11.8% in weekly consumption between the first and last weeks of the campaign. On a monthly basis, the savings were 14.3%. Expected annual CO2 emissions for all households were estimated to reduce by 4.1 tonnes; projected emissions savings for 2020 were 180.6 tonnes. == See also == AlertMe Energy management software Google PowerMeter Energy conservation Hohm Home automation Kill A Watt Nonintrusive load monitoring Smart meter Wattmeter == References == == External links == gov.uk Saving electricity with a home energy monitor
Wikipedia/Home_energy_monitor
Endoreversible thermodynamics is a subset of irreversible thermodynamics aimed at making more realistic assumptions about heat transfer than are typically made in reversible thermodynamics. It gives an upper bound on the power that can be derived from a real process that is lower than that predicted by Carnot for a Carnot cycle, and accommodates the exergy destruction occurring as heat is transferred irreversibly. It is also called finite-time thermodynamics, entropy generation minimization, or thermodynamic optimization. == History == Endoreversible thermodynamics was discovered multiple times, by Reitlinger (1929), Novikov (1957) and Chambadal (1957), although it is most often attributed to Curzon & Ahlborn (1975). Reitlinger derived it by considering a heat exchanger receiving heat from a finite hot stream fed by a combustion process. A brief review of the history of these rediscoveries is given in the literature. == Efficiency at maximal power == Consider a semi-ideal heat engine, in which heat transfer takes time, according to Fourier's law of heat conduction: {\displaystyle {\dot {Q}}\propto \Delta T} , but other operations happen instantly. Its maximal efficiency is the standard Carnot result, but it requires heat transfer to be reversible (quasistatic), thus taking infinite time. At maximum power output, its efficiency is the Chambadal–Novikov efficiency: {\displaystyle \eta =1-{\sqrt {\frac {T_{L}}{T_{H}}}}=1-{\sqrt {1-\eta _{Carnot}}}} Due to occasional confusion about the origins of the above equation, it is sometimes named the Chambadal–Novikov–Curzon–Ahlborn efficiency. === Derivation === This derivation is a slight simplification of Curzon & Ahlborn. Consider a heat engine, with a single working fluid cycling around the engine. On one side, the working fluid has temperature {\displaystyle T_{H}'} , and is in direct contact with the hot heat bath. On the other side, it has temperature {\displaystyle T_{L}'} , and is in direct contact with the cold heat bath. The heat flow into the engine is {\displaystyle {\dot {Q}}_{H}=k_{H}(T_{H}-T_{H}')} , where {\displaystyle k_{H}} is the heat conduction coefficient. The heat flow out of the engine is {\displaystyle {\dot {Q}}_{L}=k_{L}(T_{L}'-T_{L})} . The power output of the engine is {\displaystyle {\dot {W}}={\dot {Q}}_{H}-{\dot {Q}}_{L}} . Side note: if one cycle of the engine takes time {\displaystyle t} , and during this time, it is in contact with the hot side only for a time {\displaystyle t_{H}} , then we can reduce to this case by replacing {\displaystyle k_{H}} with {\displaystyle k_{H}{\frac {t_{H}}{t}}} . Similar comments apply to the cold side. By Carnot's theorem, we have {\displaystyle \eta ={\frac {\dot {W}}{{\dot {Q}}_{H}}}\leq 1-{\frac {T_{L}'}{T_{H}'}}} .
Maximizing the power output then becomes a constrained optimization problem: { max T H ′ , T L ′ W ˙ W ˙ Q ˙ H ≤ 1 − T L ′ T H ′ {\displaystyle {\begin{cases}\max _{T_{H}',T_{L}'}{\dot {W}}\\{\frac {\dot {W}}{{\dot {Q}}_{H}}}\leq 1-{\frac {T_{L}'}{T_{H}'}}\end{cases}}} This can be solved by typical methods, such as Lagrange multipliers, giving us T H ′ = x T H ; T L ′ = x T L ; x = k H T H + k L T L k H + k L {\displaystyle T_{H}'=x{\sqrt {T_{H}}};\quad T_{L}'=x{\sqrt {T_{L}}};\quad x={\frac {k_{H}{\sqrt {T_{H}}}+k_{L}{\sqrt {T_{L}}}}{k_{H}+k_{L}}}} at which point the engine is operating at efficiency η = 1 − T L T H {\displaystyle \eta =1-{\sqrt {\frac {T_{L}}{T_{H}}}}} . In particular, if k L ≫ k H {\displaystyle k_{L}\gg k_{H}} , then we have T H ′ = T H T L ; T L ′ = T L {\displaystyle T_{H}'={\sqrt {T_{H}T_{L}}};\quad T_{L}'=T_{L}} This is often the case with practical heat engines in power generation plants, where the working fluid can only spend a small amount of time with the hot bath (nuclear reactor core, coal furnace, etc.), but a much larger amount of time with the cold bath (open atmosphere, a large body of water, etc.). === Experimental data === For some typical cycles, the above equation (note that absolute temperatures must be used) gives efficiencies that model the observed data much more closely than the Carnot efficiency does. However, such an engine violates Carnot's principle, which states that work can be done any time there is a difference in temperature. The fact that the hot and cold reservoirs are not at the same temperature as the working fluid they are in contact with means that work can be and is done at the hot and cold reservoirs. The result is tantamount to coupling the high and low temperature parts of the cycle, so that the cycle collapses. In the Carnot cycle, the working fluid must always remain at the same temperatures as the heat reservoirs it is in contact with, and it must be separated from them by adiabatic transformations which prevent thermal contact. The efficiency was first derived by William Thomson in his study of an unevenly heated body in which the adiabatic partitions between bodies at different temperatures are removed and maximum work is performed. It is well known that the final temperature is the geometric mean temperature T H T L {\displaystyle {\sqrt {T_{H}T_{L}}}} so that the efficiency is the Carnot efficiency for an engine working between T H {\displaystyle T_{H}} and T H T L {\displaystyle {\sqrt {T_{H}T_{L}}}} . == See also == An introduction to endoreversible thermodynamics is given in the thesis by Katharina Wagner. It is also introduced by Hoffman et al. A thorough discussion of the concept, together with many applications in engineering, is given in the book by Hans Ulrich Fuchs. == References ==
Wikipedia/Endoreversible_thermodynamics
Thermodynamics is expressed by a mathematical framework of thermodynamic equations which relate various thermodynamic quantities and physical properties measured in a laboratory or production process. Thermodynamics is based on a fundamental set of postulates, that became the laws of thermodynamics. == Introduction == One of the fundamental thermodynamic equations is the description of thermodynamic work in analogy to mechanical work, or weight lifted through an elevation against gravity, as defined in 1824 by French physicist Sadi Carnot. Carnot used the phrase motive power for work. In the footnotes to his famous On the Motive Power of Fire, he states: “We use here the expression motive power to express the useful effect that a motor is capable of producing. This effect can always be likened to the elevation of a weight to a certain height. It has, as we know, as a measure, the product of the weight multiplied by the height to which it is raised.” With the inclusion of a unit of time in Carnot's definition, one arrives at the modern definition for power: P = W t = ( m g ) h t {\displaystyle P={\frac {W}{t}}={\frac {(mg)h}{t}}} During the latter half of the 19th century, physicists such as Rudolf Clausius, Peter Guthrie Tait, and Willard Gibbs worked to develop the concept of a thermodynamic system and the correlative energetic laws which govern its associated processes. The equilibrium state of a thermodynamic system is described by specifying its "state". The state of a thermodynamic system is specified by a number of extensive quantities, the most familiar of which are volume, internal energy, and the amount of each constituent particle (particle numbers). Extensive parameters are properties of the entire system, as contrasted with intensive parameters which can be defined at a single point, such as temperature and pressure. The extensive parameters (except entropy) are generally conserved in some way as long as the system is "insulated" to changes to that parameter from the outside. The truth of this statement for volume is trivial, for particles one might say that the total particle number of each atomic element is conserved. In the case of energy, the statement of the conservation of energy is known as the first law of thermodynamics. A thermodynamic system is in equilibrium when it is no longer changing in time. This may happen in a very short time, or it may happen with glacial slowness. A thermodynamic system may be composed of many subsystems which may or may not be "insulated" from each other with respect to the various extensive quantities. If we have a thermodynamic system in equilibrium in which we relax some of its constraints, it will move to a new equilibrium state. The thermodynamic parameters may now be thought of as variables and the state may be thought of as a particular point in a space of thermodynamic parameters. The change in the state of the system can be seen as a path in this state space. This change is called a thermodynamic process. Thermodynamic equations are now used to express the relationships between the state parameters at these different equilibrium state. The concept which governs the path that a thermodynamic system traces in state space as it goes from one equilibrium state to another is that of entropy. The entropy is first viewed as an extensive function of all of the extensive thermodynamic parameters. 
If we have a thermodynamic system in equilibrium, and we release some of the extensive constraints on the system, there are many equilibrium states that it could move to consistent with the conservation of energy, volume, etc. The second law of thermodynamics specifies that the equilibrium state that it moves to is in fact the one with the greatest entropy. Once we know the entropy as a function of the extensive variables of the system, we will be able to predict the final equilibrium state. (Callen 1985) == Notation == Some of the most common thermodynamic quantities are: The conjugate variable pairs are the fundamental state variables used to formulate the thermodynamic functions. The most important thermodynamic potentials are the following functions: Thermodynamic systems are typically affected by the following types of system interactions. The types under consideration are used to classify systems as open systems, closed systems, and isolated systems. Common material properties determined from the thermodynamic functions are the following: The following constants are constants that occur in many relationships due to the application of a standard system of units. == Laws of thermodynamics == The behavior of a thermodynamic system is summarized in the laws of Thermodynamics, which concisely are: Zeroth law of thermodynamics If A, B, C are thermodynamic systems such that A is in thermal equilibrium with B and B is in thermal equilibrium with C, then A is in thermal equilibrium with C. The zeroth law is of importance in thermometry, because it implies the existence of temperature scales. In practice, C is a thermometer, and the zeroth law says that systems that are in thermodynamic equilibrium with each other have the same temperature. The law was actually the last of the laws to be formulated. First law of thermodynamics d U = δ Q − δ W {\displaystyle dU=\delta Q-\delta W} where d U {\displaystyle dU} is the infinitesimal increase in internal energy of the system, δ Q {\displaystyle \delta Q} is the infinitesimal heat flow into the system, and δ W {\displaystyle \delta W} is the infinitesimal work done by the system. The first law is the law of conservation of energy. The symbol δ {\displaystyle \delta } instead of the plain d, originated in the work of German mathematician Carl Gottfried Neumann and is used to denote an inexact differential and to indicate that Q and W are path-dependent (i.e., they are not state functions). In some fields such as physical chemistry, positive work is conventionally considered work done on the system rather than by the system, and the law is expressed as d U = δ Q + δ W {\displaystyle dU=\delta Q+\delta W} . Second law of thermodynamics The entropy of an isolated system never decreases: d S ≥ 0 {\displaystyle dS\geq 0} for an isolated system. A concept related to the second law which is important in thermodynamics is that of reversibility. A process within a given isolated system is said to be reversible if throughout the process the entropy never increases (i.e. the entropy remains unchanged). Third law of thermodynamics S = 0 {\displaystyle S=0} when T = 0 {\displaystyle T=0} The third law of thermodynamics states that at the absolute zero of temperature, the entropy is zero for a perfect crystalline structure. 
Onsager reciprocal relations – sometimes called the Fourth law of thermodynamics J u = L u u ∇ ( 1 / T ) − L u r ∇ ( m / T ) {\displaystyle \mathbf {J} _{u}=L_{uu}\,\nabla (1/T)-L_{ur}\,\nabla (m/T)} J r = L r u ∇ ( 1 / T ) − L r r ∇ ( m / T ) {\displaystyle \mathbf {J} _{r}=L_{ru}\,\nabla (1/T)-L_{rr}\,\nabla (m/T)} The fourth law of thermodynamics is not yet an agreed upon law (many supposed variations exist); historically, however, the Onsager reciprocal relations have been frequently referred to as the fourth law. == The fundamental equation == The first and second law of thermodynamics are the most fundamental equations of thermodynamics. They may be combined into what is known as fundamental thermodynamic relation which describes all of the changes of thermodynamic state functions of a system of uniform temperature and pressure. As a simple example, consider a system composed of a number of k different types of particles and has the volume as its only external variable. The fundamental thermodynamic relation may then be expressed in terms of the internal energy as: d U = T d S − p d V + ∑ i = 1 k μ i d N i {\displaystyle dU=TdS-pdV+\sum _{i=1}^{k}\mu _{i}dN_{i}} Some important aspects of this equation should be noted: (Alberty 2001), (Balian 2003), (Callen 1985) The thermodynamic space has k+2 dimensions The differential quantities (U, S, V, Ni) are all extensive quantities. The coefficients of the differential quantities are intensive quantities (temperature, pressure, chemical potential). Each pair in the equation are known as a conjugate pair with respect to the internal energy. The intensive variables may be viewed as a generalized "force". An imbalance in the intensive variable will cause a "flow" of the extensive variable in a direction to counter the imbalance. The equation may be seen as a particular case of the chain rule. In other words: d U = ( ∂ U ∂ S ) V , { N i } d S + ( ∂ U ∂ V ) S , { N i } d V + ∑ i ( ∂ U ∂ N i ) S , V , { N j ≠ i } d N i {\displaystyle dU=\left({\frac {\partial U}{\partial S}}\right)_{V,\{N_{i}\}}dS+\left({\frac {\partial U}{\partial V}}\right)_{S,\{N_{i}\}}dV+\sum _{i}\left({\frac {\partial U}{\partial N_{i}}}\right)_{S,V,\{N_{j\neq i}\}}dN_{i}} from which the following identifications can be made: ( ∂ U ∂ S ) V , { N i } = T {\displaystyle \left({\frac {\partial U}{\partial S}}\right)_{V,\{N_{i}\}}=T} ( ∂ U ∂ V ) S , { N i } = − p {\displaystyle \left({\frac {\partial U}{\partial V}}\right)_{S,\{N_{i}\}}=-p} ( ∂ U ∂ N i ) S , V , { N j ≠ i } = μ i {\displaystyle \left({\frac {\partial U}{\partial N_{i}}}\right)_{S,V,\{N_{j\neq i}\}}=\mu _{i}} These equations are known as "equations of state" with respect to the internal energy. (Note - the relation between pressure, volume, temperature, and particle number which is commonly called "the equation of state" is just one of many possible equations of state.) If we know all k+2 of the above equations of state, we may reconstitute the fundamental equation and recover all thermodynamic properties of the system. The fundamental equation can be solved for any other differential and similar expressions can be found. 
For example, we may solve for d S {\displaystyle dS} and find that ( ∂ S ∂ V ) U , { N i } = p T {\displaystyle \left({\frac {\partial S}{\partial V}}\right)_{U,\{N_{i}\}}={\frac {p}{T}}} == Thermodynamic potentials == By the principle of minimum energy, the second law can be restated by saying that for a fixed entropy, when the constraints on the system are relaxed, the internal energy assumes a minimum value. This will require that the system be connected to its surroundings, since otherwise the energy would remain constant. By the principle of minimum energy, there are a number of other state functions which may be defined which have the dimensions of energy and which are minimized according to the second law under certain conditions other than constant entropy. These are called thermodynamic potentials. For each such potential, the relevant fundamental equation results from the same Second-Law principle that gives rise to energy minimization under restricted conditions: that the total entropy of the system and its environment is maximized in equilibrium. The intensive parameters give the derivatives of the environment entropy with respect to the extensive properties of the system. The four most common thermodynamic potentials are: After each potential is shown its "natural variables". These variables are important because if the thermodynamic potential is expressed in terms of its natural variables, then it will contain all of the thermodynamic relationships necessary to derive any other relationship. In other words, it too will be a fundamental equation. For the above four potentials, the fundamental equations are expressed as: d U ( S , V , N i ) = T d S − p d V + ∑ i μ i d N i {\displaystyle dU\left(S,V,{N_{i}}\right)=TdS-pdV+\sum _{i}\mu _{i}dN_{i}} d F ( T , V , N i ) = − S d T − p d V + ∑ i μ i d N i {\displaystyle dF\left(T,V,N_{i}\right)=-SdT-pdV+\sum _{i}\mu _{i}dN_{i}} d H ( S , p , N i ) = T d S + V d p + ∑ i μ i d N i {\displaystyle dH\left(S,p,N_{i}\right)=TdS+Vdp+\sum _{i}\mu _{i}dN_{i}} d G ( T , p , N i ) = − S d T + V d p + ∑ i μ i d N i {\displaystyle dG\left(T,p,N_{i}\right)=-SdT+Vdp+\sum _{i}\mu _{i}dN_{i}} The thermodynamic square can be used as a tool to recall and derive these potentials. == First order equations == Just as with the internal energy version of the fundamental equation, the chain rule can be used on the above equations to find k+2 equations of state with respect to the particular potential. If Φ is a thermodynamic potential, then the fundamental equation may be expressed as: d Φ = ∑ i ∂ Φ ∂ X i d X i {\displaystyle d\Phi =\sum _{i}{\frac {\partial \Phi }{\partial X_{i}}}dX_{i}} where the X i {\displaystyle X_{i}} are the natural variables of the potential. If γ i {\displaystyle \gamma _{i}} is conjugate to X i {\displaystyle X_{i}} then we have the equations of state for that potential, one for each set of conjugate variables. γ i = ∂ Φ ∂ X i {\displaystyle \gamma _{i}={\frac {\partial \Phi }{\partial X_{i}}}} Only one equation of state will not be sufficient to reconstitute the fundamental equation. All equations of state will be needed to fully characterize the thermodynamic system. Note that what is commonly called "the equation of state" is just the "mechanical" equation of state involving the Helmholtz potential and the volume: ( ∂ F ∂ V ) T , { N i } = − p {\displaystyle \left({\frac {\partial F}{\partial V}}\right)_{T,\{N_{i}\}}=-p} For an ideal gas, this becomes the familiar PV=NkBT. 
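As a concrete illustration of these relationships, the following sympy sketch starts from a fundamental relation U(S, V, N) of the monatomic-ideal-gas (Sackur–Tetrode) form, with an unspecified positive constant a standing in for the physical constants, reads off the equations of state, and then performs the Legendre transform to the Helmholtz potential to recover the familiar mechanical equation of state. It is an illustrative check, not part of the original article.

```python
import sympy as sp

S, V, N, T, kB, a = sp.symbols('S V N T k_B a', positive=True)

# Fundamental relation U(S, V, N) for a monatomic ideal gas ("a" lumps together
# h, m and numerical factors from the Sackur-Tetrode equation):
U = a * N**sp.Rational(5, 3) * V**sp.Rational(-2, 3) * sp.exp(2*S / (3*N*kB))

# Equations of state read off the fundamental equation:
T_expr = sp.diff(U, S)                    # T =  (dU/dS)_{V,N}
p_expr = -sp.diff(U, V)                   # p = -(dU/dV)_{S,N}
print("p*V - (2/3)*U =", sp.simplify(p_expr*V - sp.Rational(2, 3)*U))        # -> 0

# Legendre transform to the Helmholtz potential F(T, V, N) = U - T*S:
S_of_T = sp.solve(sp.Eq(T_expr, T), S)[0]                 # invert T(S, V, N) for S
F = sp.simplify((U - T*S).subs(S, S_of_T))

print("p = -(dF/dV)_T =", sp.simplify(-sp.diff(F, V)))                       # -> N*k_B*T/V
print("S consistency  :", sp.simplify(sp.expand_log(-sp.diff(F, T) - S_of_T)))  # -> 0
```

The last two lines confirm that the mechanical equation of state pV = N kB T and the entropy both follow from the single potential F expressed in its natural variables.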
=== Euler integrals === Because all of the natural variables of the internal energy U are extensive quantities, it follows from Euler's homogeneous function theorem that U = T S − p V + ∑ i μ i N i {\displaystyle U=TS-pV+\sum _{i}\mu _{i}N_{i}} Substituting into the expressions for the other main potentials we have the following expressions for the thermodynamic potentials: F = − p V + ∑ i μ i N i {\displaystyle F=-pV+\sum _{i}\mu _{i}N_{i}} H = T S + ∑ i μ i N i {\displaystyle H=TS+\sum _{i}\mu _{i}N_{i}} G = ∑ i μ i N i {\displaystyle G=\sum _{i}\mu _{i}N_{i}} Note that the Euler integrals are sometimes also referred to as fundamental equations. === Gibbs–Duhem relationship === Differentiating the Euler equation for the internal energy and combining with the fundamental equation for internal energy, it follows that: 0 = S d T − V d p + ∑ i N i d μ i {\displaystyle 0=SdT-Vdp+\sum _{i}N_{i}d\mu _{i}} which is known as the Gibbs-Duhem relationship. The Gibbs-Duhem is a relationship among the intensive parameters of the system. It follows that for a simple system with r components, there will be r+1 independent parameters, or degrees of freedom. For example, a simple system with a single component will have two degrees of freedom, and may be specified by only two parameters, such as pressure and volume for example. The law is named after Willard Gibbs and Pierre Duhem. == Second order equations == There are many relationships that follow mathematically from the above basic equations. See Exact differential for a list of mathematical relationships. Many equations are expressed as second derivatives of the thermodynamic potentials (see Bridgman equations). === Maxwell relations === Maxwell relations are equalities involving the second derivatives of thermodynamic potentials with respect to their natural variables. They follow directly from the fact that the order of differentiation does not matter when taking the second derivative. The four most common Maxwell relations are: The thermodynamic square can be used as a tool to recall and derive these relations. === Material properties === Second derivatives of thermodynamic potentials generally describe the response of the system to small changes. The number of second derivatives which are independent of each other is relatively small, which means that most material properties can be described in terms of just a few "standard" properties. For the case of a single component system, there are three properties generally considered "standard" from which all others may be derived: Compressibility at constant temperature or constant entropy β T or S = − 1 V ( ∂ V ∂ p ) T , N or S , N {\displaystyle \beta _{T{\text{ or }}S}=-{1 \over V}\left({\partial V \over \partial p}\right)_{T,N{\text{ or }}S,N}} Specific heat (per-particle) at constant pressure or constant volume c p or V = T N ( ∂ S ∂ T ) p or V {\displaystyle c_{p{\text{ or }}V}={\frac {T}{N}}\left({\partial S \over \partial T}\right)_{p{\text{ or }}V}~} Coefficient of thermal expansion α p = 1 V ( ∂ V ∂ T ) p {\displaystyle \alpha _{p}={\frac {1}{V}}\left({\frac {\partial V}{\partial T}}\right)_{p}} These properties are seen to be the three possible second derivative of the Gibbs free energy with respect to temperature and pressure. == Thermodynamic property relations == Properties such as pressure, volume, temperature, unit cell volume, bulk modulus and mass are easily measured. Other properties are measured through simple relations, such as density, specific volume, specific weight. 
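The statement that Maxwell relations follow from the symmetry of second derivatives can be checked mechanically. The sketch below uses a standard textbook form of the van der Waals Helmholtz potential (with the thermal wavelength left as an unspecified function Λ(T)) and verifies that (∂S/∂V)_T equals (∂p/∂T)_V; the specific free-energy expression is a conventional illustration rather than something taken from this article.

```python
import sympy as sp

T, V, N, kB, a, b = sp.symbols('T V N k_B a b', positive=True)
Lam = sp.Function('Lambda')(T)          # thermal wavelength, left unspecified

# Textbook van der Waals Helmholtz free energy (illustrative form):
F = -N*kB*T*(sp.log((V - N*b) / (N*Lam**3)) + 1) - a*N**2 / V

S = -sp.diff(F, T)                      # S = -(dF/dT)_{V,N}
p = -sp.diff(F, V)                      # p = -(dF/dV)_{T,N}

print("equation of state p =", sp.simplify(p))      # N*k_B*T/(V - N*b) - a*N**2/V**2
print("Maxwell relation (dS/dV)_T - (dp/dT)_V =",
      sp.simplify(sp.diff(S, V) - sp.diff(p, T)))   # -> 0
```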
Properties such as internal energy, entropy, enthalpy, and heat transfer are not so easily measured or determined through simple relations. Thus, we use more complex relations such as Maxwell relations, the Clapeyron equation, and the Mayer relation. Maxwell relations in thermodynamics are critical because they provide a means of simply measuring the change in properties of pressure, temperature, and specific volume, to determine a change in entropy. Entropy cannot be measured directly. The change in entropy with respect to pressure at a constant temperature is the same as the negative change in specific volume with respect to temperature at a constant pressure, for a simple compressible system. Maxwell relations in thermodynamics are often used to derive thermodynamic relations. The Clapeyron equation allows us to use pressure, temperature, and specific volume to determine an enthalpy change that is connected to a phase change. It is significant to any phase change process that happens at a constant pressure and temperature. One of the relations it resolved to is the enthalpy of vaporization at a provided temperature by measuring the slope of a saturation curve on a pressure vs. temperature graph. It also allows us to determine the specific volume of a saturated vapor and liquid at that provided temperature. In the equation below, L {\displaystyle L} represents the specific latent heat, T {\displaystyle T} represents temperature, and Δ v {\displaystyle \Delta v} represents the change in specific volume. d P d T = L T Δ v {\displaystyle {\frac {\mathrm {d} P}{\mathrm {d} T}}={\frac {L}{T\Delta v}}} The Mayer relation states that the specific heat capacity of a gas at constant volume is slightly less than at constant pressure. This relation was built on the reasoning that energy must be supplied to raise the temperature of the gas and for the gas to do work in a volume changing case. According to this relation, the difference between the specific heat capacities is the same as the universal gas constant. This relation is represented by the difference between Cp and Cv: Cp – Cv = R == See also == Thermodynamics Timeline of thermodynamics == Notes == == References == Alberty, R. A. (2001). "Use of Legendre transforms in chemical thermodynamics" (PDF). Pure Appl. Chem. 73 (8): 1349–1380. doi:10.1351/pac200173081349. Atkins, Peter; de Paula, Julio (2002). Physical Chemistry (7th ed.). W.H. Freeman and Company. ISBN 978-0-7167-3539-7. Chapters 1 - 10, Part 1: Equilibrium. Balian, Roger (2003). "Entropy – A Protean Concept" (PDF). Poincaré Seminar 2: 119-45. Archived from the original (PDF) on 2007-01-04. Retrieved 2006-12-16. Bridgman, P.W. (1914). "A Complete Collection of Thermodynamic Formulas". Phys. Rev. 3 (4): 273. Bibcode:1914PhRv....3..273B. doi:10.1103/PhysRev.3.273. Callen, Herbert B. (1985). Thermodynamics and an Introduction to Themostatistics (2nd ed.). New York: John Wiley & Sons. ISBN 978-0-471-86256-7. Landsberg, Peter T. (1990). Thermodynamics and Statistical Mechanics. New York: Dover Publications, Inc. (reprinted from Oxford University Press, 1978) Lewis, G.N.; Randall, M. (1961). Thermodynamics (2nd ed.). New York: McGraw-Hill Book Company. Schroeder, Daniel V. (2000). Thermal Physics. San Francisco: Addison Wesley Longman. ISBN 978-0-201-38027-9. Silbey, Robert J.; et al. (2004). Physical Chemistry (4th ed.). New Jersey: Wiley.
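A quick numerical illustration of the two relations just described is given below. The inputs are rounded, approximate steam-table values for water near 100 °C, used only to show the arithmetic; they should not be read as precise reference data.

```python
# Clapeyron equation: dP/dT = L / (T * dv), solved here for the latent heat L.
T = 373.15        # K, saturation temperature of water at 1 atm
dP_dT = 3.6e3     # Pa/K, approximate slope of the saturation curve near 100 C
dv = 1.672        # m^3/kg, approximate v_vapor - v_liquid for water at 100 C

L = T * dv * dP_dT
print(f"estimated latent heat of vaporization: {L/1e6:.2f} MJ/kg")   # about 2.2-2.3 MJ/kg

# Mayer relation for an ideal gas: cp - cv = R (molar heat capacities).
R = 8.314                      # J/(mol K)
cv = 1.5 * R                   # monatomic ideal gas
cp = cv + R
print(f"cp - cv = {cp - cv:.3f} J/(mol K), equal to R")
```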
Wikipedia/Thermodynamic_equations
A deep energy retrofit (DER) is an energy conservation project in an existing building that leads to an overall improvement in building performance. While there is no exact definition for a deep energy retrofit, it can be characterized as a whole-building analysis and construction process that aims to reduce on-site energy use by 50% or more using existing technologies, materials and construction practices. Reductions are calculated against baseline energy use using data from utility bills. Such a retrofit reaps multifold (energy and non-energy) benefits beyond energy cost savings, unlike conventional energy retrofit. It may also involve remodeling the building to achieve a harmony in energy, indoor air quality, durability, and thermal comfort. An integrated project delivery method is recommended for a deep energy retrofit project. An over-time approach in a deep energy retrofitting project provides a solution to the large upfront costs problem in all-at-once execution of the project. A deep energy retrofit is a whole-building analysis and construction process that achieves much larger energy savings than conventional energy retrofits. Deep energy retrofits can be applied to both residential and non-residential ("commercial") buildings. A deep energy retrofit typically results in energy savings of 30 percent or more, perhaps spread over several years, and may significantly improve the building value. == Climate Change == 82% of final energy consumption in buildings was supplied by fossil fuels in 2015. The energy-related CO2 emissions account for the environmental impact due to a building. The Global Status Report 2017 prepared by the International Energy Agency (IEA) for the Global Alliance for Buildings and Construction (GABC) highlights the significance of the buildings and construction sector in global energy consumption and related emissions again. Deep energy retrofits will assist in achieving the global climate goals laid down in the Paris Agreement. == Deep energy retrofits vs. Conventional energy retrofits == Conventional energy retrofits focus on isolated system upgrades (i.e. lighting and (HVAC) Heating, ventilation, and air conditioning equipment). While these retrofits are generally simple, fast and comparatively inexpensive, deep energy retrofits, while expensive, replace systems to be more energy efficient. Deep energy retrofits require a systems-thinking approach compared to the traditional approach followed for a conventional retrofit – home weatherization or typical home performance upgrade. Systems thinking evaluates the interactions between the different isolated components in the building. For example, Home Performance with ENERGY STAR offers a comprehensive, whole-house approach to improving a home's energy efficiency, comfort and safety while helping to reduce the energy costs by up to 20%. In addition to the efficiency measures taken for a building, a deep energy retrofit requires occupants' proactive role in energy conservation. This approach usually takes into account all the energy uses in the home, as well as the activities of the occupants. Deep energy retrofits achieve greater energy efficiency by taking a whole-building approach, addressing many systems at once. It is usually more economical and convenient to take this approach on buildings with overall poor efficiency performance, with multiple systems nearing the end of useful life, and perhaps other reasons. 
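As a minimal illustration of how the headline percentage is computed from utility bills, the sketch below compares twelve months of electricity use before and after a hypothetical retrofit; all of the bill figures are invented for the example.

```python
# Illustrative sketch: expressing retrofit savings as a percentage reduction
# against the pre-retrofit baseline taken from utility bills (made-up figures).
baseline_kwh = [1500, 1400, 1200, 900, 700, 600, 650, 700, 800, 1000, 1200, 1450]  # 12 months before
post_kwh     = [ 700,  650,  580, 450, 380, 330, 350, 370, 420,  500,  580,  690]  # 12 months after

reduction = 1 - sum(post_kwh) / sum(baseline_kwh)
print(f"annual baseline: {sum(baseline_kwh)} kWh")
print(f"annual post-retrofit: {sum(post_kwh)} kWh")
print(f"reduction: {reduction:.1%} ({'meets' if reduction >= 0.5 else 'below'} the 50% deep-retrofit threshold)")
```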
=== Occupant behavior === The overall success of a deep energy retrofit project can depend upon the inclusion of occupants in all phases of the project. The phases include project recruitment, project planning, and use. Attention to occupant behavior requires the project to focus on building owners' needs and wants as much as on the technical specifications. This helps ensure actual performance, cost-effectiveness, willingness to progress from a design to an actual implementation, and occupant satisfaction. Also, evidence suggests that building simulation models can become more accurate for a given structure when actual operational information, such as thermostat set-points and appliance usage, is included. In Europe, several European Union bank initiatives offer funding or assistance for retrofitting, including the Joint European Support for Sustainable Investment in City Areas initiative and the European Local Energy Assistance (ELENA) project. === Over-time Retrofit === An over-time retrofit is a retrofit project implemented step by step, at intervals, within a stipulated overall duration. Such an approach is usually preferred over an all-at-once approach for deep energy retrofits to reduce the burden of large upfront costs. Thus, an over-time retrofit can be a more viable option when there are capital constraints. Research in the United Kingdom has demonstrated that retrofits carried out over time can achieve levels of home performance equal to those achieved by all-at-once DERs, and select projects have been successful in the United States. The pros and cons of an over-time retrofit must be weighed. For example, an over-time retrofit project may accommodate the occupants' needs yet perform sub-optimally from a technical standpoint. It could also prove to be costlier. There is a lack of tools to execute over-time projects efficiently. ==== Strategies to increase success ==== Detailed planning is required from the beginning. It is recommended to include post-occupancy evaluation at each stage of implementation to deal with modifications required in future stages. Home performance should be tracked at each stage using utility bills or feedback devices. This helps in achieving the target set for energy consumption. Building envelope and passive design elements should be implemented before major heating, ventilation, and air conditioning (HVAC) and technology investments are made. This will help to reduce the load parameters for heating, ventilation, and air conditioning (HVAC) design. Technology investments should also come later, to take advantage of ongoing innovation. Over-time retrofits can be guided by these strategies to be successful. === Design and Construction Process === Deep energy retrofit projects have different phases governing them – pre-planning, project planning, construction, and test-out. For the design and construction process, a set of defined project needs, opportunities, goals and objectives should be created. This determines the overall project. Walker et al. provide design and construction process guidance which can be followed flexibly in deep energy retrofit projects in residential homes. == Energy efficiency measures == Cluett and Amann (2014) found the most commonly implemented efficiency measures in the US for residential buildings. They are broadly listed as follows. 
=== Building shell improvements === Insulation improvements, usually to the foundation walls/slabs, above grade walls, floors, roof, and attic surfaces that make up the thermal envelope Attention to air sealing, particularly in areas that are harder to address without being paired with improvements to the insulation shell === Upgrades to heating, cooling, and hot water systems === Upgrade to non-atmospheric vented combustion units that either vent directly outside or are electric only Upgrade to units that are correctly sized for the heating and cooling load demands of an altered building Improvement to or replacement of the existing distribution systems for heating, cooling, and/or hot water, including changes to ductwork, water piping, and wastewater heat recovery The deep energy retrofit specifications for various elements vary from climate to climate zones. == Process == A Level III energy audit, as defined by ASHRAE, is required in order to complete a commercial building deep energy retrofit. Also known as an investment grade audit, this type of energy audit features analysis of the interactions between efficiency strategies and their life cycle cost. Upon selection and implementation of measures, the energy savings are verified using the International Performance Measurement and Verification Protocol. == Tools == Deep energy retrofits make use of energy modelling tools that integrate with an organization's pro forma or other financial decision making mechanisms. Smartphone technologies have simplified the retrofit process as a number of audit and retrofit tools have appeared over the last several years to speed up retrofits and maximize efficiency in the field. == Ratings == A building that has undergone a deep energy retrofit is usually well positioned for a green building rating such as LEED. == Energy and Non-energy Benefits == There have been a number of studies to determine and quantify the benefits afforded to owners, tenants, and various other stakeholders from the successful completion of deep energy retrofits. The following tabulation by the Rocky Mountain Institute lays the efficiency measures undertaken in a deep energy retrofit project in correspondence to the building performance improvements and therefore, the quantifiable and non-quantifiable values generated from implementation of such a project. == Policy framework for retrofitting == A paradigm shift is needed to achieve the motive of climate change mitigation through retrofitting. This shift is underpinned by a greater need to propagate behavioral change rather than just the technology implementation. The framework should move from a project focus outlook towards an understanding of a larger scale execution that includes social awareness and interests. Hence, there is the need for laying down large scale retrofitting programs that support the idea of cities as active sites to inculcate newer technologies. === Global === "Buildings will also be particularly affected by the effects of climate change: storms, flooding and seepages, reduced durability of some building materials and increased risk of structure damage or collapse (e.g. from severe storms) could all decrease building lifetime, while increasing health-related risks such as deteriorating indoor climate." (The GABC Global Roadmap) To counter the global temperature-rise problem, a decision was reached at the Paris agreement in 2015, wherein member nations pledged to maintain temperatures below 2°C, compared to pre-industrial levels. 
The Global Status Report 2017 underscores the importance and potential of deep energy retrofitting among other solutions in achieving climate mitigation goals. Deep energy retrofitting is one of the solutions for reducing the carbon footprint of buildings. The report found that buildings & the construction industry together accounted for 36% of global final energy use & 39% of energy-related CO2 emissions. It calls for a 30% improvement, by 2030, in energy-use intensity (i.e. energy use per square meter) of the building sector, as compared to the 2015 levels, to achieve the Paris agreement goals. Though a growing number of countries have laid down policies to building energy performance improvements, a rapidly growing buildings sector, especially in developing countries, has offset those improvements. The report states that the efficiency improvements, including building envelope measures, represent nearly 2400 EJ in cumulative energy offsets to 2060 – more than all the final energy consumed by the global buildings sector over the last 20 years. The report asserts that an aggressive scaling up of deep building energy renovations of the existing global stock is one of the important steps ahead. It refers to the Global Alliance for Buildings and Construction (GABC) Global Roadmap for building sector sustainability. The GABC Global Roadmap intends to 'accelerate the improvement of existing buildings' performance' towards energy-efficient, zero GHG emissions and resilient buildings well before the end of the century taking the following steps globally: Significant increase of renovation operations including energy efficiency. Upgrade of the level of energy efficiency of each operation, in line with long-term standards. === USA === An analysis for a 50% dip in energy consumption & carbon emissions by the US by 2050 translates to comprehensive energy efficiency retrofits in more than half the existing buildings. The policy framework for retrofitting in the USA is directed at state and local levels. These efforts are supported by the national government. Hundreds of such programs exist, from basic energy audits and provision of financial rebates, to comprehensive programs that aim to optimize the entire house. Carine et al. summarize the elements present mostly in the best programs as: Retrofit consultancy for consumers. Marketing to boost the demand-supply in this industry. Training, certification of retrofit contractors. Provision of rebates, upfront discounts. Investment in R&D. Building-efficiency labelling. The Home Performance with Energy Star program is run by many bodies in the US, with the aid of the US Department of Energy. This project reports an average cost of $3500 per home retrofitted, with a distribution of 57%, 14%, 29% to homeowner incentives, contractor incentives, & administrative costs respectively. In the commercial domain, the Energy Star Program by the US Environmental Protection Agency aims to reduce the carbon footprint of buildings. According to this initiative, owners benchmark their buildings on a scale of 1–100. Those scoring 75 & above receive 'Energy Star' designation; while the others are encouraged to follow upgrade strategies for better performance. Nearly 500,000 properties, representing about half of US commercial building floor area has been benchmarked as of 2016, with a total 29,500 buildings receiving the 'Energy Star' rating to that point. Some major obstacles in its path of the retrofitting industry include: High initial investment. 
Complexity of retrofits. Lack of awareness regarding retrofitting. Shortage of affordable financing. === UK === ==== Retrofitting in the UK: Challenges and Policy Landscape ==== The UK faces parallel challenges in achieving significant carbon emission reductions through retrofitting, with its own unique policy landscape. Unlike the US, where retrofitting policies often operate at state levels, UK programs are heavily influenced by central government schemes like the Energy Company Obligation (ECO) and local initiatives driven by devolved administrations. Despite these efforts, retrofitting remains complex due to a high proportion of older housing stock, a lack of skilled labor, and fragmented delivery mechanisms. Key UK programs and organizations include: The Green Homes Grant, which provided homeowners with vouchers for energy efficiency measures before its closure. The ECO4 scheme, targeting vulnerable households and aiming to reduce fuel poverty through energy efficiency upgrades. Organizations like Her Retrofit Space (https://www.herretrofitspace.com), which specifically supports women professionals in the retrofit industry and promotes sustainable refurbishment through training, CPD events, and networking opportunities. ==== Elements of Effective Retrofitting Programs ==== Drawing parallels from international best practices, Carine et al. summarize the key elements often present in effective retrofitting programs: Retrofit consultancy for consumers (e.g., UK-based advisory services by Energy Saving Trust). Marketing to boost the demand-supply balance in the retrofit industry. Training and certification of retrofit professionals (e.g., PAS 2035 standards in the UK). Financial incentives and grants for homeowners and landlords. Investment in R&D for innovative materials and approaches. Building-efficiency labelling systems like the UK’s Energy Performance Certificate (EPC). The UK’s commitment to achieving net-zero emissions by 2050 necessitates a similarly ambitious approach to retrofitting. Research shows that comprehensive retrofitting across the UK’s building stock could yield significant reductions in energy consumption and carbon emissions while addressing issues like fuel poverty and poor indoor air quality. ==== Comparison of Costs and Frameworks ==== In the USA, programs like Home Performance with Energy Star report an average cost of $3500 per home retrofitted, allocated across homeowner incentives (57%), contractor incentives (14%), and administrative costs (29%). In contrast, UK schemes often struggle with higher upfront costs and administrative challenges, particularly in ensuring equitable access to financial support. ==== Commercial Retrofitting in the UK ==== Similar to the US Energy Star Program, the UK employs initiatives like the Better Buildings Partnership (BBP) and benchmarking tools to reduce the carbon footprint of commercial properties. However, adoption rates in the UK lag behind targets due to challenges like complex property ownership structures and limited incentives for private landlords. ==== Obstacles in the UK Retrofitting Industry ==== The UK retrofitting sector faces several barriers: High initial investment, particularly for deep retrofits. Complexity of retrofitting older and heritage properties common in the UK. Limited consumer awareness and understanding of retrofit benefits. Shortage of skilled professionals, especially women, highlighting the importance of initiatives like Her Retrofit Space in addressing this gap. Fragmented funding mechanisms and inconsistent policy support. 
The UK’s success in scaling retrofits will depend on addressing these challenges while fostering collaboration between government, industry, and advocacy groups. This will ensure a just transition to net-zero that includes diverse voices and expertise. == Notable case studies == === The Empire State Building === The Empire State Building has undergone a deep energy retrofit process that was completed in 2013. The project team, consisting of representatives from Johnson Controls, Rocky Mountain Institute, Clinton Climate Initiative, and Jones Lang LaSalle, achieved an annual energy use reduction of 38%, worth $4.4 million in annual savings. For example, the 6,500 windows were remanufactured onsite into superwindows which block heat but pass light. Air conditioning operating costs on hot days were reduced, and this saved $17 million of the project's capital cost immediately, partly funding other retrofitting. Receiving a gold Leadership in Energy and Environmental Design (LEED) rating in September 2011, the Empire State Building is the tallest LEED certified building in the United States. === Indianapolis City-County Building === The Indianapolis City-County Building underwent a deep energy retrofit process in 2011, which achieved an annual energy reduction of 46% and $750,000 in annual energy savings. The project team consisted of representatives from the Indianapolis-Marion County Building Authority, the Indianapolis Office of Sustainability, Rocky Mountain Institute, and Performance Services. == Market sizing == === United States === A business case study by The Rockefeller Foundation sizes the potential of the retrofitting market in the USA. It projects a $279 billion investment opportunity. The residential sector, followed by the commercial and institutional sectors, offers the largest business impact. Scaling up retrofitting efforts could create 3.3 million direct and indirect cumulative job years in the United States. == Criticism == === Cost-effectiveness === Cost effectiveness is achieved when the annual energy cost savings equal or exceed the annual loan costs. Their perfect balance is referred to as neutral net-monthly cost. Cost effectiveness can be a key driver in decision making related to deep energy retrofit projects. A study by Less et al. (2015) found that: The most cost-effective projects were the ones in poor condition – low-efficiency equipment and little insulation. Such buildings did not pursue deep retrofits. The least cost-effective projects were the ones with low pre-retrofit utility bills but aggressive retrofit plans. Such a project cannot be said to be a failure, because cost-effectiveness may not be the project goal. Less et al. (2015) found that on average, U.S. deep energy retrofits were cash-flow neutral on a monthly basis. However, variability was large, with some projects substantially reducing net-monthly costs and others substantially increasing net costs. Questionable cost-effectiveness is thus seen as a barrier to the widespread adoption of deep energy retrofits. === Energy savings and evaluation === Although many modeling tools are available to assess home energy savings, the inaccuracy of their predictions (compared to actual energy use measurements) limits their usefulness. Cluett et al. point out that pilot programs should monitor actual energy savings to evaluate project impact and help calibrate estimation tools. 
== See also == Rocky Mountain Institute Efficient energy use Quadruple glazing Leadership in Energy and Environmental Design Sustainable refurbishment Zero-energy building Zero heating building Northwest Energy Efficiency Alliance United States Department of Energy Energy Savings Performance Contract == References ==
Wikipedia/Deep_energy_retrofit
The clean air delivery rate (CADR) is a figure of merit that is the cubic feet per minute (CFM) of air that has had all the particles of a given size distribution removed. For air filters that have air flowing through them, it is the fraction of particles (of a particular size distribution) that have been removed from the air, multiplied by the air flow rate (in CFM) through the device. More precisely, it is the CFM of air in a 1,008-cubic-foot (28.5 m3) room that has had all the particles of a given size distribution removed from the air, over and above the rate at which the particles are naturally falling out of the air. Different filters have different abilities to remove different particle distributions, so three CADR's for a given device are typically measured: smoke, pollen, and dust. By combining the amount of airflow and particle removal efficiency, consumers are less likely to be misled by a high efficiency filter that is filtering a small amount of air, or by a high volume of air that is not being filtered very well. == Applicability == The CADR ratings were developed by the Association of Home Appliance Manufacturers (AHAM) and are measured according to a procedure specified by ANSI/AHAM AC-1. The ratings are recognized by retailers, manufacturers, standards organizations, and government bodies such as the EPA and the Federal Trade Commission. Whole house air cleaners are not covered by the CADR specification because the measurement is performed in a standard 1,008-cubic-foot (28.5 m3) room, the size of a typical house room, which has different airflow patterns than whole-house filters. Measurements are made with the filter running and not running, so particles that naturally fall out of the air are not being counted as part of filter's operation. The measurement only applies to particulate matter, not to gases. Any device or technology that removes particulate matter from the air can be tested for CADR numbers. Anyone with the necessary equipment can perform the ANSI/AHAM AC-1 measurements. The AHAM performs the tests for manufacturers who are paid members of AHAM which choose to use their service, allowing the manufacturer to display a seal that certifies AHAM performed the test. The CADR numbers reflect particulate matter remaining in the air, which has not been captured by the filter or other technology. Some low-efficiency filters employ ionization, which attaches a weak electrostatic charge to particulate matter, which can cause several smaller particles to group together resulting in a lower particle measurement count. Ionization can also cause particulate matter to attach to surfaces such as walls, and flooring, resulting in lower particulate counts in the air, but without having particulate matter permanently removed from the air. The rating is only valid for a given filter as used in a specific equipment design, and when the filter is brand new. The rating is based on a 20-minute test. Choosing a higher- or lower-efficiency filter than the unit was designed for may decrease its ability to filter air. An exception is when a high efficiency filter does not decrease the fan's airflow rate. This is usually achievable only with physically larger or thicker filters, which usually cannot be used in a unit designed for smaller filters. Filters with efficiencies higher than the original may slow the fan's airflow rate down, which may result in a lower CADR rating. 
Due to the measurement process, the CADR rating is intended for use only with equipment designed for residential spaces. Clean rooms, hospitals, and airplanes use high-efficiency HEPA filters and do not use a CADR rating, but instead may use MERV ratings. == Understanding the rating == The AHAM seal (usually found on the back of an air cleaner's box) lists three CADR numbers, one each for smoke, dust, and pollen. This order is from the smallest to largest particles and corresponds to the most dangerous to the least dangerous particles. The higher the CADR number, the more air the unit filters per minute for that particle size range. Consumers can use these ratings to compare air cleaners from the various manufacturers. The defined particle size ranges are 0.09–1.0 μm for smoke, 0.5–3 μm for dust, and 5–11 μm for pollen. AHAM recommends following their '2/3' rule. An air filter should be chosen for a room so that the value of its smoke CADR is equal to or greater than 2/3 the room area in units of square feet (valid for rooms up to 8 feet (2.4 m) in height). This recommendation is based on the assumption that the room will have air exchanged with other rooms at a rate of less than 1 room volume per hour, and that the customer desires at least 80% of the smoke particles removed from the air. For an 8-foot (2.4 m) high room, this means the room volume should be less than or equal to 12 times the CADR value. Much larger rooms can be effectively filtered if there is no air coming from the outside, and if there is no significant continuing source of particulates in the room. MERV 14 filters are capable of reducing smoke particles by approximately 80% when operating at the filter's design velocity, so a CADR smoke rating on a simple filtering unit that uses a MERV 14 filter will be approximately 0.80 times the fan flow rate in CFM. If the filtering unit does not mix the test room's air very well, it may receive a lower CADR measurement because it does not operate as efficiently as it should. If a filtering unit uses a MERV 12 filter that removes roughly 40% of the smoke particles, then it may still obtain a smoke CADR of 80 by filtering 200 cubic feet per minute instead of 100 CFM. Conversely, a 99.97% HEPA filter (MERV 17) that removes over 99.9% of the smoke particles needs to filter 80 cubic feet per minute to get a CADR of 80. This shows the CFM airflow of a unit is always equal to or greater than the CADR rating. Large particles naturally fall out of the air faster than small particles, but the CADR rating is based on how well the filter works over and above this effect. So CADR ratings for dust and pollen come out lower than would be expected by looking only at the filter's efficiency at removing large particles. This "bias" against the filter's efficiency at removing large particles is a relative bias in favor of the filter's ability to remove small (smoke) particles. Since smoke particles are the most difficult to filter (lower filter efficiency relative to large particles), the two effects largely cancel so that CADR ratings are usually similar for both small and large particles. A filter that is very good at removing smoke particles by using a slow fan or electrostatic effects will not get as good CADR numbers for pollen and dust because those particles will fall down and deposit on room surfaces during the test, before the filter has had a chance to collect them. 
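The worked figures above can be reproduced with a few lines of code. The sketch below treats CADR as single-pass removal efficiency times airflow and applies AHAM's 2/3 rule to estimate the largest room area each hypothetical unit suits; the three example units mirror the MERV 14, MERV 12 and HEPA cases described in the text.

```python
# Illustrative sketch of the two rules of thumb above (hypothetical example units).
def cadr(single_pass_efficiency, airflow_cfm):
    """Approximate smoke CADR as removal efficiency times airflow in CFM."""
    return single_pass_efficiency * airflow_cfm

def max_room_area_sqft(smoke_cadr):
    # 2/3 rule, rooms up to 8 ft high: CADR >= (2/3) * area  =>  area <= 1.5 * CADR
    return 1.5 * smoke_cadr

units = {
    "MERV 14-style filter, 100 CFM fan": cadr(0.80, 100),
    "MERV 12-style filter, 200 CFM fan": cadr(0.40, 200),
    "HEPA filter, 80 CFM fan":           cadr(0.999, 80),
}
for name, rating in units.items():
    print(f"{name}: smoke CADR ~{rating:.0f}, suits rooms up to ~{max_room_area_sqft(rating):.0f} sq ft")
```

All three hypothetical units come out with a smoke CADR of about 80 and a recommended room area of about 120 square feet, despite their very different filter efficiencies and airflows.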
For smoke-sized particles, a MERV 12 filter may function as well as a MERV 14 filter at half its rated air velocity (for smoke particles), and a MERV 14 may function like a MERV 12 at double its rated velocity. This is because smoke-sized particles depend on diffusion (Brownian motion) onto fibers as much as impaction, rather than completely on impaction like dust. A slower air speed gives diffusion more time for the particle to stick to the fiber or previously attached particles. Conversely, a higher filter speed may increase the collection of larger particles because impaction depends on the inertia of the particles. As a filter gets clogged from use, the fan air speed drops so that the effective CADR for smoke may actually rise rather than decrease, while the CADR for dust will be lower from the decrease in fan speed, especially because the particles fall out before they are filtered. If half the room's volume of air is exchanged with other rooms every hour, then HEPA filters are not more effective than dust spot 85% efficiency filters (roughly, MERV 13 or 3M's MPR 1900). The unit's fan speed will be the dominant factor if air from outside the room is coming in too quickly. == History == In the early 1980s, AHAM developed a method for measuring the clean air delivery rate for portable household electric room air cleaners. The resulting standard became an American National Standard in 1988 and was last revised in 2006. Known as ANSI/AHAM AC-1, it measures the air cleaner's ability to reduce tobacco smoke, dust and pollen particles in a room. It also includes a method for calculating the suggested room size. == See also == Association of Home Appliance Manufacturers Air purifier Air filter HEPA Air changes per hour == References == == External links == AHAM Verifide – AHAM Verifide certification info for consumers, manufacturers, and retailers. Indoor Air Facts No. 7 – Residential Air Cleaners – EPA publication. AHAM Verified Air Purifiers CADR Calculation – CADR = [(ACH x L x W x H) / 60] cfm
Wikipedia/Clean_air_delivery_rate
In mathematics, certain systems of partial differential equations are usefully formulated, from the point of view of their underlying geometric and algebraic structure, in terms of a system of differential forms. The idea is to take advantage of the way a differential form restricts to a submanifold, and the fact that this restriction is compatible with the exterior derivative. This is one possible approach to certain over-determined systems, for example, including Lax pairs of integrable systems. A Pfaffian system is specified by 1-forms alone, but the theory includes other types of example of differential system. To elaborate, a Pfaffian system is a set of 1-forms on a smooth manifold (which one sets equal to 0 to find solutions to the system). Given a collection of differential 1-forms α i , i = 1 , 2 , … , k {\displaystyle \textstyle \alpha _{i},i=1,2,\dots ,k} on an n {\displaystyle \textstyle n} -dimensional manifold ⁠ M {\displaystyle M} ⁠, an integral manifold is an immersed (not necessarily embedded) submanifold whose tangent space at every point p ∈ N {\displaystyle \textstyle p\in N} is annihilated by (the pullback of) each ⁠ α i {\displaystyle \textstyle \alpha _{i}} ⁠. A maximal integral manifold is an immersed (not necessarily embedded) submanifold i : N ⊂ M {\displaystyle i:N\subset M} such that the kernel of the restriction map on forms i ∗ : Ω p 1 ( M ) → Ω p 1 ( N ) {\displaystyle i^{*}:\Omega _{p}^{1}(M)\rightarrow \Omega _{p}^{1}(N)} is spanned by the α i {\displaystyle \textstyle \alpha _{i}} at every point p {\displaystyle p} of ⁠ N {\displaystyle N} ⁠. If in addition the α i {\displaystyle \textstyle \alpha _{i}} are linearly independent, then N {\displaystyle N} is (⁠ n − k {\displaystyle n-k} ⁠)-dimensional. A Pfaffian system is said to be completely integrable if M {\displaystyle M} admits a foliation by maximal integral manifolds. (Note that the foliation need not be regular; i.e. the leaves of the foliation might not be embedded submanifolds.) An integrability condition is a condition on the α i {\displaystyle \alpha _{i}} to guarantee that there will be integral submanifolds of sufficiently high dimension. == Necessary and sufficient conditions == The necessary and sufficient conditions for complete integrability of a Pfaffian system are given by the Frobenius theorem. One version states that if the ideal I {\displaystyle {\mathcal {I}}} algebraically generated by the collection of αi inside the ring Ω(M) is differentially closed, in other words d I ⊂ I , {\displaystyle d{\mathcal {I}}\subset {\mathcal {I}},} then the system admits a foliation by maximal integral manifolds. (The converse is obvious from the definitions.) == Example of a non-integrable system == Not every Pfaffian system is completely integrable in the Frobenius sense. For example, consider the following one-form on R3 ∖ (0,0,0): θ = z d x + x d y + y d z . {\displaystyle \theta =z\,dx+x\,dy+y\,dz.} If dθ were in the ideal generated by θ we would have, by the skewness of the wedge product θ ∧ d θ = 0. {\displaystyle \theta \wedge d\theta =0.} But a direct calculation gives θ ∧ d θ = ( x + y + z ) d x ∧ d y ∧ d z , {\displaystyle \theta \wedge d\theta =(x+y+z)\,dx\wedge dy\wedge dz,} which is a nonzero multiple of the standard volume form on R3. Therefore, there are no two-dimensional leaves, and the system is not completely integrable. 
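The wedge-product computation above can be verified with a short symbolic calculation. The sketch below encodes a 1-form on R3 by its coefficient vector; the coefficient of θ ∧ dθ on dx∧dy∧dz is then the dot product of that vector with its curl, a standard identity used here purely as an illustrative check.

```python
import sympy as sp

# A 1-form P dx + Q dy + R dz on R^3 is represented by (P, Q, R).  Then d(theta)
# has the components of curl(P, Q, R) in the basis (dy^dz, dz^dx, dx^dy), and the
# coefficient of theta ^ d(theta) on dx^dy^dz is (P, Q, R) . curl(P, Q, R).
x, y, z = sp.symbols('x y z')
P, Q, R = z, x, y                      # theta = z dx + x dy + y dz

curl = (sp.diff(R, y) - sp.diff(Q, z),
        sp.diff(P, z) - sp.diff(R, x),
        sp.diff(Q, x) - sp.diff(P, y))

coeff = sp.simplify(P*curl[0] + Q*curl[1] + R*curl[2])
print("theta ^ d(theta) =", coeff, "* dx^dy^dz")   # -> x + y + z, nonzero, so not integrable
```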
On the other hand, for the curve defined by x = t , y = c , z = e − t / c , t > 0 {\displaystyle x=t,\quad y=c,\quad z=e^{-t/c},\qquad t>0} then θ defined as above is 0, and hence the curve is easily verified to be a solution (i.e. an integral curve) for the above Pfaffian system for any nonzero constant c. == Examples of applications == In pseudo-Riemannian geometry, we may consider the problem of finding an orthogonal coframe θi, i.e., a collection of 1-forms that form a basis of the cotangent space at every point with ⟨ θ i , θ j ⟩ = δ i j {\displaystyle \langle \theta ^{i},\theta ^{j}\rangle =\delta ^{ij}} that are closed (dθi = 0, i = 1, 2, ..., n). By the Poincaré lemma, the θi locally will have the form dxi for some functions xi on the manifold, and thus provide an isometry of an open subset of M with an open subset of Rn. Such a manifold is called locally flat. This problem reduces to a question on the coframe bundle of M. Suppose we had such a closed coframe Θ = ( θ 1 , … , θ n ) . {\displaystyle \Theta =(\theta ^{1},\dots ,\theta ^{n}).} If we had another coframe ⁠ Φ = ( ϕ 1 , … , ϕ n ) {\displaystyle \Phi =(\phi ^{1},\dots ,\phi ^{n})} ⁠, then the two coframes would be related by an orthogonal transformation Φ = M Θ {\displaystyle \Phi =M\Theta } If the connection 1-form is ω, then we have d Φ = ω ∧ Φ {\displaystyle d\Phi =\omega \wedge \Phi } On the other hand, d Φ = ( d M ) ∧ Θ + M ∧ d Θ = ( d M ) ∧ Θ = ( d M ) M − 1 ∧ Φ . {\displaystyle {\begin{aligned}d\Phi &=(dM)\wedge \Theta +M\wedge d\Theta \\&=(dM)\wedge \Theta \\&=(dM)M^{-1}\wedge \Phi .\end{aligned}}} But ω = ( d M ) M − 1 {\displaystyle \omega =(dM)M^{-1}} is the Maurer–Cartan form for the orthogonal group. Therefore, it obeys the structural equation ⁠ d ω + ω ∧ ω = 0 {\displaystyle d\omega +\omega \wedge \omega =0} ⁠, and this is just the curvature of M: Ω = d ω + ω ∧ ω = 0. {\displaystyle \Omega =d\omega +\omega \wedge \omega =0.} After an application of the Frobenius theorem, one concludes that a manifold M is locally flat if and only if its curvature vanishes. == Generalizations == Many generalizations exist to integrability conditions on differential systems that are not necessarily generated by one-forms. The most famous of these are the Cartan–Kähler theorem, which only works for real analytic differential systems, and the Cartan–Kuranishi prolongation theorem. See § Further reading for details. The Newlander–Nirenberg theorem gives integrability conditions for an almost-complex structure. == Further reading == Bryant, Chern, Gardner, Goldschmidt, Griffiths, Exterior Differential Systems, Mathematical Sciences Research Institute Publications, Springer-Verlag, ISBN 0-387-97411-3 Olver, P., Equivalence, Invariants, and Symmetry, Cambridge, ISBN 0-521-47811-1 Ivey, T., Landsberg, J.M., Cartan for Beginners: Differential Geometry via Moving Frames and Exterior Differential Systems, American Mathematical Society, ISBN 0-8218-3375-8 Dunajski, M., Solitons, Instantons and Twistors, Oxford University Press, ISBN 978-0-19-857063-9
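The claim that θ vanishes along the curve given above can likewise be checked symbolically by pulling the form back: substituting x(t), y(t), z(t) and their derivatives into θ should give an identically zero coefficient of dt. This is an illustrative verification, not part of the original text.

```python
import sympy as sp

t, c = sp.symbols('t c', positive=True)
x, y, z = t, c, sp.exp(-t/c)           # the integral curve x = t, y = c, z = exp(-t/c)

# Pullback of theta = z dx + x dy + y dz along the curve (coefficient of dt):
pullback = z * sp.diff(x, t) + x * sp.diff(y, t) + y * sp.diff(z, t)
print("pullback of theta along the curve:", sp.simplify(pullback))   # -> 0
```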
Wikipedia/Integrability_conditions_for_differential_systems
In thermodynamics, the interpretation of entropy as a measure of energy dispersal has been exercised against the background of the traditional view, introduced by Ludwig Boltzmann, of entropy as a quantitative measure of disorder. The energy dispersal approach avoids the ambiguous term 'disorder'. An early advocate of the energy dispersal conception was Edward A. Guggenheim in 1949, using the word 'spread'. In this alternative approach, entropy is a measure of energy dispersal or spread at a specific temperature. Changes in entropy can be quantitatively related to the distribution or the spreading out of the energy of a thermodynamic system, divided by its temperature. Some educators propose that the energy dispersal idea is easier to understand than the traditional approach. The concept has been used to facilitate teaching entropy to students beginning university chemistry and biology. == Comparisons with traditional approach == The term "entropy" has been in use from early in the history of classical thermodynamics, and with the development of statistical thermodynamics and quantum theory, entropy changes have been described in terms of the mixing or "spreading" of the total energy of each constituent of a system over its particular quantized energy levels. Such descriptions have tended to be used together with commonly used terms such as disorder and randomness, which are ambiguous, and whose everyday meaning is the opposite of what they are intended to mean in thermodynamics. Not only does this situation cause confusion, but it also hampers the teaching of thermodynamics. Students were being asked to grasp meanings directly contradicting their normal usage, with equilibrium being equated to "perfect internal disorder" and the mixing of milk in coffee from apparent chaos to uniformity being described as a transition from an ordered state into a disordered state. The description of entropy as the amount of "mixedupness" or "disorder," as well as the abstract nature of the statistical mechanics grounding this notion, can lead to confusion and considerable difficulty for those beginning the subject. Even though courses emphasised microstates and energy levels, most students could not get beyond simplistic notions of randomness or disorder. Many of those who learned by practising calculations did not understand well the intrinsic meanings of equations, and there was a need for qualitative explanations of thermodynamic relationships. Arieh Ben-Naim recommends abandonment of the word entropy, rejecting both the 'dispersal' and the 'disorder' interpretations; instead he proposes the notion of "missing information" about microstates as considered in statistical mechanics, which he regards as commonsensical. == Description == Increase of entropy in a thermodynamic process can be described in terms of "energy dispersal" and the "spreading of energy," while avoiding mention of "disorder" except when explaining misconceptions. All explanations of where and how energy is dispersing or spreading have been recast in terms of energy dispersal, so as to emphasise the underlying qualitative meaning. In this approach, the second law of thermodynamics is introduced as "Energy spontaneously disperses from being localized to becoming spread out if it is not hindered from doing so," often in the context of common experiences such as a rock falling, a hot frying pan cooling down, iron rusting, air leaving a punctured tyre and ice melting in a warm room. 
Entropy is then depicted as a sophisticated kind of "before and after" yardstick – measuring how much energy is spread out over time as a result of a process such as heating a system, or how widely spread out the energy is after something happens in comparison with its previous state, in a process such as gas expansion or fluids mixing (at a constant temperature). The equations are explored with reference to the common experiences, with emphasis that in chemistry the energy that entropy measures as dispersing is the internal energy of molecules. The statistical interpretation is related to quantum mechanics in describing the way that energy is distributed (quantized) amongst molecules on specific energy levels, with all the energy of the macrostate always in only one microstate at one instant. Entropy is described as measuring the energy dispersal for a system by the number of accessible microstates, the number of different arrangements of all its energy at the next instant. Thus, an increase in entropy means a greater number of microstates for the final state than for the initial state, and hence more possible arrangements of a system's total energy at any one instant. Here, the greater 'dispersal of the total energy of a system' means the existence of many possibilities. Continuous movement and molecular collisions, visualised as being like bouncing balls blown by air as used in a lottery, can then lead on to showing the possibilities of many Boltzmann distributions and a continually changing "distribution of the instant", and on to the idea that when the system changes, dynamic molecules will have a greater number of accessible microstates. In this approach, all everyday spontaneous physical happenings and chemical reactions are depicted as involving some type of energy flow from being localized or concentrated to becoming spread out to a larger space, always to a state with a greater number of microstates. This approach does not work as well for very complex cases where the qualitative relation of energy dispersal to entropy change can be so inextricably obscured that it is moot. The entropy of mixing is one of these complex cases, when two or more different substances are mixed at the same temperature and pressure. There will be no net exchange of heat or work, so the entropy increase will be due to the literal spreading out of the motional energy of each substance in the larger combined final volume. Each component's energetic molecules become more separated from one another than they would be in the pure state, when they were colliding only with identical adjacent molecules, leading to an increase in its number of accessible microstates. == Current adoption == Variants of the energy dispersal approach have been adopted in a number of undergraduate chemistry texts, mainly in the United States. One respected text states: "The concept of the number of microstates makes quantitative the ill-defined qualitative concepts of 'disorder' and the 'dispersal' of matter and energy that are used widely to introduce the concept of entropy: a more 'disorderly' distribution of energy and matter corresponds to a greater number of micro-states associated with the same total energy." – Atkins & de Paula (2006), p. 81 == History == The concept of 'dissipation of energy' was used in Lord Kelvin's 1852 article "On a Universal Tendency in Nature to the Dissipation of Mechanical Energy." He distinguished between two types or "stores" of mechanical energy: "statical" and "dynamical."
He discussed how these two types of energy can change from one form to the other during a thermodynamic transformation. When heat is created by any irreversible process (such as friction), or when heat is diffused by conduction, mechanical energy is dissipated, and it is impossible to restore the initial state. Using the word 'spread', an early advocate of the energy dispersal concept was Edward Armand Guggenheim. In the mid-1950s, with the development of quantum theory, researchers began speaking about entropy changes in terms of the mixing or "spreading" of the total energy of each constituent of a system over its particular quantized energy levels, such as by the reactants and products of a chemical reaction. In 1984, the Oxford physical chemist Peter Atkins, in The Second Law, a book written for laypersons, presented a nonmathematical interpretation of what he called the "infinitely incomprehensible entropy" in simple terms, describing the Second Law of thermodynamics as "energy tends to disperse". His analogies included an imaginary intelligent being called "Boltzmann's Demon," who runs around reorganizing and dispersing energy, in order to show how the W in Boltzmann's entropy formula relates to energy dispersion. This dispersion is transmitted via atomic vibrations and collisions. Atkins wrote: "each atom carries kinetic energy, and the spreading of the atoms spreads the energy…the Boltzmann equation therefore captures the aspect of dispersal: the dispersal of the entities that are carrying the energy." (pp. 78–79) In 1997, John Wrigglesworth described spatial particle distributions as represented by distributions of energy states. According to the second law of thermodynamics, isolated systems will tend to redistribute the energy of the system into a more probable arrangement or a maximum probability energy distribution, i.e. from being concentrated to being spread out. By virtue of the first law of thermodynamics, the total energy does not change; instead, the energy tends to disperse over the space to which it has access. In his 1999 Statistical Thermodynamics, M.C. Gupta defined entropy as a function that measures how energy disperses when a system changes from one state to another. Other authors defining entropy in a way that embodies energy dispersal are Cecie Starr and Andrew Scott. In a 1996 article, the physicist Harvey S. Leff set out what he called "the spreading and sharing of energy." Another physicist, Daniel F. Styer, published an article in 2000 showing that "entropy as disorder" was inadequate. In a 2002 article in the Journal of Chemical Education, Frank L. Lambert argued that portraying entropy as "disorder" is confusing and should be abandoned. He has gone on to develop detailed resources for chemistry instructors, equating entropy increase with the spontaneous dispersal of energy, namely how much energy is spread out in a process, or how widely dispersed it becomes – at a specific temperature.
Found Phys. 37 (12). Springer: 1744–1766. Bibcode:2007FoPh...37.1744L. doi:10.1007/s10701-007-9163-3. S2CID 3485226. Retrieved 24 February 2016. === Texts using the energy dispersal approach === Atkins, P. W., Physical Chemistry for the Life Sciences. Oxford University Press, ISBN 0-19-928095-9; W. H. Freeman, ISBN 0-7167-8628-1 Benjamin Gal-Or, "Cosmology, Physics and Philosophy", Springer-Verlag, New York, 1981, 1983, 1987 ISBN 0-387-90581-2 Bell, J., et al., 2005. Chemistry: A General Chemistry Project of the American Chemical Society, 1st ed. W. H. Freeman, 820pp, ISBN 0-7167-3126-6 Brady, J.E., and F. Senese, 2004. Chemistry, Matter and Its Changes, 4th ed. John Wiley, 1256pp, ISBN 0-471-21517-1 Brown, T. L., H. E. LeMay, and B. E. Bursten, 2006. Chemistry: The Central Science, 10th ed. Prentice Hall, 1248pp, ISBN 0-13-109686-9 Ebbing, D.D., and S. D. Gammon, 2005. General Chemistry, 8th ed. Houghton-Mifflin, 1200pp, ISBN 0-618-39941-0 Ebbing, Gammon, and Ragsdale. Essentials of General Chemistry, 2nd ed. Hill, Petrucci, McCreary and Perry. General Chemistry, 4th ed. Kotz, Treichel, and Weaver. Chemistry and Chemical Reactivity, 6th ed. Moog, Spencer, and Farrell. Thermodynamics, A Guided Inquiry. Moore, J. W., C. L. Stanistski, P. C. Jurs, 2005. Chemistry, The Molecular Science, 2nd ed. Thompson Learning. 1248pp, ISBN 0-534-42201-2 Olmsted and Williams, Chemistry, 4th ed. Petrucci, Harwood, and Herring. General Chemistry, 9th ed. Silberberg, M.S., 2006. Chemistry, The Molecular Nature of Matter and Change, 4th ed. McGraw-Hill, 1183pp, ISBN 0-07-255820-2 Suchocki, J., 2004. Conceptual Chemistry 2nd ed. Benjamin Cummings, 706pp, ISBN 0-8053-3228-6 == External links == welcome to entropy site A large website by Frank L. Lambert with links to work on the energy dispersal approach to entropy. The Second Law of Thermodynamics (6)
Wikipedia/Entropy_(energy_dispersal)
An open system is a system that has external interactions. Such interactions can take the form of information, energy, or material transfers into or out of the system boundary, depending on the discipline which defines the concept. An open system is contrasted with the concept of an isolated system, which exchanges neither energy, matter, nor information with its environment. An open system is also known as a flow system. The concept of an open system was formalized within a framework that enabled one to interrelate the theory of the organism, thermodynamics, and evolutionary theory. This concept was expanded upon with the advent of information theory and subsequently systems theory. Today the concept has its applications in the natural and social sciences. In the natural sciences an open system is one whose border is permeable to both energy and mass. By contrast, a closed system is permeable to energy but not to matter. The definition of an open system assumes that there are supplies of energy that cannot be depleted; in practice, this energy is supplied from some source in the surrounding environment, which can be treated as infinite for the purposes of study. One type of open system is the radiant energy system, which receives its energy from solar radiation – an energy source that can be regarded as inexhaustible for all practical purposes. == Social sciences == In the social sciences an open system is a process that exchanges material, energy, people, capital and information with its environment. French/Greek philosopher Kostas Axelos argued that seeing the "world system" as inherently open (though unified) would solve many of the problems in the social sciences, including that of praxis (the relation of knowledge to practice), so that various social scientific disciplines would work together rather than create monopolies whereby the world appears only sociological, political, historical, or psychological. Axelos argues that theorizing a closed system contributes to making it closed, and is thus a conservative approach. The Althusserian concept of overdetermination (drawing on Sigmund Freud) posits that there are always multiple causes in every event. David Harvey uses this to argue that when systems such as capitalism enter a phase of crisis, it can happen through one of a number of elements, such as gender roles, the relation to nature/the environment, or crises in accumulation. Looking at the crisis in accumulation, Harvey argues that phenomena such as foreign direct investment, privatization of state-owned resources, and accumulation by dispossession act as necessary outlets when capital has overaccumulated in private hands and cannot circulate effectively in the marketplace. He cites the forcible displacement of Mexican and Indian peasants since the 1970s and the 1997 Asian financial crisis, involving the "hedge fund raiding" of national currencies, as examples of this. Structural functionalists such as Talcott Parsons and neofunctionalists such as Niklas Luhmann have incorporated systems theory to describe society and its components. The sociology of religion finds both open and closed systems within the field of religion.
== Thermodynamics == == Systems engineering == == See also == Business process Complex system Dynamical system Glossary of systems theory Ludwig von Bertalanffy Maximum power principle Non-equilibrium thermodynamics Open system (computing) Open System Environment Reference Model Openness Phantom loop Thermodynamic system == References == == Further reading == Khalil, E.L. (1995). Nonlinear thermodynamics and social science modeling: fad cycles, cultural development and identificational slips. The American Journal of Economics and Sociology, Vol. 54, Issue 4, pp. 423–438. Weber, B.H. (1989). Ethical Implications Of The Interface Of Natural And Artificial Systems. Delicate Balance: Technics, Culture and Consequences: Conference Proceedings for the Institute of Electrical and Electronics Engineers. == External links == OPEN SYSTEM, Principia Cybernetica Web, 2007.
Wikipedia/Surroundings_(thermodynamics)
Research concerning the relationship between the thermodynamic quantity entropy and both the origin and evolution of life began around the turn of the 20th century. In 1910 American historian Henry Adams printed and distributed to university libraries and history professors the small volume A Letter to American Teachers of History proposing a theory of history based on the second law of thermodynamics and on the principle of entropy. The 1944 book What is Life? by Nobel-laureate physicist Erwin Schrödinger stimulated further research in the field. In his book, Schrödinger originally stated that life feeds on negative entropy, or negentropy as it is sometimes called, but in a later edition corrected himself in response to complaints and stated that the true source is free energy. More recent work has restricted the discussion to Gibbs free energy because biological processes on Earth normally occur at a constant temperature and pressure, such as in the atmosphere or at the bottom of the ocean, but not across both over short periods of time for individual organisms. The quantitative application of entropy balances and Gibbs energy considerations to individual cells is one of the underlying principles of growth and metabolism. Ideas about the relationship between entropy and living organisms have inspired hypotheses and speculations in many contexts, including psychology, information theory, the origin of life, and the possibility of extraterrestrial life. == Early views == In 1863 Rudolf Clausius published his noted memoir On the Concentration of Rays of Heat and Light, and on the Limits of Its Action, wherein he outlined a preliminary relationship, based on his own work and that of William Thomson (Lord Kelvin), between living processes and his newly developed concept of entropy. Building on this, one of the first to speculate on a possible thermodynamic perspective of organic evolution was the Austrian physicist Ludwig Boltzmann. In 1875, building on the works of Clausius and Kelvin, Boltzmann reasoned: The general struggle for existence of animate beings is not a struggle for raw materials – these, for organisms, are air, water and soil, all abundantly available – nor for energy which exists in plenty in any body in the form of heat, but a struggle for [negative] entropy, which becomes available through the transition of energy from the hot sun to the cold earth. In 1876 American civil engineer Richard Sears McCulloh, in his Treatise on the Mechanical Theory of Heat and its Application to the Steam-Engine, which was an early thermodynamics textbook, states, after speaking about the laws of the physical world, that "there are none that are established on a firmer basis than the two general propositions of Joule and Carnot; which constitute the fundamental laws of our subject." McCulloh then goes on to show that these two laws may be combined in a single expression as follows: $S = \int \frac{dQ}{\tau}$, where $S$ is the entropy, $dQ$ is a differential amount of heat passed into a thermodynamic system, and $\tau$ is the absolute temperature. McCulloh then declares that the applications of these two laws, i.e.
what are currently known as the first law of thermodynamics and the second law of thermodynamics, are innumerable: When we reflect how generally physical phenomena are connected with thermal changes and relations, it at once becomes obvious that there are few, if any, branches of natural science which are not more or less dependent upon the great truths under consideration. Nor should it, therefore, be a matter of surprise that already, in the short space of time, not yet one generation, elapsed since the mechanical theory of heat has been freely adopted, whole branches of physical science have been revolutionized by it.: p. 267  McCulloh gives a few of what he calls the "more interesting examples" of the application of these laws in extent and utility. His first example is physiology, wherein he states that "the body of an animal, not less than a steamer, or a locomotive, is truly a heat engine, and the consumption of food in the one is precisely analogous to the burning of fuel in the other; in both, the chemical process is the same: that called combustion." He then incorporates a discussion of Antoine Lavoisier's theory of respiration with cycles of digestion, excretion, and perspiration, but then contradicts Lavoisier with recent findings, such as internal heat generated by friction, according to the new theory of heat, which, according to McCulloh, states that the "heat of the body generally and uniformly is diffused instead of being concentrated in the chest". McCulloh then gives an example of the second law, where he states that friction, especially in the smaller blood vessels, must develop heat. Undoubtedly, some fraction of the heat generated by animals is produced in this way. He then asks: "but whence the expenditure of energy causing that friction, and which must be itself accounted for?" To answer this question he turns to the mechanical theory of heat and goes on to loosely outline how the heart is what he calls a "force-pump", which receives blood and sends it to every part of the body, as discovered by William Harvey, and which "acts like the piston of an engine and is dependent upon and consequently due to the cycle of nutrition and excretion which sustains physical or organic life". It is likely that McCulloh modeled parts of this argument on that of the famous Carnot cycle. In conclusion, he summarizes his first and second law argument as such: Everything physical being subject to the law of conservation of energy, it follows that no physiological action can take place except with expenditure of energy derived from food; also, that an animal performing mechanical work must from the same quantity of food generate less heat than one abstaining from exertion, the difference being precisely the heat equivalent of that of work.: p. 270  == Negative entropy == In the 1944 book What is Life?, Austrian physicist Erwin Schrödinger, who in 1933 had won the Nobel Prize in Physics, theorized that life – contrary to the general tendency dictated by the second law of thermodynamics, which states that the entropy of an isolated system tends to increase – decreases or keeps constant its entropy by feeding on negative entropy. The problem of organization in living systems increasing despite the second law is known as the Schrödinger paradox. 
In his note to Chapter 6 of What is Life?, however, Schrödinger remarks on his usage of the term negative entropy: Let me say first, that if I had been catering for them [physicists] alone I should have let the discussion turn on free energy instead. It is the more familiar notion in this context. But this highly technical term seemed linguistically too near to energy for making the average reader alive to the contrast between the two things. This, Schrödinger argues, is what differentiates life from other forms of the organization of matter. In this direction, although life's dynamics may be argued to go against the tendency of the second law, life does not in any way conflict with or invalidate this law, because the principle that entropy can only increase or remain constant applies only to a closed system which is adiabatically isolated, meaning no heat can enter or leave, and the physical and chemical processes which make life possible do not occur in adiabatic isolation, i.e. living systems are open systems. Whenever a system can exchange either heat or matter with its environment, an entropy decrease of that system is entirely compatible with the second law. Schrödinger asked the question: "How does the living organism avoid decay?" The obvious answer is: "By eating, drinking, breathing and (in the case of plants) assimilating." While energy from nutrients is necessary to sustain an organism's order, Schrödinger also presciently postulated the existence of other molecules equally necessary for creating the order observed in living organisms: "An organism's astonishing gift of concentrating a stream of order on itself and thus escaping the decay into atomic chaos – of drinking orderliness from a suitable environment – seems to be connected with the presence of the aperiodic solids..." We now know that this "aperiodic" crystal is DNA, and that its irregular arrangement is a form of information. "The DNA in the cell nucleus contains the master copy of the software, in duplicate. This software seems to control by specifying an algorithm, or set of instructions, for creating and maintaining the entire organism containing the cell." DNA and other macromolecules determine an organism's life cycle: birth, growth, maturity, decline, and death. Nutrition is necessary but not sufficient to account for growth in size, as genetics is the governing factor. At some point, virtually all organisms normally decline and die even while remaining in environments that contain sufficient nutrients to sustain life. The controlling factor must be internal and not nutrients or sunlight acting as causal exogenous variables. Organisms inherit the ability to create unique and complex biological structures; it is unlikely for those capabilities to be reinvented or to be taught to each generation. Therefore, DNA must be operative as the prime cause in this characteristic as well. Applying Boltzmann's perspective of the second law, the change of state from a more probable, less ordered, and higher entropy arrangement to one of less probability, more order, and lower entropy (as is seen in biological ordering) calls for a function like that known of DNA. DNA's apparent information-processing function provides a resolution of the Schrödinger paradox posed by life and the entropy requirement of the second law. == Gibbs free energy and biological evolution == In recent years, the thermodynamic interpretation of evolution in relation to entropy has begun to use the concept of the Gibbs free energy, rather than entropy. 
This is because biological processes on Earth take place at roughly constant temperature and pressure, a situation in which the Gibbs free energy is an especially useful way to express the second law of thermodynamics. The Gibbs free energy is given by $\Delta G \equiv \Delta H - T\,\Delta S$, where $G$ is the Gibbs free energy, $H$ is the enthalpy passed into a thermodynamic system, $T$ is the absolute temperature of the system, and $S$ is the entropy. Exergy and Gibbs free energy are equivalent if the environment and system temperature are equivalent; otherwise, the Gibbs free energy will be less than the exergy (for systems with temperatures above ambient). The minimization of the Gibbs free energy is a form of the principle of minimum energy (minimum 'free' energy or exergy), which follows from the entropy maximization principle for closed systems. Moreover, the Gibbs free energy equation, in modified form, can be used for open systems, including situations where chemical potential terms are included in the energy balance equation. In a popular 1982 textbook, Principles of Biochemistry, noted American biochemist Albert Lehninger argued that the order produced within cells as they grow and divide is more than compensated for by the disorder they create in their surroundings in the course of growth and division. In short, according to Lehninger, "Living organisms preserve their internal order by taking from their surroundings free energy, in the form of nutrients or sunlight, and returning to their surroundings an equal amount of energy as heat and entropy." Similarly, according to the chemist John Avery, from his 2003 book Information Theory and Evolution, we find a presentation in which the phenomenon of life, including its origin and evolution, as well as human cultural evolution, has its basis in the background of thermodynamics, statistical mechanics, and information theory. The (apparent) paradox between the second law of thermodynamics and the high degree of order and complexity produced by living systems, according to Avery, has its resolution "in the information content of the Gibbs free energy that enters the biosphere from outside sources." In a study titled "Natural selection for least action" published in the Proceedings of the Royal Society A, Ville Kaila and Arto Annila of the University of Helsinki describe how the process of natural selection responsible for such local increase in order may be mathematically derived directly from the expression of the second law equation for connected non-equilibrium open systems. The second law of thermodynamics can be written as an equation of motion to describe evolution, showing how natural selection and the principle of least action can be connected by expressing natural selection in terms of chemical thermodynamics. In this view, evolution explores possible paths to level differences in energy densities and so increases entropy most rapidly. Thus, an organism serves as an energy transfer mechanism, and beneficial mutations allow successive organisms to transfer more energy within their environment.
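A minimal numerical sketch of the $\Delta G = \Delta H - T\,\Delta S$ bookkeeping used in this section is given below; the enthalpy and entropy changes are illustrative placeholders rather than data for any particular biochemical reaction.

```python
def gibbs_free_energy_change(delta_h, delta_s, temperature):
    """Return dG = dH - T*dS (J/mol); a negative value marks a process that is
    thermodynamically favourable at constant temperature and pressure."""
    return delta_h - temperature * delta_s

# Illustrative values: dH = -20 kJ/mol, dS = +50 J/(mol*K), T = 310 K
dG = gibbs_free_energy_change(-20_000.0, 50.0, 310.0)
print(dG)        # -35500.0 J/mol
print(dG < 0)    # True: free energy is released and can drive other processes
```

Run with a positive ΔG instead, the same arithmetic shows why an energetically unfavourable step must be coupled to a favourable one, a coupling that is taken up again for individual cells below.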
== Counteracting the second law tendency == Second-law analysis is valuable in scientific and engineering analysis in that it provides a number of benefits over energy analysis alone, including the basis for determining energy quality (or exergy content), understanding fundamental physical phenomena, improving performance evaluation and optimization, or in furthering our understanding of living systems. The second law describes a universal tendency towards disorder and uniformity, or internal and external equilibrium. This means that real, non-ideal processes cause entropy production. Entropy can also be transferred to or from a system as well by the flow or transfer of matter and energy. As a result, entropy production does not necessarily cause the entropy of the system to increase. In fact the entropy or disorder in a system can spontaneously decrease, such as an aircraft gas turbine engine cooling down after shutdown, or like water in a cup left outside in sub-freezing winter temperatures. In the latter, a relatively unordered liquid cools and spontaneously freezes into a crystalized structure of reduced disorder as the molecules ‘stick’ together. Although the entropy of the system decreases, the system approaches uniformity with, or becomes more thermodynamically similar to its surroundings. This is a category III process, referring to the four combinations of either entropy (S) up or down, and uniformity (Y) - between system and its environment – either up or down. The second law can be conceptually stated as follows: Matter and energy have the tendency to reach a state of uniformity or internal and external equilibrium, a state of maximum disorder (entropy). Real non-equilibrium processes always produce entropy, causing increased disorder in the universe, while idealized reversible processes produce no entropy and no process is known to exist that destroys entropy. The tendency of a system to approach uniformity may be counteracted, and the system may become more ordered or complex, by the combination of two things, a work or exergy source and some form of instruction or intelligence. Where ‘exergy’ is the thermal, mechanical, electric or chemical work potential of an energy source or flow, and ‘instruction or intelligence’, is understood in the context of, or characterized by, the set of processes that are within category IV. Consider as an example of a category IV process, robotic manufacturing and assembly of vehicles in a factory. The robotic machinery requires electrical work input and instructions, but when completed, the manufactured products have less uniformity with their surroundings, or more complexity (higher order) relative to the raw materials they were made from. Thus, system entropy or disorder decreases while the tendency towards uniformity between the system and its environment is counteracted. In this example, the instructions, as well as the source of work may be internal or external to the system, and they may or may not cross the system boundary. To illustrate, the instructions may be pre-coded and the electrical work may be stored in an energy storage system on-site. Alternatively, the control of the machinery may be by remote operation over a communications network, while the electric work is supplied to the factory from the local electric grid. In addition, humans may directly play, in whole or in part, the role that the robotic machinery plays in manufacturing. 
In this case, instructions may be involved, but intelligence is either directly responsible, or indirectly responsible, for the direction or application of work in such a way as to counteract the tendency towards disorder and uniformity. As another example, consider the refrigeration of water in a warm environment. Due to refrigeration, heat is extracted or forced to flow from the water. As a result, the temperature and entropy of the water decreases, and the system moves further away from uniformity with its warm surroundings. The important point is that refrigeration not only requires a source of work, it requires designed equipment, as well as pre-coded or direct operational intelligence or instructions to achieve the desired refrigeration effect. Observation is the basis for the understanding that category IV processes require both a source of exergy as well as a source or form of intelligence or instruction. With respect to living systems, sunlight provides the source of exergy for virtually all life on Earth, i.e. sunlight directly (for flora) or indirectly in food (for fauna). Note that the work potential or exergy of sunlight, with a certain spectral and directional distribution, will have a specific value that can be expressed as a percentage of the energy flow or exergy content. Like the Earth as a whole, living things use this energy, converting the energy to other forms (the first law), while producing entropy (the second law), and thereby degrading the exergy or quality of the energy. Sustaining life, or the growth of a seed, for example, requires continual arranging of atoms and molecules into elaborate assemblies required to duplicate living cells. This assembly in living organisms decreases uniformity and disorder, counteracting the universal tendency towards disorder and uniformity described by the second law. In addition to a high quality energy source, counteracting this tendency requires a form of instruction or intelligence, which is contained primarily in the DNA/RNA. In the absence of instruction or intelligence, high quality energy is not enough on its own to produce complex assemblies, such as a house. As an example of category I in contrast to IV, although having a lot of energy or exergy, a second tornado will never re-construct a town destroyed by a previous tornado, instead it increases disorder and uniformity (category I), the very tendency described by the second law. A related line of reasoning is that, even though improbable, over billions of years or trillions of chances, did life come about undirected, from non-living matter in the absence of any intelligence? Related questions someone can ask include; can humans with a supply of food (exergy) live without DNA/RNA, or can a house supplied with electricity be built in the forest without humans or a source of instruction or programming, or can a fridge run with electricity but without its functioning computer control boards? The second law guarantees, that if we build a house it will, over time, have the tendency to fall apart or tend towards a state of disorder. On the other hand, if on walking through a forest we discover a house, we likely conclude that somebody built it, rather than concluding the order came about randomly. We know that living systems, such as the structure and function of a living cell, or the process of protein assembly/folding, are exceedingly complex. 
Could life have come about without being directed by a source of intelligence – consequently, over time, resulting in such things as the human brain and its intelligence, computers, cities, the quality of love and the creation of music or fine art? The second law tendency towards disorder and uniformity, and the distinction of category IV processes as counteracting this natural tendency, offers valuable insight for us to consider in our search to answer these questions. == Entropy of individual cells == Entropy balancing An entropy balance for an open system – the change in its entropy over time – can be written as: $\frac{dS}{dt} = \frac{\dot{Q}}{T} + \sum_B S_B\,\dot{n}_B + \dot{S}_{gen}$. Assuming a steady-state system, roughly stable pressure-temperature conditions, and exchange through cell surfaces only, this expression can be rewritten to express the entropy balance for an individual cell as: $\frac{dS}{dt} = \frac{\dot{Q}}{T} + \sum_B S_B\,\dot{n}_B - S_X\,|\dot{n}_X| + \dot{S}_{gen} = 0$, where $\dot{Q}/T$ is the entropy flow associated with heat exchange with the environment, $S_B$ is the partial molar entropy of metabolite B, $S_X$ is the partial molar entropy of structures resulting from growth, $\dot{S}_{gen}$ is the rate of entropy production, and the $\dot{n}$ terms indicate rates of exchange with the environment. This equation can be adapted to describe the entropy balance of a cell, which is useful in reconciling the spontaneity of cell growth with the intuition that the development of complex structures must overall decrease entropy within the cell. From the second law, $\dot{S}_{gen} > 0$; due to internal organization resulting from growth, $S_X$ will be small. Metabolic processes force the sum of the remaining two terms to be less than zero through either a large rate of heat transfer or the export of high-entropy waste products. Both mechanisms prevent excess entropy from building up inside the growing cell; the latter is what Schrödinger described as feeding on negative entropy, or "negentropy". Implications for metabolism In fact it is possible for this "negentropy" contribution to be large enough that growth is fully endothermic, or actually removes heat from the environment. This type of metabolism, in which acetate, methanol, or a number of other carbon compounds are converted to methane (a high-entropy gas), is known as acetoclastic methanogenesis; one example is the metabolism of the anaerobic archaebacterium Methanosarcina barkeri. At the opposite extreme is the metabolism of the anaerobic thermophilic archaebacterium Methanobacterium thermoautotrophicum, for which the heat exported into the environment through $\mathrm{CO_2}$ fixation is high (~3730 kJ/C-mol). Generally, in metabolic processes, spontaneous catabolic processes that break down biomolecules provide the energy to drive non-spontaneous anabolic reactions that build organized biomass from high-entropy reactants.
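To make the steady-state balance above concrete, the sketch below solves it for the heat term, i.e. it asks how much heat a growing cell must exchange with its surroundings so that the terms sum to zero; all numerical values are placeholders chosen only to illustrate the sign logic, not measurements for any real cell.

```python
def required_heat_flow(T, s_metabolites, n_dot_metabolites,
                       s_biomass, n_dot_biomass, s_gen_rate):
    """Solve 0 = Q/T + sum(S_B * n_B) - S_X*|n_X| + S_gen for the heat term Q.
    A negative result means heat must be exported to the environment."""
    exchange = sum(s * n for s, n in zip(s_metabolites, n_dot_metabolites))
    return T * (s_biomass * abs(n_dot_biomass) - exchange - s_gen_rate)

# Placeholder values (entropies in J/(K*mol), flows in mol/h, T in K)
q_dot = required_heat_flow(T=310.0,
                           s_metabolites=[200.0, 180.0],       # substrate in, waste out
                           n_dot_metabolites=[0.010, -0.008],  # positive in, negative out
                           s_biomass=150.0, n_dot_biomass=0.002,
                           s_gen_rate=0.5)
print(q_dot)  # negative: heat leaves the cell, carrying the excess entropy away
```

With different placeholder numbers, for instance a large negative waste term, the same balance closes with little or no heat export, which is the "negentropy" route described above. How much of the underlying chemical driving force ends up invested in new biomass rather than dissipated is what the yield relation below quantifies.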
Therefore, biomass yield is determined by the balance between coupled catabolic and anabolic processes, where the relationship between these processes can be described by: $\Delta_r G_s = (1 - Y_{X/S})\,\Delta G_{catabolism} + Y_{X/S}\,\Delta G_{anabolism}$, where $\Delta_r G_s$ is the total reaction driving force (overall molar Gibbs energy), $Y_{X/S}$ is the biomass yield, $\Delta G_{catabolism}$ is the Gibbs energy of the catabolic reactions (negative), and $\Delta G_{anabolism}$ is the Gibbs energy of the anabolic reactions (positive). Organisms must maintain some optimal balance between $\Delta_r G_s$ and $Y_{X/S}$ to both avoid thermodynamic equilibrium ($\Delta_r G_s = 0$), at which biomass production would be theoretically maximized but metabolism would proceed at an infinitely slow rate, and the opposite limiting case at which growth is highly favorable ($\Delta_r G_s \ll 0$), but biomass yields are prohibitively low. This relationship is best described in general terms, and will vary widely from organism to organism. Because the terms corresponding to catabolic and anabolic contributions would be roughly balanced in the former scenario, this case represents the maximum amount of organized matter that can be produced in accordance with the second law of thermodynamics for a very generalized metabolic system. == Entropy and the origin of life == The second law of thermodynamics applied to the origin of life is a far more complicated issue than the further development of life, since there is no "standard model" of how the first biological lifeforms emerged, only a number of competing hypotheses. The problem is discussed within the context of abiogenesis, implying gradual pre-Darwinian chemical evolution. === Relationship to prebiotic chemistry === In 1924 Alexander Oparin suggested that sufficient energy for generating early life forms from non-living molecules was provided in a "primordial soup". The laws of thermodynamics impose some constraints on the earliest life-sustaining reactions that would have emerged and evolved from such a mixture. Essentially, to remain consistent with the second law of thermodynamics, self-organizing systems that are characterized by lower entropy values than equilibrium must dissipate energy so as to increase entropy in the external environment. One consequence of this is that low-entropy or high-chemical-potential intermediates cannot build up to very high levels if the reaction leading to their formation is not coupled to another chemical reaction that releases energy. These reactions often take the form of redox couples, which must have been provided by the environment at the time of the origin of life. In today's biology, many of these reactions require catalysts (or enzymes) to proceed, which frequently contain transition metals. This means identifying both redox couples and metals that are readily available in a given candidate environment for abiogenesis is an important aspect of prebiotic chemistry.
The idea that processes that can occur naturally in the environment and act to locally decrease entropy must be identified has been applied in examinations of phosphate's role in the origin of life, where the relevant setting for abiogenesis is an early Earth lake environment. One such process is the ability of phosphate to concentrate reactants selectively due to its localized negative charge. In the context of the alkaline hydrothermal vent (AHV) hypothesis for the origin of life, a framing of lifeforms as "entropy generators" has been suggested in an attempt to develop a framework for abiogenesis under alkaline deep sea conditions. Assuming life develops rapidly under certain conditions, experiments may be able to recreate the first metabolic pathway, as it would be the most energetically favorable and therefore likely to occur. In this case, iron sulfide compounds may have acted as the first catalysts. Therefore, within the larger framing of life as free energy converters, it would eventually be beneficial to characterize quantities such as entropy production and proton gradient dissipation rates quantitatively for origin of life relevant systems (particularly AHVs). === Other theories === The evolution of order, manifested as biological complexity, in living systems and the generation of order in certain non-living systems was proposed to obey a common fundamental principle called "the Darwinian dynamic". The Darwinian dynamic was formulated by first considering how microscopic order is generated in relatively simple non-biological systems that are far from thermodynamic equilibrium (e.g. tornadoes, hurricanes). Consideration was then extended to short, replicating RNA molecules assumed to be similar to the earliest forms of life in the RNA world. It was shown that the underlying order-generating processes in the non-biological systems and in replicating RNA are basically similar. This approach helps clarify the relationship of thermodynamics to evolution as well as the empirical content of Darwin's theory. In 2009 physicist Karo Michaelian published a thermodynamic dissipation theory for the origin of life in which the fundamental molecules of life – nucleic acids, amino acids, carbohydrates (sugars), and lipids – are considered to have been originally produced as microscopic dissipative structures (through Prigogine's dissipative structuring) as pigments at the ocean surface to absorb and dissipate into heat the UVC flux of solar light arriving at Earth's surface during the Archean, just as do organic pigments in the visible region today. These UVC pigments were formed through photochemical dissipative structuring from more common and simpler precursor molecules like HCN and H2O under the UVC flux of solar light. The thermodynamic function of the original pigments (fundamental molecules of life) was to increase the entropy production of the incipient biosphere under the solar photon flux and this, in fact, remains as the most important thermodynamic function of the biosphere today, but now mainly in the visible region where photon intensities are higher and biosynthetic pathways are more complex, allowing pigments to be synthesized from lower energy visible light instead of UVC light, which no longer reaches Earth's surface. Jeremy England developed a hypothesis of the physics of the origins of life, which he calls 'dissipation-driven adaptation'. The hypothesis holds that random groups of molecules can self-organize to more efficiently absorb and dissipate heat from the environment.
His hypothesis states that such self-organizing systems are an inherent part of the physical world. == Other types of entropy and their use in defining life == Like a thermodynamic system, an information system has an analogous concept to entropy called information entropy. Here, entropy is a measure of the increase or decrease in the novelty of information. Path flows of novel information show a familiar pattern. They tend to increase or decrease the number of possible outcomes in the same way that measures of thermodynamic entropy increase or decrease the state space. Like thermodynamic entropy, information entropy uses a logarithmic scale: –P(x) log P(x), where P is the probability of some outcome x. Reductions in information entropy are associated with a smaller number of possible outcomes in the information system. In 1984 Brooks and Wiley introduced the concept of species entropy as a measure of the sum of entropy reduction within species populations in relation to free energy in the environment. Brooks-Wiley entropy looks at three categories of entropy changes: information, cohesion and metabolism. Information entropy here measures the efficiency of the genetic information in recording all the potential combinations of heredity which are present. Cohesion entropy looks at the sexual linkages within a population. Metabolic entropy is the familiar chemical entropy used to compare the population to its ecosystem. The sum of these three is a measure of nonequilibrium entropy that drives evolution at the population level. A 2022 article by Helman in Acta Biotheoretica suggests identifying a divergence measure of these three types of entropies: thermodynamic entropy, information entropy and species entropy. Where these three are overdetermined, there will be a formal freedom that arises similar to how chirality arises from a minimum number of dimensions. Once there are at least four points for atoms, for example, in a molecule that has a central atom, left and right enantiomers are possible. By analogy, once a threshold of overdetermination in entropy is reached in living systems, there will be an internal state space that allows for ordering of systems operations. That internal ordering process is a threshold for distinguishing living from nonliving systems. == Entropy and the search for extraterrestrial life == In 1964 James Lovelock was among a group of scientists requested by NASA to make a theoretical life-detection system to look for life on Mars during the upcoming Viking missions. A significant challenge was determining how to construct a test that would reveal the presence of extraterrestrial life with significant differences from biology as we know it. In considering this problem, Lovelock asked two questions: "How can we be sure that the Martian way of life, if any, will reveal itself to tests based on Earth's life style?", as well as the more challenging underlying question: "What is life, and how should it be recognized?" Because these ideas conflicted with more traditional approaches that assume biological signatures on other planets would look much like they do on Earth, in discussing this issue with some of his colleagues at the Jet Propulsion Laboratory, he was asked what he would do to look for life on Mars instead. To this, Lovelock replied "I'd look for an entropy reduction, since this must be a general characteristic of life." 
This idea was perhaps better phrased as a search for sustained chemical disequilibria associated with low entropy states resulting from biological processes, and through further collaboration developed into the hypothesis that biosignatures would be detectable through examining atmospheric compositions. Lovelock determined through studying the atmosphere of Earth that this metric would indeed have the potential to reveal the presence of life. This had the consequence of indicating that Mars was most likely lifeless, as its atmosphere lacks any such anomalous signature. This work has been extended recently as a basis for biosignature detection in exoplanetary atmospheres. Essentially, the detection of multiple gases that are not typically in stable equilibrium with one another in a planetary atmosphere may indicate biotic production of one or more of them, in a way that does not require assumptions about the exact biochemical reactions extraterrestrial life might use or the specific products that would result. A terrestrial example is the coexistence of methane and oxygen, both of which would eventually deplete if not for continuous biogenic production. The amount of disequilibrium can be described by differencing observed and equilibrium state Gibbs energies for a given atmosphere composition; it can be shown that this quantity has been directly affected by the presence of life throughout Earth's history. Imaging of exoplanets by future ground and space based telescopes will provide observational constraints on exoplanet atmosphere compositions, to which this approach could be applied. But there is a caveat related to the potential for chemical disequilibria to serve as an anti-biosignature depending on the context. In fact, there was probably a strong chemical disequilibrium present on early Earth before the origin of life due to a combination of the products of sustained volcanic outgassing and oceanic water vapor. In this case, the disequilibrium was the result of a lack of organisms present to metabolize the resulting compounds. This imbalance would actually be decreased by the presence of chemotrophic life, which would remove these atmospheric gases and create more thermodynamic equilibrium prior to the advent of photosynthetic ecosystems. In 2013 Azua-Bustos and Vega argued that, disregarding the types of lifeforms that might be envisioned both on Earth and elsewhere in the Universe, all should share in common the attribute of decreasing their internal entropy at the expense of free energy obtained from their surroundings. As entropy allows the quantification of the degree of disorder in a system, any envisioned lifeform must have a higher degree of order than its immediate supporting environment. These authors showed that by using fractal mathematics analysis alone, they could readily quantify the degree of structural complexity difference (and thus entropy) of living processes as distinct entities separate from their similar abiotic surroundings. This approach may allow the future detection of unknown forms of life both in the Solar System and on recently discovered exoplanets based on nothing more than entropy differentials of complementary datasets (morphology, coloration, temperature, pH, isotopic composition, etc.). == Entropy in psychology == The notion of entropy as disorder has been transferred from thermodynamics to psychology by Polish psychiatrist Antoni Kępiński, who admitted being inspired by Erwin Schrödinger. 
In his theoretical framework devised to explain mental disorders (the information metabolism theory), the difference between living organisms and other systems was explained as the ability to maintain order. Contrary to inanimate matter, organisms maintain the particular order of their bodily structures and inner worlds which they impose onto their surroundings and forward to new generations. The life of an organism or the species ceases as soon as it loses that ability. Maintenance of that order requires continual exchange of information between the organism and its surroundings. In higher organisms, information is acquired mainly through sensory receptors and metabolised in the nervous system. The result is action – some form of motion, for example locomotion, speech, internal motion of organs, secretion of hormones, etc. The reactions of one organism become an informational signal to other organisms. Information metabolism, which allows living systems to maintain the order, is possible only if a hierarchy of value exists, as the signals coming to the organism must be structured. In humans that hierarchy has three levels, i.e. biological, emotional, and sociocultural. Kępiński explained how various mental disorders are caused by distortions of that hierarchy, and that the return to mental health is possible through its restoration. The idea was continued by Struzik, who proposed that Kępiński's information metabolism theory may be seen as an extension of Léon Brillouin's negentropy principle of information. In 2011, the notion of "psychological entropy" was reintroduced to psychologists by Hirsh et al. Similarly to Kępiński, these authors noted that uncertainty management is a critical ability for any organism. Uncertainty, arising due to the conflict between competing perceptual and behavioral affordances, is experienced subjectively as anxiety. Hirsh and his collaborators proposed that both the perceptual and behavioral domains may be conceptualized as probability distributions and that the amount of uncertainty associated with a given perceptual or behavioral experience can be quantified in terms of Claude Shannon's entropy formula. == Objections == Entropy is well defined for equilibrium systems, so objections to the extension of the second law and of entropy to biological systems, especially as it pertains to its use to support or discredit the theory of evolution, have been stated. Living systems and indeed many other systems and processes in the universe operate far from equilibrium. However, entropy is well defined much more broadly based on the probabilities of a system's states, whether or not the system is a dynamic one (for which equilibrium could be relevant). Even in those physical systems where equilibrium could be relevant, (1) living systems cannot persist in isolation, and (2) the second principle of thermodynamics does not require that free energy be transformed into entropy along the shortest path: living organisms absorb energy from sunlight or from energy-rich chemical compounds and finally return part of such energy to the environment as entropy (generally in the form of heat and low free-energy compounds such as water and carbon dioxide). The Belgian scientist Ilya Prigogine has, throughout all his research, contributed to this line of study and attempted to solve those conceptual limits, winning the Nobel prize in 1977. 
One of his major contributions was the concept of the dissipative system, which describes the thermodynamics of open systems in non-equilibrium states. == See also == Abiogenesis Adaptive system Complex systems Dissipative system Ecological entropy – a measure of biodiversity in the study of biological ecology Ectropy – a measure of the tendency of a dynamical system to do useful work and grow more organized Entropy (order and disorder) Extropy – a metaphorical term defining the extent of a living or organizational system's intelligence, functional order, vitality, energy, life, experience, and capacity and drive for improvement and growth Negentropy – a shorthand colloquial phrase for negative entropy Self-organization – in non-equilibrium thermodynamics, entropy and dissipative structures are connected to the phenomenon of self-organization (patterning, orderliness); living systems and their subsystems are dissipative structures with some degree of self-organization. == References == == Further reading == Schneider, E. and Sagan, D. (2005). Into the Cool: Energy Flow, Thermodynamics, and Life. University of Chicago Press, Chicago. ISBN 9780226739366 Kapusta, A. (2007). "Life circle, time and the self in Antoni Kępiński's conception of information metabolism". Filosofija. Sociologija. 18 (1): 46–51. La Cerra, P. (2003). The First Law of Psychology is the Second Law of Thermodynamics: The Energetic Evolutionary Model of the Mind and the Generation of Human Psychological Phenomena. Human Nature Review 3: 440–447. Moroz, A. (2011). The Common Extremalities in Biology and Physics. Elsevier Insights, NY. ISBN 978-0-12-385187-1 John R. Woodward (2010). Artificial life, the second law of thermodynamics, and Kolmogorov Complexity. 2010 IEEE International Conference on Progress in Informatics and Computing. Vol. 2, pp. 1266–1269. IEEE. François Roddier (2012). The Thermodynamics of Evolution. Paroles Editions. == External links == Thermodynamic Evolution of the Universe pi.physik.uni-bonn.de/~cristinz
Wikipedia/Entropy_and_life
Noise control or noise mitigation is a set of strategies to reduce noise pollution or to reduce the impact of that noise, whether outdoors or indoors. == Overview == The main areas of noise mitigation or abatement are: transportation noise control, architectural design, urban planning through zoning codes, and occupational noise control. Roadway noise and aircraft noise are the most pervasive sources of environmental noise. Social activities near entertainment venues that feature amplified sound and music may generate noise levels that consistently affect the health of populations living or working nearby, both indoors and outdoors, and such venues present significant challenges for effective noise mitigation strategies. Multiple techniques have been developed to address interior sound levels, many of which are encouraged by local building codes. Ideally, planners work with design engineers from the outset of a project to examine trade-offs between roadway design and architectural design. These techniques include design of exterior walls, party walls, and floor and ceiling assemblies; moreover, there are a host of specialized means for damping reverberation from special-purpose rooms such as auditoria, concert halls, entertainment and social venues, dining areas, audio recording rooms, and meeting rooms. Many of these techniques rely upon material science applications of constructing sound baffles or using sound-absorbing liners for interior spaces. Industrial noise control is a subset of interior architectural control of noise, with emphasis on specific methods of sound isolation from industrial machinery and for protection of workers at their task stations. Sound masking is the active addition of noise to reduce the annoyance of certain sounds, the opposite of soundproofing. == Standards, recommendations, and guidelines == Organizations each have their own standards, recommendations, guidelines, and directives specifying the noise levels workers may be exposed to before noise controls must be put into place. === Occupational Safety and Health Administration (OSHA) === OSHA's requirements state that when workers are exposed to noise levels above 90 A-weighted decibels (dBA) as an 8-hour time-weighted average (TWA), administrative controls and/or new engineering controls must be implemented in the workplace. OSHA also requires that impulse noises and impact noises be controlled to prevent these noises from exceeding 140 dB peak sound pressure level (SPL). === Mine Safety and Health Administration (MSHA) === MSHA requires that administrative and/or engineering controls be implemented in the workplace when miners are exposed to levels above 90 dBA TWA. If noise levels exceed 115 dBA, miners are required to wear hearing protection. MSHA, therefore, requires that noise levels be reduced below 115 dB TWA. Measuring noise levels for noise control decision making must integrate all noises from 90 dBA to 140 dBA. === Federal Railroad Administration (FRA) === The FRA recommends that worker exposure to noise be reduced when the exposure exceeds 90 dBA as an 8-hour TWA. Noise measurements must integrate all noises, including intermittent, continuous, impact, and impulse noises of 80 dBA to 140 dBA. === U.S. Department of Defense === The Department of Defense (DoD) suggests that noise levels be controlled primarily through engineering controls.
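The TWA figures used by OSHA, MSHA, and the FRA above are derived from a noise "dose" accumulated over a work shift. The following is a minimal sketch of that computation, assuming OSHA's conventions of a 90 dBA criterion level and a 5 dB exchange rate; the function names and the example exposure segments are illustrative, not taken from any of the regulations summarized here.

import math

def allowed_hours(level_dba, criterion=90.0, exchange_rate=5.0):
    # Permitted duration at a given level: 8 h at the criterion level,
    # halved for every increase of one exchange rate (5 dBA).
    return 8.0 / (2.0 ** ((level_dba - criterion) / exchange_rate))

def noise_dose(segments):
    # segments: list of (level in dBA, duration in hours) pairs.
    # The dose is the sum of actual/permitted durations, expressed in percent.
    return 100.0 * sum(hours / allowed_hours(level) for level, hours in segments)

def eight_hour_twa(dose_percent):
    # Relation between percent dose and the equivalent 8-hour TWA (OSHA-style).
    return 16.61 * math.log10(dose_percent / 100.0) + 90.0

# Hypothetical shift: 4 h at 88 dBA, 3 h at 92 dBA, 1 h at 97 dBA.
dose = noise_dose([(88, 4), (92, 3), (97, 1)])
print(round(dose, 1), round(eight_hour_twa(dose), 1))   # about 120.4 % and 91.3 dBA

A dose above 100 percent, equivalently a TWA above the criterion level, is the condition under which the administrative or engineering controls described in this section come into play.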
The DoD requires that all steady-state noises be reduced to levels below 85 dBA and that impulse noises be reduced below 140 dB peak SPL. TWA exposures are not considered for the DoD's requirements. === European Parliament and Council Directive === The European Parliament and Council Directive requires noise levels to be reduced or eliminated using administrative and engineering controls. This directive sets lower exposure action levels of 80 dBA for 8 hours with 135 dB peak SPL, along with upper exposure action levels of 85 dBA for 8 hours with 137 dB peak SPL. Exposure limits are 87 dBA for 8 hours with peak levels of 140 dB peak SPL. == Approaches to noise control == An effective model for noise control is the source, path, and receiver model by Bolt and Ingard. Hazardous noise can be controlled by reducing the noise output at its source, minimizing the noise as it travels along a path to the listener, and providing equipment to the listener or receiver to attenuate the noise. === Sources === A variety of measures aim to reduce hazardous noise at its source. Programs such as Buy Quiet and the National Institute for Occupational Safety and Health (NIOSH) Prevention through Design initiative promote research into and design of quiet equipment, and the renovation and replacement of older hazardous equipment with modern technologies. === Path === The principle of noise reduction through pathway modifications applies to the alteration of direct and indirect pathways for noise. Noise that travels across reflective surfaces, such as smooth floors, can be hazardous. Pathway alterations include adding sound-absorbing materials, such as foam, and erecting walls that act as sound barriers, so that less of the hazardous noise reaches the listener. Sound-dampening enclosures for loud equipment, and isolation chambers from which workers can remotely control equipment, can also be designed. These methods prevent sound from traveling along a path to the worker or other listeners. === Receiver === In the industrial or commercial setting, workers must comply with the appropriate hearing conservation program. Administrative controls, such as the restriction of personnel in noisy areas, prevent unnecessary noise exposure. Personal protective equipment that attenuates sound, such as foam ear plugs or ear muffs, provides a last line of defense for the listener. == Basic technologies == Sound insulation: prevents the transmission of noise by the introduction of a mass barrier. Common materials have high-density properties, such as brick, thick glass, concrete, and metal. Sound absorption: a porous material which acts as a ‘noise sponge’ by converting the sound energy into heat within the material. Common sound absorption materials include decoupled lead-based tiles, open cell foams, and fiberglass. Vibration damping: applicable to large vibrating surfaces. The damping mechanism works by extracting the vibration energy from the thin sheet and dissipating it as heat. A common material is sound-deadened steel. Vibration isolation: prevents transmission of vibration energy from a source to a receiver by introducing a flexible element or a physical break. Common vibration isolators are springs, rubber mounts, cork, etc. == Roadways == Studies on noise barriers have shown mixed results on their ability to effectively reduce noise pollution.
Electric and hybrid vehicles could reduce noise pollution, but only if those vehicles make up a high proportion of total vehicles on the road; even if traffic in an urban area reached a makeup of fifty percent electric vehicles, the overall noise reduction achieved would only be a few decibels and would be barely noticeable. Highway noise is today less affected by motor type, since the effects in higher speed are aerodynamic and tire noise related. Other contributions to the reduction of noise at the source are: improved tire tread designs for trucks in the 1970s, better shielding of diesel stacks in the 1980s, and local vehicle regulation of unmuffled vehicles. The most fertile areas for roadway noise mitigation are in urban planning decisions, roadway design, noise barrier design, speed control, surface pavement selection, and truck restrictions. Speed control is effective since the lowest sound emissions arise from vehicles moving smoothly at 30 to 60 kilometers per hour. Above that range, sound emissions double with every five miles per hour of speed. At the lowest speeds, braking and (engine) acceleration noise dominates. Selection of road surface pavement can make a difference of a factor of two in sound levels, for the speed regime above 30 kilometers per hour. Quieter pavements are porous with a negative surface texture and use small to medium-sized aggregates; the loudest pavements have transversely-grooved surfaces, positive surface textures, and larger aggregates. Surface friction and roadway safety are important considerations as well for pavement decisions. When designing new urban freeways or arterials, there are numerous design decisions regarding alignment and roadway geometrics. Use of a computer model to calculate sound levels has become standard practice since the early 1970s. In this way exposure of sensitive receptors to elevated sound levels can be minimized. An analogous process exists for urban mass transit systems and other rail transportation decisions. Early examples of urban rail systems designed using this technology were: Boston MBTA line expansions (1970s), San Francisco BART system expansion (1981), Houston METRORail system (1982), and the MAX Light Rail system in Portland, Oregon (1983). Noise barriers can be applied to existing or planned surface transportation projects. They are one of the most effective actions taken in retrofitting existing roadways and commonly can reduce adjacent land-use sound levels by up to ten decibels. A computer model is required to design the barrier since terrain, micrometeorology and other locale-specific factors make the endeavor a very complex undertaking. For example, a roadway in cut or strong prevailing winds can produce a setting where atmospheric sound propagation is unfavorable to any noise barrier. == Aircraft == As in the case of roadway noise, little progress has been made in quelling aircraft noise at the source, other than elimination of loud engine designs from the 1960s and earlier. Because of its velocity and volume, jet turbine engine exhaust noise defies reduction by any simple means. The most promising forms of aircraft noise abatement are through land planning, flight operations restrictions and residential soundproofing. Flight restrictions can take the form of preferred runway use, departure flight path and slope, and time-of-day restrictions. These tactics are sometimes controversial since they can impact aircraft safety, flying convenience and airline economics. 
In 1979, the US Congress authorized the FAA to devise technology and programs to attempt to insulate homes near airports. While this obviously does not aid the exterior environment, the program has been effective for residential and school interiors. Some of the airports at which the technology was applied early on were San Francisco International Airport, Seattle-Tacoma International Airport, John Wayne International Airport and San Jose International Airport in California. The underlying technology is a computer model which simulates the propagation of aircraft noise and its penetration into buildings. Variations in aircraft types, flight patterns and local meteorology can be analyzed along with benefits of alternative building retrofit strategies such as roof upgrading, window glazing improvement, fireplace baffling, caulking construction seams and other measures. The computer model allows cost-effectiveness evaluations of a host of alternative strategies. In Canada, Transport Canada prepares noise exposure forecasts (NEF) for each airport, using a computer model similar to that used in the US. Residential land development is discouraged within high impact areas identified by the forecast. === Acoustic liners === == Architectural solutions == Architectural acoustics noise control practices include interior sound reverberation reduction, inter-room noise transfer mitigation, and exterior building skin augmentation. In the case of construction of new (or remodeled) apartments, condominiums, hospitals, and hotels, many states and cities have stringent building codes with requirements of acoustical analysis, in order to protect building occupants. With regard to exterior noise, the codes usually require measurement of the exterior acoustic environment in order to determine the performance standard required for exterior building skin design. The architect can work with the acoustical scientist to arrive at the best cost-effective means of creating a quiet interior (normally 45 dBA). The most important elements of design of the building skin are usually: glazing (glass thickness, double pane design etc.), perforated metal (used internally or externally), roof material, caulking standards, chimney baffles, exterior door design, mail slots, attic ventilation ports, and mounting of through-the-wall air conditioners. Regarding sound generated inside the building, there are two principal types of transmission. Firstly, airborne sound travels through walls or floor and ceiling assemblies and can emanate from either human activities in adjacent living spaces or from mechanical noise within the building systems. Human activities might include voice, noise from amplified sound systems, or animal noise. Mechanical systems are elevator systems, boilers, refrigeration or air conditioning systems, generators and trash compactors. Aerodynamic sources include fans, pneumatics, and combustion. Noise control for aerodynamic sources include quiet air nozzles, pneumatic silencers and quiet fan technology. Since many mechanical sounds are inherently loud, the principal design element is to require the wall or ceiling assembly to meet certain performance standards, (typically Sound transmission class of 50), which allows considerable attenuation of the sound level reaching occupants. The second type of interior sound is called Impact Insulation Class (IIC) transmission. This effect arises not from airborne transmission, but rather from the transmission of sound through the building itself. 
The most common perception of IIC noise is from the footfall of occupants in living spaces above. Low-frequency noise is transferred easily through the ground and buildings. This type of noise is more difficult to abate, but consideration must be given to isolating the floor assembly above or hanging the lower ceiling on resilient channel. Both of the transmission effects noted above may emanate either from building occupants or from building mechanical systems such as elevators, plumbing systems or heating, ventilating and air conditioning units. In some cases, it is merely necessary to specify the best available quieting technology in selecting such building hardware. In other cases, shock mounting of systems to control vibration may be in order. In the case of plumbing systems, there are specific protocols developed, especially for water supply lines, to create isolation clamping of pipes within building walls. In the case of central air systems, it is important to baffle any ducts that could transmit sound between different building areas. Designing special-purpose rooms has more exotic challenges, since these rooms may have requirements for unusual features such as concert performance, sound studio recording, lecture halls. In these cases reverberation and reflection must be analyzed in order to not only quiet the rooms, but to prevent echo effects from occurring. In these situations special sound baffles and sound absorptive lining materials may be specified to dampen unwanted effects. == Post-architectural solutions == Acoustical wall and ceiling panels are a common commercial and residential solution for noise control in already-constructed buildings. Acoustic panels may be constructed of a variety of materials, though commercial acoustic applications will frequently be composed of fiberglass or mineral wool-based acoustic substrates. For example, Mineral fiberboard is a commonly used acoustical substrate, and commercial thermal insulations, such as those used in the insulation of boiler tanks, are frequently repurposed for noise-controlling acoustic use based on their effectiveness at minimizing reverberations. The ideal acoustical panels are those without a face or finish material that could interfere with the performance of the acoustical infill, but aesthetic and safety concerns typically lead to fabric coverings or other finishing materials to minimize impedance. Panel finishings are occasionally made of a porous configuration of wood or metal. The effectiveness of post-construction acoustic treatment is limited by the amount of space able to be allocated to acoustic treatment, and so on-site acoustical wall panels are frequently made to conform to the shape of the preexisting space. This is done by "framing" the perimeter track into shape, infilling the acoustical substrate and then stretching and tucking the fabric into the perimeter frame system. On-site wall panels can be constructed to work around door frames, baseboard, or any other intrusion. Large panels (generally greater than 50 feet) can be created on walls and ceilings with this method. Double-glazed and thicker windows can also prevent sound transmission from the outdoors. === Industrial === Industrial noise is traditionally associated with manufacturing settings where industrial machinery produces intense sound levels, often upwards of 85 decibels. 
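Reverberation in the special-purpose rooms and panel-treated spaces discussed above is commonly characterized by the RT60 reverberation time. The following is a minimal sketch using the classical Sabine equation; the room dimensions and absorption coefficients are made-up illustrative values rather than data from any project mentioned in this article.

def sabine_rt60(volume_m3, surfaces):
    # surfaces: list of (area in m^2, absorption coefficient) pairs.
    # Sabine's equation: RT60 = 0.161 * V / A, where A is the total absorption in m^2 sabins.
    total_absorption = sum(area * alpha for area, alpha in surfaces)
    return 0.161 * volume_m3 / total_absorption

# Hypothetical 10 m x 8 m x 3 m room, before and after adding absorptive panels.
volume = 10 * 8 * 3
bare = [(2 * (10 * 3 + 8 * 3), 0.05), (10 * 8, 0.05), (10 * 8, 0.15)]  # walls, ceiling, floor
treated = bare + [(40, 0.85)]                                          # 40 m^2 of acoustic panels
print(round(sabine_rt60(volume, bare), 2), round(sabine_rt60(volume, treated), 2))  # about 1.81 and 0.70 s

This kind of reduction from a modest panel area illustrates why absorptive treatment is usually the first recourse in rooms that are too reverberant.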
While this circumstance is the most dramatic, there are many other work environments where sound levels may lie in the range of 70 to 75 decibels, entirely composed of office equipment, music, public address systems, and even exterior noise intrusion. Either type of environment may result in noise health effects if the sound intensity and exposure time is too great. In the case of industrial equipment, the most common techniques for noise protection of workers consist of shock mounting source equipment, creation of acrylic glass or other solid barriers, and provision of ear protection equipment. In certain cases the machinery itself can be re-designed to operate in a manner less prone to produce grating, grinding, frictional, or other motions that induce sound emissions. In recent years, Buy Quiet programs and initiatives have arisen in an effort to combat occupational noise exposures. These programs promote the purchase of quieter tools and equipment and encourage manufacturers to design quieter equipment. In the case of more conventional office environments, the techniques in architectural acoustics discussed above may apply. Other solutions may involve researching the quietest models of office equipment, particularly printers and photocopy machines. Impact printers and other equipment were often fitted with "acoustic hoods", enclosures to reduce emitted noise. One source of annoying, if not loud, sound level emissions are lighting fixtures (notably older fluorescent globes). These fixtures can be retrofitted or analyzed to see whether over-illumination is present, a common office environment issue. If over-illumination is occurring, de-lamping or reduced light bank usage may apply. Photographers can quieten noisy still cameras on a film set using sound blimps. === Commercial === Reductions in cost of technology have allowed noise control technology to be used not only in performance facilities and recording studios, but also in noise-sensitive small businesses such as restaurants. Acoustically absorbent materials such as fiberglass duct liner, wood fiber panels and recycled denim jeans serve as artwork-bearing canvasses in environments in which aesthetics are important. Using a combination of sound absorption materials, arrays of microphones and speakers, and a digital processor, a restaurant operator can use a tablet computer to selectively control noise levels at different places in the restaurant: the microphone arrays pick up sound and send it to the digital processor, which controls the speakers to output sound signals on command. === Residential === Post-construction residential acoustic treatment throughout the 20th century was only commonly the practice of music-listening enthusiasts. However, developments in home recording technology and fidelity have led to a drastic increase in the spread and popularity of residential acoustic treatment in the pursuit of home recording fidelity and accuracy. A large secondary market of homemade and home use acoustic panels, bass trap, and similar constructed products has developed resulting from this demand, with many small companies and individuals wrapping industrial and commercial-grade insulations in fabric for use in home recording studios, theatre rooms, and music practice spaces. == Urban planning == Communities may use zoning codes to isolate noisy urban activities from areas that should be protected from such unhealthy exposures and to establish noise standards in areas that may not be conducive to such isolation strategies. 
Because low-income neighborhoods are often at greater risk of noise pollution, the establishment of such zoning codes is often an environmental justice issue. Mixed-use areas present especially difficult conflicts that require special attention to the need to protect people from the harmful effects of noise pollution. Noise is generally one consideration in an environmental impact statement, if applicable (such as transportation system construction). == See also == General: Noise pollution Health effects from noise == References == == External links == International Commission for Acoustics Acoustical Society of America European Acoustics Association American Institute of Architects National Council of Acoustical Consultants Business & Institutional Furniture Manufacturer's Association 'City of Melbourne' Citysounds guide to urban residential design Noise Abatement Society Noise control case studies Industrial noise control case studies Multiple sound isolation research articles for beginners NIOSH Buy Quiet Topic Page OSHA Technical manual; Chapter III: Noise
Wikipedia/Noise_control
The zeroth law of thermodynamics is one of the four principal laws of thermodynamics. It provides an independent definition of temperature without reference to entropy, which is defined in the second law. The law was established by Ralph H. Fowler in the 1930s, long after the first, second, and third laws had been widely recognized. The zeroth law states that if two thermodynamic systems are both in thermal equilibrium with a third system, then the two systems are in thermal equilibrium with each other. Two systems are said to be in thermal equilibrium if they are linked by a wall permeable only to heat, and they do not change over time. Another formulation by James Clerk Maxwell is "All heat is of the same kind". Another statement of the law is "All diathermal walls are equivalent".: 24, 144  The zeroth law is important for the mathematical formulation of thermodynamics. It makes the relation of thermal equilibrium between systems an equivalence relation, which can represent equality of some quantity associated with each system. A quantity that is the same for two systems, if they can be placed in thermal equilibrium with each other, is a scale of temperature. The zeroth law is needed for the definition of such scales, and justifies the use of practical thermometers.: 56  == Equivalence relation == A thermodynamic system is by definition in its own state of internal thermodynamic equilibrium, that is to say, there is no change in its observable state (i.e. macrostate) over time and no flows occur in it. One precise statement of the zeroth law is that the relation of thermal equilibrium is an equivalence relation on pairs of thermodynamic systems.: 52  In other words, the set of all systems each in its own state of internal thermodynamic equilibrium may be divided into subsets in which every system belongs to one and only one subset, and is in thermal equilibrium with every other member of that subset, and is not in thermal equilibrium with a member of any other subset. This means that a unique "tag" can be assigned to every system, and if the "tags" of two systems are the same, they are in thermal equilibrium with each other, and if different, they are not. This property is used to justify the use of empirical temperature as a tagging system. Empirical temperature provides further relations of thermally equilibrated systems, such as order and continuity with regard to "hotness" or "coldness", but these are not implied by the standard statement of the zeroth law. If it is defined that a thermodynamic system is in thermal equilibrium with itself (i.e., thermal equilibrium is reflexive), then the zeroth law may be stated as follows: If a body C, be in thermal equilibrium with two other bodies, A and B, then A and B are in thermal equilibrium with one another. This statement asserts that thermal equilibrium is a left-Euclidean relation between thermodynamic systems. If we also define that every thermodynamic system is in thermal equilibrium with itself, then thermal equilibrium is also a reflexive relation. Binary relations that are both reflexive and Euclidean are equivalence relations. Thus, again implicitly assuming reflexivity, the zeroth law is therefore often expressed as a right-Euclidean statement: If two systems are in thermal equilibrium with a third system, then they are in thermal equilibrium with each other. 
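Writing a ≈ b for "a is in thermal equilibrium with b", the logic of the statements above can be sketched in a few lines; this is a standard fact about binary relations rather than anything specific to thermodynamics.

\text{Assume reflexivity } (a \approx a) \text{ and the right-Euclidean property } \big[(a \approx b) \wedge (a \approx c)\big] \Rightarrow (b \approx c).

\text{Symmetry: from } a \approx b \text{ and } a \approx a, \text{ the Euclidean property (with } c := a\text{) gives } b \approx a.

\text{Transitivity: from } a \approx b \text{ and } b \approx c, \text{ symmetry gives } b \approx a; \text{ applying the Euclidean property to } b \approx a \text{ and } b \approx c \text{ gives } a \approx c.

A reflexive, right-Euclidean relation is therefore symmetric and transitive, i.e. an equivalence relation, which is what the temperature-as-tag construction described below relies on.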
One consequence of an equivalence relationship is that the equilibrium relationship is symmetric: If A is in thermal equilibrium with B, then B is in thermal equilibrium with A. Thus, the two systems are in thermal equilibrium with each other, or they are in mutual equilibrium. Another consequence of equivalence is that thermal equilibrium is described as a transitive relation:: 56  If A is in thermal equilibrium with B and if B is in thermal equilibrium with C, then A is in thermal equilibrium with C. A reflexive, transitive relation does not guarantee an equivalence relationship. For the above statement to be true, both reflexivity and symmetry must be implicitly assumed. It is the Euclidean relationships which apply directly to thermometry. An ideal thermometer is a thermometer which does not measurably change the state of the system it is measuring. Assuming that the unchanging reading of an ideal thermometer is a valid tagging system for the equivalence classes of a set of equilibrated thermodynamic systems, then the systems are in thermal equilibrium if the thermometer gives the same reading for each system. If the systems are then thermally connected, no subsequent change in the state of either one can occur. If the readings are different, then thermally connecting the two systems causes a change in the states of both systems. The zeroth law provides no information regarding this final reading. == Foundation of temperature == Nowadays, there are two nearly separate concepts of temperature, the thermodynamic concept and that of the kinetic theory of gases and other materials. The zeroth law belongs to the thermodynamic concept, but this is no longer the primary international definition of temperature. The current primary international definition of temperature is in terms of the kinetic energy of freely moving microscopic particles such as molecules, related to temperature through the Boltzmann constant k B {\displaystyle k_{\mathrm {B} }} . The present article is about the thermodynamic concept, not about the kinetic theory concept. The zeroth law establishes thermal equilibrium as an equivalence relationship. An equivalence relationship on a set (such as the set of all systems each in its own state of internal thermodynamic equilibrium) divides that set into a collection of distinct subsets ("disjoint subsets") where any member of the set is a member of one and only one such subset. In the case of the zeroth law, these subsets consist of systems which are in mutual equilibrium. This partitioning allows any member of the subset to be uniquely "tagged" with a label identifying the subset to which it belongs. Although the labeling may be quite arbitrary, temperature is just such a labeling process which uses the real number system for tagging. The zeroth law justifies the use of suitable thermodynamic systems as thermometers to provide such a labeling, yielding any number of possible empirical temperature scales, and justifies the use of the second law of thermodynamics to provide an absolute, or thermodynamic, temperature scale. Such temperature scales bring additional continuity and ordering (i.e., "hot" and "cold") properties to the concept of temperature. In the space of thermodynamic parameters, zones of constant temperature form a surface that provides a natural ordering of nearby surfaces. One may therefore construct a global temperature function that provides a continuous ordering of states.
The dimensionality of a surface of constant temperature is one less than the number of thermodynamic parameters, thus, for an ideal gas described with three thermodynamic parameters P, V and N, it is a two-dimensional surface. For example, if two systems of ideal gases are in joint thermodynamic equilibrium across an immovable diathermal wall, then ⁠P1V1/N1⁠ = ⁠P2V2/N2⁠ where Pi is the pressure in the ith system, Vi is the volume, and Ni is the amount (in moles, or simply the number of atoms) of gas. The surface ⁠PV/N⁠ = constant defines surfaces of equal thermodynamic temperature, and one may label defining T so that ⁠PV/N⁠ = RT, where R is some constant. These systems can now be used as a thermometer to calibrate other systems. Such systems are known as "ideal gas thermometers". In a sense, focused on the zeroth law, there is only one kind of diathermal wall or one kind of heat, as expressed by Maxwell's dictum that "All heat is of the same kind". But in another sense, heat is transferred in different ranks, as expressed by Arnold Sommerfeld's dictum "Thermodynamics investigates the conditions that govern the transformation of heat into work. It teaches us to recognize temperature as the measure of the work-value of heat. Heat of higher temperature is richer, is capable of doing more work. Work may be regarded as heat of an infinitely high temperature, as unconditionally available heat." This is why temperature is the particular variable indicated by the zeroth law's statement of equivalence. == Dependence on the existence of walls permeable only to heat == In Constantin Carathéodory's (1909) theory, it is postulated that there exist walls "permeable only to heat", though heat is not explicitly defined in that paper. This postulate is a physical postulate of existence. It does not say that there is only one kind of heat. This paper of Carathéodory states as proviso 4 of its account of such walls: "Whenever each of the systems S1 and S2 is made to reach equilibrium with a third system S3 under identical conditions, systems S1 and S2 are in mutual equilibrium".: §6  It is the function of this statement in the paper, not there labeled as the zeroth law, to provide not only for the existence of transfer of energy other than by work or transfer of matter, but further to provide that such transfer is unique in the sense that there is only one kind of such wall, and one kind of such transfer. This is signaled in the postulate of this paper of Carathéodory that precisely one non-deformation variable is needed to complete the specification of a thermodynamic state, beyond the necessary deformation variables, which are not restricted in number. It is therefore not exactly clear what Carathéodory means when in the introduction of this paper he writes It is possible to develop the whole theory without assuming the existence of heat, that is of a quantity that is of a different nature from the normal mechanical quantities. It is the opinion of Elliott H. Lieb and Jakob Yngvason (1999) that the derivation from statistical mechanics of the law of entropy increase is a goal that has so far eluded the deepest thinkers.: 5  Thus the idea remains open to consideration that the existence of heat and temperature are needed as coherent primitive concepts for thermodynamics, as expressed, for example, by Maxwell and Max Planck. 
On the other hand, Planck (1926) clarified how the second law can be stated without reference to heat or temperature, by referring to the irreversible and universal nature of friction in natural thermodynamic processes. == History == Writing long before the term "zeroth law" was coined, in 1871 Maxwell discussed at some length ideas which he summarized by the words "All heat is of the same kind". Modern theorists sometimes express this idea by postulating the existence of a unique one-dimensional hotness manifold, into which every proper temperature scale has a monotonic mapping. This may be expressed by the statement that there is only one kind of temperature, regardless of the variety of scales in which it is expressed. Another modern expression of this idea is that "All diathermal walls are equivalent".: 23  This might also be expressed by saying that there is precisely one kind of non-mechanical, non-matter-transferring contact equilibrium between thermodynamic systems. According to Sommerfeld, Ralph H. Fowler coined the term zeroth law of thermodynamics while discussing the 1935 text by Meghnad Saha and B.N. Srivastava. They write on page 1 that "every physical quantity must be measurable in numerical terms". They presume that temperature is a physical quantity and then deduce the statement "If a body A is in temperature equilibrium with two bodies B and C, then B and C themselves are in temperature equilibrium with each other". Then they italicize a self-standing paragraph, as if to state their basic postulate: Any of the physical properties of A which change with the application of heat may be observed and utilised for the measurement of temperature. They do not themselves here use the phrase "zeroth law of thermodynamics". There are very many statements of these same physical ideas in the physics literature long before this text, in very similar language. What was new here was just the label zeroth law of thermodynamics. Fowler & Guggenheim (1936/1965) wrote of the zeroth law as follows: ... we introduce the postulate: If two assemblies are each in thermal equilibrium with a third assembly, they are in thermal equilibrium with each other. They then proposed that ... it may be shown to follow that the condition for thermal equilibrium between several assemblies is the equality of a certain single-valued function of the thermodynamic states of the assemblies, which may be called the temperature t, any one of the assemblies being used as a "thermometer" reading the temperature t on a suitable scale. This postulate of the "Existence of temperature" could with advantage be known as the zeroth law of thermodynamics. The first sentence of this present article is a version of this statement. It is not explicitly evident in the existence statement of Fowler and Edward A. Guggenheim that temperature refers to a unique attribute of a state of a system, such as is expressed in the idea of the hotness manifold. Also their statement refers explicitly to statistical mechanical assemblies, not explicitly to macroscopic thermodynamically defined systems. == References == == Further reading == Atkins, Peter (2007). Four Laws That Drive the Universe. New York: Oxford University Press. ISBN 978-0-19-923236-9.
Wikipedia/Zeroth_law_of_thermodynamics
A thermodynamic system is a body of matter and/or radiation separate from its surroundings that can be studied using the laws of thermodynamics. According to their internal processes, thermodynamic systems are classified as passive, in which there is only a redistribution of available energy, or active, in which one type of energy is converted into another. Depending on its interaction with the environment, a thermodynamic system may be an isolated system, a closed system, or an open system. An isolated system does not exchange matter or energy with its surroundings. A closed system may exchange heat, experience forces, and exert forces, but does not exchange matter. An open system can interact with its surroundings by exchanging both matter and energy. The physical condition of a thermodynamic system at a given time is described by its state, which can be specified by the values of a set of thermodynamic state variables. A thermodynamic system is in thermodynamic equilibrium when there are no macroscopically apparent flows of matter or energy within it or between it and other systems. == Overview == Thermodynamic equilibrium is characterized not only by the absence of any flow of mass or energy, but by “the absence of any tendency toward change on a macroscopic scale.” Equilibrium thermodynamics, as a subject in physics, considers macroscopic bodies of matter and energy in states of internal thermodynamic equilibrium. It uses the concept of thermodynamic processes, by which bodies pass from one equilibrium state to another by transfer of matter and energy between them. The term 'thermodynamic system' is used to refer to bodies of matter and energy in the special context of thermodynamics. The possible equilibria between bodies are determined by the physical properties of the walls that separate the bodies. Equilibrium thermodynamics in general does not measure time. Equilibrium thermodynamics is a relatively simple and well settled subject. One reason for this is the existence of a well defined physical quantity called 'the entropy of a body'. Non-equilibrium thermodynamics, as a subject in physics, considers bodies of matter and energy that are not in states of internal thermodynamic equilibrium, but are usually participating in processes of transfer that are slow enough to allow description in terms of quantities that are closely related to thermodynamic state variables. It is characterized by the presence of flows of matter and energy. For this topic, very often the bodies considered have smooth spatial inhomogeneities, so that spatial gradients, for example a temperature gradient, are well enough defined. Thus the description of non-equilibrium thermodynamic systems is a field theory, more complicated than the theory of equilibrium thermodynamics. Non-equilibrium thermodynamics is a growing subject, not an established edifice. Example theories and modeling approaches include the GENERIC formalism for complex fluids, viscoelasticity, and soft materials. In general, it is not possible to find an exactly defined entropy for non-equilibrium problems. For many non-equilibrium thermodynamical problems, an approximately defined quantity called 'time rate of entropy production' is very useful. Non-equilibrium thermodynamics is mostly beyond the scope of the present article. Another kind of thermodynamic system is considered in most engineering. It takes part in a flow process.
The account is in terms that approximate, well enough in practice in many cases, equilibrium thermodynamical concepts. This is mostly beyond the scope of the present article, and is set out in other articles, for example the article Flow process. == History == The classification of thermodynamic systems arose with the development of thermodynamics as a science. Theoretical studies of thermodynamic processes in the period from the first theory of heat engines (Sadi Carnot, France, 1824) to the theory of dissipative structures (Ilya Prigogine, Belgium, 1971) mainly concerned the patterns of interaction of thermodynamic systems with the environment. At the same time, thermodynamic systems were mainly classified as isolated, closed and open, with corresponding properties in various thermodynamic states, for example, in states close to equilibrium, nonequilibrium and strongly nonequilibrium. In 2010, Boris Dobroborsky (Israel, Russia) proposed a classification of thermodynamic systems according to whether their internal processes redistribute energy (passive systems) or convert one type of energy into another (active systems). == Passive systems == If there is a temperature difference inside the thermodynamic system, for example in a rod whose one end is warmer than the other, then thermal energy transfer processes occur in it, in which the temperature of the colder part rises and that of the warmer part falls. As a result, after some time, the temperature in the rod will equalize – the rod will come to a state of thermodynamic equilibrium. == Active systems == If the process of converting one type of energy into another takes place inside a thermodynamic system, for example in chemical reactions, in electric or pneumatic motors, or when one solid body rubs against another, then processes of energy release or absorption will occur, and the thermodynamic system will always tend to a non-equilibrium state with respect to the environment. == Systems in equilibrium == In isolated systems it is consistently observed that as time goes on internal rearrangements diminish and stable conditions are approached. Pressures and temperatures tend to equalize, and matter arranges itself into one or a few relatively homogeneous phases. A system in which all processes of change have gone practically to completion is considered in a state of thermodynamic equilibrium. The thermodynamic properties of a system in equilibrium are unchanging in time. Equilibrium system states are much easier to describe in a deterministic manner than non-equilibrium states. In some cases, when analyzing a thermodynamic process, one can assume that each intermediate state in the process is at equilibrium. Such a process is called quasistatic. For a process to be reversible, each step in the process must be reversible. For a step in a process to be reversible, the system must be in equilibrium throughout the step. That ideal cannot be accomplished in practice because no step can be taken without perturbing the system from equilibrium, but the ideal can be approached by making changes slowly. The very existence of thermodynamic equilibrium, defining states of thermodynamic systems, is the essential, characteristic, and most fundamental postulate of thermodynamics, though it is only rarely cited as a numbered law. According to Bailyn, the commonly rehearsed statement of the zeroth law of thermodynamics is a consequence of this fundamental postulate.
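The equalization described for passive systems above (a rod warmer at one end relaxing toward a uniform temperature) can be illustrated with a minimal one-dimensional heat-conduction sketch; the grid size, number of steps, and diffusion number are arbitrary illustrative values.

def relax_rod(temps, alpha=0.2, steps=500):
    # Explicit finite-difference update for 1-D heat conduction with insulated ends.
    # alpha is the dimensionless diffusion number (kept <= 0.5 for numerical stability).
    t = list(temps)
    for _ in range(steps):
        new = t[:]
        for i in range(1, len(t) - 1):
            new[i] = t[i] + alpha * (t[i - 1] - 2 * t[i] + t[i + 1])
        new[0], new[-1] = new[1], new[-2]   # no-flux (insulated) boundaries
        t = new
    return t

# A rod initially at 80 degrees on the left half and 20 degrees on the right half.
rod = [80.0] * 10 + [20.0] * 10
print([round(x) for x in relax_rod(rod)])   # after many steps the values cluster near 50

Because no energy crosses the insulated ends, the rod approaches the uniform temperature set by its initial energy content, which is the equilibrium state described in this section.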
In reality, practically nothing in nature is in strict thermodynamic equilibrium, but the postulate of thermodynamic equilibrium often provides very useful idealizations or approximations, both theoretically and experimentally; experiments can provide scenarios of practical thermodynamic equilibrium. In equilibrium thermodynamics the state variables do not include fluxes because in a state of thermodynamic equilibrium all fluxes have zero values by definition. Equilibrium thermodynamic processes may involve fluxes but these must have ceased by the time a thermodynamic process or operation is complete bringing a system to its eventual thermodynamic state. Non-equilibrium thermodynamics allows its state variables to include non-zero fluxes, which describe transfers of mass or energy or entropy between a system and its surroundings. == Walls == A system is enclosed by walls that bound it and connect it to its surroundings. Often a wall restricts passage across it by some form of matter or energy, making the connection indirect. Sometimes a wall is no more than an imaginary two-dimensional closed surface through which the connection to the surroundings is direct. A wall can be fixed (e.g. a constant volume reactor) or moveable (e.g. a piston). For example, in a reciprocating engine, a fixed wall means the piston is locked at its position; then, a constant volume process may occur. In that same engine, a piston may be unlocked and allowed to move in and out. Ideally, a wall may be declared adiabatic, diathermal, impermeable, permeable, or semi-permeable. Actual physical materials that provide walls with such idealized properties are not always readily available. The system is delimited by walls or boundaries, either actual or notional, across which conserved (such as matter and energy) or unconserved (such as entropy) quantities can pass into and out of the system. The space outside the thermodynamic system is known as the surroundings, a reservoir, or the environment. The properties of the walls determine what transfers can occur. A wall that allows transfer of a quantity is said to be permeable to it, and a thermodynamic system is classified by the permeabilities of its several walls. A transfer between system and surroundings can arise by contact, such as conduction of heat, or by long-range forces such as an electric field in the surroundings. A system with walls that prevent all transfers is said to be isolated. This is an idealized conception, because in practice some transfer is always possible, for example by gravitational forces. It is an axiom of thermodynamics that an isolated system eventually reaches internal thermodynamic equilibrium, when its state no longer changes with time. The walls of a closed system allow transfer of energy as heat and as work, but not of matter, between it and its surroundings. The walls of an open system allow transfer both of matter and of energy. This scheme of definition of terms is not uniformly used, though it is convenient for some purposes. In particular, some writers use 'closed system' where 'isolated system' is here used. Anything that passes across the boundary and effects a change in the contents of the system must be accounted for in an appropriate balance equation. The volume can be the region surrounding a single atom resonating energy, such as Max Planck defined in 1900; it can be a body of steam or air in a steam engine, such as Sadi Carnot defined in 1824. It could also be just one nuclide (i.e. 
a system of quarks) as hypothesized in quantum thermodynamics. == Surroundings == The system is the part of the universe being studied, while the surroundings are the remainder of the universe that lies outside the boundaries of the system. It is also known as the environment or the reservoir. Depending on the type of system, it may interact with the system by exchanging mass, energy (including heat and work), momentum, electric charge, or other conserved properties. The environment is ignored in the analysis of the system, except in regards to these interactions. == Closed system == In a closed system, no mass may be transferred in or out of the system boundaries. The system always contains the same amount of matter, but (sensible) heat and (boundary) work can be exchanged across the boundary of the system. Whether a system can exchange heat, work, or both depends on the properties of its boundary. Adiabatic boundary – not allowing any heat exchange: a thermally isolated system. Rigid boundary – not allowing exchange of work: a mechanically isolated system. One example is fluid being compressed by a piston in a cylinder. Another example of a closed system is a bomb calorimeter, a type of constant-volume calorimeter used in measuring the heat of combustion of a particular reaction. Electrical energy travels across the boundary to produce a spark between the electrodes and initiate combustion. Heat transfer occurs across the boundary after combustion but no mass transfer takes place either way. The first law of thermodynamics for energy transfers for a closed system may be stated: Δ U = Q − W {\displaystyle \Delta U=Q-W} where U {\displaystyle U} denotes the internal energy of the system, Q {\displaystyle Q} the heat added to the system, and W {\displaystyle W} the work done by the system. For infinitesimal changes the first law for closed systems may be stated: d U = δ Q − δ W . {\displaystyle \mathrm {d} U=\delta Q-\delta W.} If the work is due to a volume expansion by d V {\displaystyle \mathrm {d} V} at a pressure P {\displaystyle P} then: δ W = P d V . {\displaystyle \delta W=P\mathrm {d} V.} For a quasi-reversible heat transfer, the second law of thermodynamics reads: δ Q = T d S {\displaystyle \delta Q=T\mathrm {d} S} where T {\displaystyle T} denotes the thermodynamic temperature and S {\displaystyle S} the entropy of the system. With these relations the fundamental thermodynamic relation, used to compute changes in internal energy, is expressed as: d U = T d S − P d V . {\displaystyle \mathrm {d} U=T\mathrm {d} S-P\mathrm {d} V.} For a simple system, with only one type of particle (atom or molecule), a closed system amounts to a constant number of particles. For systems undergoing a chemical reaction, there may be all sorts of molecules being generated and destroyed by the reaction process. In this case, the fact that the system is closed is expressed by stating that the total number of each elemental atom is conserved, no matter what kind of molecule it may be a part of. Mathematically: ∑ j = 1 m a i j N j = b i 0 {\displaystyle \sum _{j=1}^{m}a_{ij}N_{j}=b_{i}^{0}} where N j {\displaystyle N_{j}} denotes the number of j {\displaystyle j} -type molecules, a i j {\displaystyle a_{ij}} the number of atoms of element i {\displaystyle i} in molecule j {\displaystyle j} , and b i 0 {\displaystyle b_{i}^{0}} the total number of atoms of element i {\displaystyle i} in the system, which remains constant, since the system is closed. There is one such equation for each element in the system.
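The closure constraint just stated, Σ_j a_ij N_j = b_i^0, can be checked numerically for a concrete reacting mixture. The sketch below is a minimal illustration for methane combustion in a closed vessel; the species list and mole numbers are made-up example values, not data taken from this article.

# Columns: species CH4, O2, CO2, H2O; rows: elements C, H, O.
# a[i][j] = number of atoms of element i in one molecule of species j.
a = [
    [1, 0, 1, 0],   # carbon
    [4, 0, 0, 2],   # hydrogen
    [0, 2, 2, 1],   # oxygen
]

def element_totals(n):
    # b_i = sum_j a_ij * N_j: total atoms of each element for mole numbers n.
    return [sum(row[j] * n[j] for j in range(len(n))) for row in a]

before = [1.0, 2.0, 0.0, 0.0]   # 1 CH4 + 2 O2, unreacted
after = [0.0, 0.0, 1.0, 2.0]    # fully reacted: 1 CO2 + 2 H2O
print(element_totals(before))   # [1.0, 4.0, 4.0]
print(element_totals(after))    # [1.0, 4.0, 4.0] -> each b_i unchanged, as required for a closed system

The mole numbers of individual species change freely during the reaction; only the element totals b_i are fixed by the closed boundary.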
== Isolated system == An isolated system is more restrictive than a closed system as it does not interact with its surroundings in any way. Mass and energy remain constant within the system, and no energy or mass transfer takes place across the boundary. As time passes in an isolated system, internal differences in the system tend to even out and pressures and temperatures tend to equalize, as do density differences. A system in which all equalizing processes have gone practically to completion is in a state of thermodynamic equilibrium. Truly isolated physical systems do not exist in reality (except perhaps for the universe as a whole), because, for example, there is always gravity between a system with mass and masses elsewhere. However, real systems may behave nearly as an isolated system for finite (possibly very long) times. The concept of an isolated system can serve as a useful model approximating many real-world situations. It is an acceptable idealization used in constructing mathematical models of certain natural phenomena. In the attempt to justify the postulate of entropy increase in the second law of thermodynamics, Boltzmann's H-theorem used equations which assumed that a system (for example, a gas) was isolated. That is, all the mechanical degrees of freedom could be specified, treating the walls simply as mirror boundary conditions. This inevitably led to Loschmidt's paradox. However, if the stochastic behavior of the molecules in actual walls is considered, along with the randomizing effect of the ambient, background thermal radiation, Boltzmann's assumption of molecular chaos can be justified. The second law of thermodynamics for isolated systems states that the entropy of an isolated system not in equilibrium tends to increase over time, approaching a maximum value at equilibrium. Overall, in an isolated system, the internal energy is constant and the entropy can never decrease. A closed system's entropy can decrease, e.g. when heat is extracted from the system. Isolated systems are not equivalent to closed systems. Closed systems cannot exchange matter with the surroundings, but can exchange energy. Isolated systems can exchange neither matter nor energy with their surroundings, and as such are only theoretical and do not exist in reality (except, possibly, the entire universe). 'Closed system' is often used in thermodynamics discussions when 'isolated system' would be correct – i.e. there is an assumption that energy does not enter or leave the system. == Selective transfer of matter == For a thermodynamic process, the precise physical properties of the walls and surroundings of the system are important, because they determine the possible processes. An open system has one or several walls that allow transfer of matter. To account for the internal energy of the open system, this requires energy transfer terms in addition to those for heat and work. It also leads to the idea of the chemical potential. A wall selectively permeable only to a pure substance can put the system in diffusive contact with a reservoir of that pure substance in the surroundings. Then a process is possible in which that pure substance is transferred between system and surroundings. Also, across that wall a contact equilibrium with respect to that substance is possible. By suitable thermodynamic operations, the pure substance reservoir can be dealt with as a closed system. Its internal energy and its entropy can be determined as functions of its temperature, pressure, and mole number.
A thermodynamic operation can render impermeable to matter all system walls other than the contact equilibrium wall for that substance. This allows the definition of an intensive state variable, with respect to a reference state of the surroundings, for that substance. The intensive variable is called the chemical potential; for component substance i it is usually denoted μi. The corresponding extensive variable can be the number of moles Ni of the component substance in the system. For a contact equilibrium across a wall permeable to a substance, the chemical potentials of the substance must be the same on either side of the wall. This is part of the nature of thermodynamic equilibrium, and may be regarded as related to the zeroth law of thermodynamics. == Open system == In an open system, there is an exchange of energy and matter between the system and the surroundings. The presence of reactants in an open beaker is an example of an open system. Here the boundary is an imaginary surface enclosing the beaker and reactants. A system is called closed if its boundaries are impenetrable to substances but allow transit of energy in the form of heat, and isolated if there is no exchange of either heat or substances. An open system cannot exist in the equilibrium state. To describe deviation of the thermodynamic system from equilibrium, in addition to the constitutive variables described above, a set of internal variables ξ 1 , ξ 2 , … {\displaystyle \xi _{1},\xi _{2},\ldots } has been introduced. The equilibrium state is considered to be stable, and the main property of the internal variables, as measures of non-equilibrium of the system, is their tendency to disappear; the local law of disappearance can be written as a relaxation equation for each internal variable, with τ i = τ i ( T , x 1 , x 2 , … , x n ) {\displaystyle \tau _{i}=\tau _{i}(T,x_{1},x_{2},\ldots ,x_{n})} the relaxation time of the corresponding variable. It is convenient to consider the initial value ξ i 0 {\displaystyle \xi _{i}^{0}} equal to zero. A specific contribution to the thermodynamics of open non-equilibrium systems was made by Ilya Prigogine, who investigated a system of chemically reacting substances. In this case the internal variables are measures of the incompleteness of chemical reactions, that is, measures of how far the considered system with chemical reactions is out of equilibrium. The theory can be generalized so that any deviation from the equilibrium state, such as the structure of the system, gradients of temperature, differences in concentrations of substances and so on, as well as the degrees of completeness of all chemical reactions, is treated as an internal variable. The increments of Gibbs free energy G {\displaystyle G} and entropy S {\displaystyle S} at T = const {\displaystyle T={\text{const}}} and p = const {\displaystyle p={\text{const}}} can each be written as a sum of contributions from heat exchange, from the relaxation of the internal variables, and from the exchange of matter. The stationary states of the system exist due to exchange of both thermal energy ( Δ Q α {\displaystyle \Delta Q_{\alpha }} ) and a stream of particles.
The sum of the last terms in the equations represents the total energy coming into the system with the stream of particles of substances Δ N α {\displaystyle \Delta N_{\alpha }} that can be positive or negative; the quantity μ α {\displaystyle \mu _{\alpha }} is the chemical potential of substance α {\displaystyle \alpha } . The middle terms in equations (2) and (3) depict energy dissipation (entropy production) due to the relaxation of internal variables ξ j {\displaystyle \xi _{j}} , while Ξ j {\displaystyle \Xi _{j}} are thermodynamic forces. This approach to open systems allows the growth and development of living organisms to be described in thermodynamic terms. == See also == Dynamical system Energy system Isolated system Mechanical system Physical system Quantum system Thermodynamic cycle Thermodynamic process Two-state quantum system GENERIC formalism == References == == Sources == Abbott, M. M.; Van Ness, H. C. (1989). Thermodynamics with Chemical Applications (2nd ed.). McGraw Hill. Bailyn, M. (1994). A Survey of Thermodynamics. New York: American Institute of Physics Press. ISBN 0-88318-797-3. Callen, H. B. (1985) [1960]. Thermodynamics and an Introduction to Thermostatistics (2nd ed.). New York: Wiley. ISBN 0-471-86256-8. Carnot, Sadi (1824). Réflexions sur la puissance motrice du feu et sur les machines propres à développer cette puissance (in French). Paris: Bachelier. Haase, R. (1971). "Survey of Fundamental Laws". In Eyring, H.; Henderson, D.; Jost, W. (eds.). Thermodynamics. Physical Chemistry: An Advanced Treatise. Vol. 1. New York: Academic Press. pp. 1–97. LCCN 73-117081. Dobroborsky, B. S. (2011). Machine Safety and the Human Factor. Edited by S. A. Volkov. St. Petersburg: SPbGASU. pp. 33–35. ISBN 978-5-9227-0276-8 (in Russian). Halliday, David; Resnick, Robert; Walker, Jearl (2008). Fundamentals of Physics (8th ed.). Wiley. Moran, Michael J.; Shapiro, Howard N. (2008). Fundamentals of Engineering Thermodynamics (6th ed.). Wiley. Rex, Andrew; Finn, C. B. P. (2017). Finn's Thermal Physics (3rd ed.). Taylor & Francis. ISBN 978-1-498-71887-5. Tisza, László (1966). Generalized Thermodynamics. MIT Press. Tschoegl, N. W. (2000). Fundamentals of Equilibrium and Steady-State Thermodynamics. Amsterdam: Elsevier. ISBN 0-444-50426-5.
Wikipedia/System_(thermodynamics)
The history of thermodynamics is a fundamental strand in the history of physics, the history of chemistry, and the history of science in general. Due to the relevance of thermodynamics in much of science and technology, its history is finely woven with the developments of classical mechanics, quantum mechanics, magnetism, and chemical kinetics, to more distant applied fields such as meteorology, information theory, and biology (physiology), and to technological developments such as the steam engine, internal combustion engine, cryogenics and electricity generation. The development of thermodynamics both drove and was driven by atomic theory. It also, albeit in a subtle manner, motivated new directions in probability and statistics; see, for example, the timeline of thermodynamics. == Antiquity == The ancients viewed heat as something related to fire. In 3000 BC, the ancient Egyptians viewed heat as related to origin mythologies. Ancient Indian philosophy, including Vedic philosophy, held that five classical elements (or pancha mahā bhūta) are the basis of all cosmic creations. In the Western philosophical tradition, after much debate about the primal element among earlier pre-Socratic philosophers, Empedocles proposed a four-element theory, in which all substances derive from earth, water, air, and fire. The Empedoclean element of fire is perhaps the principal ancestor of later concepts such as phlogiston and caloric. Around 500 BC, the Greek philosopher Heraclitus became famous as the "flux and fire" philosopher for his proverbial utterance: "All things are flowing." Heraclitus argued that the three principal elements in nature were fire, earth, and water. === Vacuum-abhorrence === The 5th century BC Greek philosopher Parmenides, in his only known work, a poem conventionally titled On Nature, uses verbal reasoning to postulate that a void, essentially what is now known as a vacuum, could not occur in nature. This view was supported by the arguments of Aristotle, but was criticized by Leucippus and Hero of Alexandria. From antiquity to the Middle Ages various arguments were put forward to prove or disprove the existence of a vacuum, and several attempts were made to construct a vacuum, but all proved unsuccessful. === Atomism === Atomism is a central part of today's relationship between thermodynamics and statistical mechanics. Ancient thinkers such as Leucippus and Democritus, and later the Epicureans, by advancing atomism, laid the foundations for the later atomic theory. Until experimental proof of atoms was provided in the 20th century, the atomic theory was driven largely by philosophical considerations and scientific intuition. == 17th century == === Early thermometers === The European scientists Cornelius Drebbel, Robert Fludd, Galileo Galilei and Santorio Santorio in the 16th and 17th centuries were able to gauge the relative "coldness" or "hotness" of air, using a rudimentary air thermometer (or thermoscope). This may have been influenced by an earlier device, constructed by Philo of Byzantium and Hero of Alexandria, which could expand and contract the air. === "Heat is motion" (Francis Bacon) === The idea that heat is a form of motion is perhaps an ancient one and is certainly discussed by the English philosopher and scientist Francis Bacon in 1620 in his Novum Organum. Bacon surmised: "Heat itself, its essence and quiddity is motion and nothing else," "not ... of the whole, but of the small particles of the body." 
=== René Descartes === ==== Precursor to work ==== In 1637, in a letter to the Dutch scholar Constantijn Huygens, the French philosopher René Descartes wrote: Lifting 100 lb one foot twice over is the same as lifting 200 lb one foot, or 100 lb two feet. In 1686, the German philosopher Gottfried Leibniz wrote essentially the same thing: The same force ["work" in modern terms] is necessary to raise body A of 1 pound (libra) to a height of 4 yards (ulnae), as is necessary to raise body B of 4 pounds to a height of 1 yard. ==== Quantity of motion ==== In Principles of Philosophy (Principia Philosophiae) from 1644, Descartes defined "quantity of motion" (Latin: quantitas motus) as the product of size and speed, and claimed that the total quantity of motion in the universe is conserved. If x is twice the size of y, and is moving half as fast, then there's the same amount of motion in each. [God] created matter, along with its motion ... merely by letting things run their course, he preserves the same amount of motion ... as he put there in the beginning. He claimed that merely by letting things run their course, God preserves the same amount of motion as He created, and that thus the total quantity of motion in the universe is conserved. === Boyle's law === Irish physicist and chemist Robert Boyle in 1656, in coordination with English scientist Robert Hooke, built an air pump. Using this pump, Boyle and Hooke noticed the pressure-volume correlation: PV = constant. At that time, air was assumed to be a system of motionless particles, and not interpreted as a system of moving molecules. The concept of thermal motion came two centuries later. Therefore, Boyle's publication in 1660 speaks about a mechanical concept: the air spring. Later, after the invention of the thermometer, the property temperature could be quantified. This tool gave Gay-Lussac the opportunity to derive his law, which led shortly afterwards to the ideal gas law. ==== Gas laws in brief ==== Boyle's law (1662) Charles's law was first published by Joseph Louis Gay-Lussac in 1802, but he referenced unpublished work by Jacques Charles from around 1787. The relationship had been anticipated by the work of Guillaume Amontons in 1702. Gay-Lussac's law (1802) === Steam digester === Denis Papin, an associate of Boyle's, built in 1679 a bone digester, which is a closed vessel with a tightly fitting lid that confines steam until a high pressure is generated. Later designs implemented a steam release valve to keep the machine from exploding. By watching the valve rhythmically move up and down, Papin conceived of the idea of a piston and cylinder engine. He did not, however, follow through with his design. Nevertheless, based on Papin's designs, engineer Thomas Savery built the first "fire engine" in 1698; Thomas Newcomen greatly improved upon it in 1712 by incorporating a piston. This made the engine suitable for mechanical work in addition to pumping to heights beyond 30 feet, and it is thus often considered the first true steam engine. === Heat transfer (Halley and Newton) === The phenomenon of heat conduction is immediately grasped in everyday life. The fact that warm air rises, and the importance of the phenomenon to meteorology, were first realised by Edmond Halley in 1686. In 1701, Sir Isaac Newton published his law of cooling. == 18th century == === Phlogiston theory === The theory of phlogiston arose in the 17th century, late in the period of alchemy. 
Its replacement by caloric theory in the 18th century is one of the historical markers of the transition from alchemy to chemistry. Phlogiston was a hypothetical substance that was presumed to be liberated from combustible substances during burning, and from metals during the process of rusting. === Limit to the "degree of cold" === In 1702, Guillaume Amontons introduced the concept of absolute zero based on observations of gases. === Kinetic theory (18th century) === An early scientific reflection on the microscopic and kinetic nature of matter and heat is found in a work by Mikhail Lomonosov, in which he wrote: "Movement should not be denied based on the fact it is not seen. ... leaves of trees move when rustled by a wind, despite it being unobservable from large distances. Just as in this case motion ... remains hidden in warm bodies due to the extremely small sizes of the moving particles." During the same years, Daniel Bernoulli published his book Hydrodynamics (1738), in which he derived an equation for the pressure of a gas considering the collisions of its atoms with the walls of a container. He proved that this pressure is two thirds the average kinetic energy of the gas in a unit volume. Bernoulli's ideas, however, made little impact on the dominant caloric culture. Bernoulli made a connection with Gottfried Leibniz's vis viva principle, an early formulation of the principle of conservation of energy, and the two theories became intimately entwined throughout their history. === Thermochemistry and steam engines === ==== Heat capacity ==== Bodies were thought to be capable of holding a certain amount of the caloric fluid (see Caloric theory below), leading to the term heat capacity, named and first investigated by Scottish chemist Joseph Black in the 1750s. In the mid- to late 19th century, heat became understood as a manifestation of a system's internal energy. Today heat is seen as the transfer of disordered thermal energy. Nevertheless, at least in English, the term heat capacity survives. In some other languages, the term thermal capacity is preferred, and it is also sometimes used in English. ==== Steam engines ==== Prior to 1698 and the invention of the Savery engine, horses were used to power pulleys, attached to buckets, which lifted water out of flooded salt mines in England. In the years to follow, more variations of steam engines were built, such as the Newcomen engine, and later the Watt engine. In time, these early engines would eventually be utilized in place of horses. Thus, each engine began to be associated with a certain amount of "horse power" depending upon how many horses it had replaced. The main problem with these first engines was that they were slow and clumsy, converting less than 2% of the input fuel into useful work. In other words, large quantities of coal (or wood) had to be burned to yield only a small fraction of work output. Hence the need for a new science of engine dynamics was born. ==== Caloric theory ==== In the mid- to late 18th century, heat was thought to be a measurement of an invisible fluid, known as the caloric. Like phlogiston, caloric was presumed to be the "substance" of heat that would flow from a hotter body to a cooler body, thus warming it. The utility and explanatory power of kinetic theory, however, soon started to displace the caloric theory. Nevertheless, William Thomson, for example, was still trying to explain James Joule's observations within a caloric framework as late as 1850. The caloric theory was largely obsolete by the end of the 19th century. 
==== Calorimetry ==== Joseph Black and Antoine Lavoisier made important contributions in the precise measurement of heat changes using the calorimeter, a subject which became known as thermochemistry. The development of the steam engine focused attention on calorimetry and the amount of heat produced from different types of coal. The first quantitative research on the heat changes during chemical reactions was initiated by Lavoisier using an ice calorimeter following research by Joseph Black on the latent heat of water. === Thermal conduction and thermal radiation === Carl Wilhelm Scheele distinguished heat transfer by thermal radiation (radiant heat) from that by convection and conduction in 1777. In the 17th century, it came to be believed that all materials had an identical conductivity and that differences in sensation arose from their different heat capacities. Suggestions that this might not be the case came from the new science of electricity, in which it was easily apparent that some materials were good electrical conductors while others were effective insulators. Jan Ingen-Housz in 1785–89 made some of the earliest measurements, as did Benjamin Thompson during the same period. In 1791, Pierre Prévost showed that all bodies radiate heat, no matter how hot or cold they are. In 1804, Sir John Leslie observed that a matte black surface radiates heat more effectively than a polished surface, suggesting the importance of black-body radiation. === Heat and friction (Rumford) === In the 19th century, scientists abandoned the idea of a physical caloric. The first substantial experimental challenges to the caloric theory arose in a 1798 work by Benjamin Thompson (Count Rumford), in which he showed that boring cast iron cannons produced great amounts of heat, which he ascribed to friction. His work was among the first to undermine the caloric theory. As a result of his experiments in 1798, Thompson suggested that heat was a form of motion, though no attempt was made to reconcile theoretical and experimental approaches, and it is unlikely that he was thinking of the vis viva principle. == Early 19th century == === Modern thermodynamics (Carnot) === Although early steam engines were crude and inefficient, they attracted the attention of the leading scientists of the time. One such scientist was Sadi Carnot, the "father of thermodynamics", who in 1824 published Reflections on the Motive Power of Fire, a discourse on heat, power, and engine efficiency. Most cite this book as the starting point for thermodynamics as a modern science. (The name "thermodynamics", however, did not arrive until 1854, when the British mathematician and physicist William Thomson (Lord Kelvin) coined the term thermo-dynamics in his paper On the Dynamical Theory of Heat.) Carnot defined "motive power" to be the expression of the useful effect that a motor is capable of producing. Herein, Carnot introduced us to the first modern day definition of "work": weight lifted through a height. The desire to understand, via formulation, this useful effect in relation to "work" is at the core of all modern day thermodynamics. Even though he was working with the caloric theory, Carnot in 1824 suggested that some of the caloric available for generating useful work is lost in any real process. 
=== Reflection, refraction, and polarisation of radiant heat === Though it had come to be suspected from Scheele's work, in 1831 Macedonio Melloni demonstrated that radiant heat could be reflected, refracted and polarised in the same way as light. === Kinetic theory (early 19th century) === John Herapath independently formulated a kinetic theory in 1820, but mistakenly associated temperature with momentum rather than vis viva or kinetic energy. His work ultimately failed peer review, even from someone as well-disposed to the kinetic principle as Humphry Davy, and was neglected. John James Waterston in 1843 provided a largely accurate account, again independently, but his work received the same reception, failing peer review. Further progress in kinetic theory started only in the middle of the 19th century, with the works of Rudolf Clausius, James Clerk Maxwell, and Ludwig Boltzmann. === Mechanical equivalent of heat === Quantitative studies by Joule from 1843 onwards provided soundly reproducible phenomena, and helped to place the subject of thermodynamics on a solid footing. In 1843, Joule experimentally found the mechanical equivalent of heat. In 1845, Joule reported his best-known experiment, involving the use of a falling weight to spin a paddle-wheel in a barrel of water, which allowed him to estimate a mechanical equivalent of heat of 819 ft·lbf/Btu (4.41 J/cal). This led to the theory of conservation of energy and explained why heat can do work. === Absolute zero and the Kelvin scale === The idea of absolute zero was generalised in 1848 by Lord Kelvin. == Late 19th century == === Entropy and the second law of thermodynamics === ==== Lord Kelvin ==== In March 1851, while grappling to come to terms with the work of Joule, Lord Kelvin started to speculate that there was an inevitable loss of useful heat in all processes. The idea was framed even more dramatically by Hermann von Helmholtz in 1854, giving birth to the spectre of the heat death of the universe. ==== William Rankine ==== In 1854, William John Macquorn Rankine started to make use of what he called thermodynamic function in calculations. This has subsequently been shown to be identical to the concept of entropy formulated by the famed mathematical physicist Rudolf Clausius. ==== Rudolf Clausius ==== In 1865, Clausius coined the term "entropy" (das Wärmegewicht, symbolized S) to denote heat lost or turned into waste. ("Wärmegewicht" translates literally as "heat-weight"; the corresponding English term stems from the Greek τρέπω, "I turn".) Clausius used the concept to develop his classic statement of the second law of thermodynamics the same year. === Statistical thermodynamics === ==== Temperature is average kinetic energy of molecules ==== In his 1857 work On the nature of the motion called heat, Clausius for the first time clearly states that heat is the average kinetic energy of molecules. ==== Maxwell–Boltzmann distribution ==== Clausius' above statement interested the Scottish mathematician and physicist James Clerk Maxwell, who in 1859 derived the momentum distribution later named after him. The Austrian physicist Ludwig Boltzmann subsequently generalized this distribution for the case of gases in external fields. 
In association with Clausius, in 1871, Maxwell formulated a new branch of thermodynamics called statistical thermodynamics, which functions to analyze large numbers of particles at equilibrium, i.e., systems where no changes are occurring, such that only their average properties such as temperature T, pressure P, and volume V become important. ==== Degrees of freedom ==== Boltzmann is perhaps the most significant contributor to kinetic theory, as he introduced many of the fundamental concepts in the theory. Besides the Maxwell–Boltzmann distribution mentioned above, he also associated the kinetic energy of particles with their degrees of freedom. The Boltzmann equation for the distribution function of a gas in non-equilibrium states is still the most effective equation for studying transport phenomena in gases and metals. By introducing the concept of thermodynamic probability as the number of microstates corresponding to the current macrostate, he showed that its logarithm is proportional to entropy. ==== Definition of entropy ==== In 1875, the Austrian physicist Ludwig Boltzmann formulated a precise connection between entropy S and molecular motion: S = k log ⁡ W {\displaystyle S=k\log W\,} being defined in terms of the number of possible states W that such motion could occupy, where k is the Boltzmann constant. ==== Gibbs free energy ==== In 1876, chemical engineer Willard Gibbs published an obscure 300-page paper titled: On the Equilibrium of Heterogeneous Substances, wherein he formulated one grand equality, the Gibbs free energy equation, which suggested a measure of the amount of "useful work" attainable in reacting systems. === Enthalpy === Gibbs also originated the concept we now know as enthalpy H, calling it "a heat function for constant pressure". The modern word enthalpy would be coined many years later by Heike Kamerlingh Onnes, who based it on the Greek word enthalpein meaning to warm. === Stefan–Boltzmann law === James Clerk Maxwell's 1862 insight that both light and radiant heat were forms of electromagnetic wave led to the start of the quantitative analysis of thermal radiation. In 1879, Jožef Stefan observed that the total radiant flux from a blackbody is proportional to the fourth power of its temperature and stated the Stefan–Boltzmann law. The law was derived theoretically by Ludwig Boltzmann in 1884. == 20th century == === Quantum thermodynamics === In 1900, Max Planck found an accurate formula for the spectrum of black-body radiation. Fitting new data required the introduction of a new constant, known as the Planck constant, the fundamental constant of modern physics. Looking at the radiation as coming from a cavity oscillator in thermal equilibrium, the formula suggested that energy in a cavity occurs only in multiples of frequency times the constant. That is, it is quantized. This avoided a divergence to which the theory would lead without the quantization. === Third law of thermodynamics === In 1906, Walther Nernst stated the third law of thermodynamics. === Erwin Schrödinger === Building on the foundations above, Lars Onsager, Erwin Schrödinger, Ilya Prigogine and others brought these engine "concepts" into the thoroughfare of almost every modern-day branch of science. == Branches of thermodynamics == The following list is a rough disciplinary outline of the major branches of thermodynamics and their time of inception: Thermochemistry – 1780s Classical thermodynamics – 1824 Chemical thermodynamics – 1876 Statistical mechanics – c. 
1880s Equilibrium thermodynamics Engineering thermodynamics Chemical engineering thermodynamics – c. 1940s Non-equilibrium thermodynamics – 1941 Small systems thermodynamics – 1960s Biological thermodynamics – 1957 Ecosystem thermodynamics – 1959 Relativistic thermodynamics – 1965 Rational thermodynamics – 1960s Quantum thermodynamics – 1968 Black hole thermodynamics – c. 1970s Theory of critical phenomena and use of renormalization group theory in statistical physics – 1966-1974 Geological thermodynamics – c. 1970s Biological evolution thermodynamics – 1978 Geochemical thermodynamics – c. 1980s Atmospheric thermodynamics – c. 1980s Natural systems thermodynamics – 1990s Supramolecular thermodynamics – 1990s Earthquake thermodynamics – 2000 Drug-receptor thermodynamics – 2001 Pharmaceutical systems thermodynamics – 2002 Concepts of thermodynamics have also been applied in other fields, for example: Thermoeconomics – c. 1970s == See also == History of chemistry Timeline of heat engine technology Timeline of low-temperature technology Timeline of thermodynamics == References == == Further reading == Cardwell, D.S.L. (1971). From Watt to Clausius: The Rise of Thermodynamics in the Early Industrial Age. London: Heinemann. ISBN 978-0-435-54150-7. Leff, H.S.; Rex, A.F., eds. (1990). Maxwell's Demon: Entropy, Information and Computing. Bristol: Adam Hilger. ISBN 978-0-7503-0057-5. == External links == History of Thermodynamics – University of Waterloo Thermodynamic History Notes – WolframScience.com Brief History of Thermodynamics – Berkeley [PDF]
Wikipedia/Theory_of_heat
Dilution is the process of decreasing the concentration of a solute in a solution, usually simply by mixing it with more solvent, such as adding more water to the solution. To dilute a solution means to add more solvent without the addition of more solute. The resulting solution is thoroughly mixed so as to ensure that all parts of the solution are identical. The same direct relationship applies, for example, to gases and vapors diluted in air, although thorough mixing of gases and vapors may not be as easily accomplished. For example, if there are 10 grams of salt (the solute) dissolved in 1 litre of water (the solvent), this solution has a certain salt concentration (molarity). If one adds 1 litre of water to this solution, the salt concentration is reduced. The diluted solution still contains 10 grams of salt (0.171 moles of NaCl). Mathematically this relationship can be expressed by the equation: c 1 V 1 = c 2 V 2 {\displaystyle c_{1}V_{1}=c_{2}V_{2}} where c1 = initial concentration or molarity V1 = initial volume c2 = final concentration or molarity V2 = final volume == Basic room purge equation == The basic room purge equation is used in industrial hygiene. It determines the time required to reduce a known vapor concentration existing in a closed space to a lower vapor concentration. The equation can only be applied when the purged volume of vapor or gas is replaced with "clean" air or gas. For example, the equation can be used to calculate the time required at a certain ventilation rate to reduce a high carbon monoxide concentration in a room. D t = [ V Q ] ⋅ ln ⁡ [ C initial C ending ] {\displaystyle D_{t}=\left[{\frac {V}{Q}}\right]\cdot \ln \left[{\frac {C_{\text{initial}}}{C_{\text{ending}}}}\right]} Sometimes the equation is also written as: ln ⁡ [ C ending C initial ] = − Q V ⋅ ( t ending − t initial ) {\displaystyle \ln \left[{\frac {C_{\text{ending}}}{C_{\text{initial}}}}\right]\quad ={-}{\frac {Q}{V}}\cdot (t_{\text{ending}}-t_{\text{initial}})} where t initial = 0 {\displaystyle t_{\text{initial}}=0} Dt = time required; the unit of time used is the same as is used for Q V = air or gas volume of the closed space or room in cubic feet, cubic metres or litres Q = ventilation rate into or out of the room in cubic feet per minute, cubic metres per hour or litres per second Cinitial = initial concentration of a vapor inside the room measured in ppm Cending = final reduced concentration of the vapor inside the room in ppm == Dilution ventilation equation == The basic room purge equation can be used only for purge scenarios. In a scenario where a liquid continuously evaporates from a container in a ventilated room, a differential equation has to be used: d C d t = G − Q ′ C V {\displaystyle {\frac {dC}{dt}}={\frac {G-Q'C}{V}}} where the ventilation rate has been adjusted by a mixing factor K: Q ′ = Q K {\displaystyle Q'={\frac {Q}{K}}} C = concentration of a gas G = generation rate V = room volume Q′ = adjusted ventilation rate of the volume == Dilution in welding == The dilution in welding terms is defined as the weight of the base metal melted divided by the total weight of the weld metal. For example, if we have a dilution of 0.40, the fraction of the weld metal that came from the consumable electrode is 0.60. == See also == Displacement ventilation Reaction rate Partial molar quantities Apparent molar property Excess molar quantity Heat of dilution == References == == External links == http://pubs.acs.org/doi/abs/10.1021/ja01320a004 Easy dilution calculator
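The relations above lend themselves to direct computation. The following Python sketch (with illustrative numbers only; the function names and values are not part of the article) applies the dilution relation c1V1 = c2V2 and the basic room purge equation:

```python
import math

def dilute(c1, v1, v2):
    """Final concentration c2 after diluting from volume v1 to total volume v2 (c1*v1 = c2*v2)."""
    return c1 * v1 / v2

def purge_time(volume, vent_rate, c_initial, c_ending):
    """Time to reduce a vapor concentration from c_initial to c_ending:
    Dt = (V / Q) * ln(c_initial / c_ending); the time unit follows Q."""
    return (volume / vent_rate) * math.log(c_initial / c_ending)

# Salt example from the text: 10 g in 1 L, then 1 L of water added.
print(dilute(10.0, 1.0, 2.0))  # 5.0 g/L, half the original concentration

# Hypothetical purge: 100 m^3 room, 20 m^3/h of clean air, 500 ppm down to 50 ppm.
print(round(purge_time(100.0, 20.0, 500.0, 50.0), 2))  # about 11.51 hours
```

For a dilution ventilation scenario, the ventilation rate would first be divided by the mixing factor K, as in the adjusted rate Q′ = Q/K given above, before being used in such a calculation.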
Wikipedia/Dilution_(equation)
Common thermodynamic equations and quantities in thermodynamics, using mathematical notation, are as follows: == Definitions == Many of the definitions below are also used in the thermodynamics of chemical reactions. === General basic quantities === === General derived quantities === === Thermal properties of matter === === Thermal transfer === == Equations == The equations in this article are classified by subject. === Thermodynamic processes === === Kinetic theory === ==== Ideal gas ==== === Entropy === S = k B ln Ω {\displaystyle S=k_{\mathrm {B} }\ln \Omega } , where kB is the Boltzmann constant, and Ω denotes the volume of the macrostate in phase space, otherwise called the thermodynamic probability. d S = δ Q T {\displaystyle dS={\frac {\delta Q}{T}}} , for reversible processes only === Statistical physics === Below are useful results from the Maxwell–Boltzmann distribution for an ideal gas, and the implications of the entropy quantity. The distribution is valid for atoms or molecules constituting ideal gases. Corollaries of the non-relativistic Maxwell–Boltzmann distribution are below. === Quasi-static and reversible processes === For quasi-static and reversible processes, the first law of thermodynamics is: d U = δ Q − δ W {\displaystyle dU=\delta Q-\delta W} where δQ is the heat supplied to the system and δW is the work done by the system. === Thermodynamic potentials === The following energies are called the thermodynamic potentials, and the corresponding fundamental thermodynamic relations or "master equations" are: === Maxwell's relations === The four most common Maxwell's relations are: More relations include the following. Other differential equations are: === Quantum properties === U = N k B T 2 ( ∂ ln Z ∂ T ) V {\displaystyle U=Nk_{\text{B}}T^{2}\left({\frac {\partial \ln Z}{\partial T}}\right)_{V}} S = U T + N k B ln Z − N k B ln N + N k B {\displaystyle S={\frac {U}{T}}+Nk_{\text{B}}\ln Z-Nk_{\text{B}}\ln N+Nk_{\text{B}}} (indistinguishable particles), where N is the number of particles, h is the Planck constant, I is the moment of inertia, and Z is the partition function, in various forms: == Thermal properties of matter == === Thermal transfer === === Thermal efficiencies === == See also == == References == == External links == Thermodynamic equation calculator
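As a numerical check of the quantum-statistical relation U = N kB T² (∂ ln Z/∂T)V listed above, the following Python sketch (using an assumed two-level system with assumed values for the energy gap and particle number, none of which come from the article) compares a finite-difference derivative of ln Z with the closed-form internal energy:

```python
import math

K_B = 1.380649e-23   # Boltzmann constant, J/K
EPS = 1.0e-21        # assumed energy gap of a two-level system, J
N = 1.0e20           # assumed number of particles

def ln_Z(T):
    """Log partition function of a two-level system with levels 0 and EPS."""
    return math.log(1.0 + math.exp(-EPS / (K_B * T)))

def U_numeric(T, dT=1e-3):
    """U = N k_B T^2 (d ln Z / dT) evaluated with a central finite difference."""
    dlnZ_dT = (ln_Z(T + dT) - ln_Z(T - dT)) / (2.0 * dT)
    return N * K_B * T**2 * dlnZ_dT

def U_exact(T):
    """Closed form for the same system: U = N * EPS / (exp(EPS / (k_B T)) + 1)."""
    return N * EPS / (math.exp(EPS / (K_B * T)) + 1.0)

T = 300.0  # K
print(U_numeric(T), U_exact(T))  # the two values agree to several significant figures
```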
Wikipedia/Table_of_thermodynamic_equations
In thermodynamics, a quantity that is well defined so as to describe the path of a process through the equilibrium state space of a thermodynamic system is termed a process function, or, alternatively, a process quantity, or a path function. As an example, mechanical work and heat are process functions because they describe quantitatively the transition between equilibrium states of a thermodynamic system. Path functions depend on the path taken to reach one state from another. Different routes give different quantities. Examples of path functions include work, heat and arc length. In contrast to path functions, state functions are independent of the path taken. Thermodynamic state variables are point functions, differing from path functions. For a given state, considered as a point, there is a definite value for each state variable and state function. Infinitesimal changes in a process function X are often indicated by δX to distinguish them from infinitesimal changes in a state function Y, which is written dY. The quantity dY is an exact differential, while δX is not; it is an inexact differential. Infinitesimal changes in a process function may be integrated, but the integral between two states depends on the particular path taken between the two states, whereas the integral of a state function is simply the difference of the state functions at the two points, independent of the path taken. In general, a process function X may be either holonomic or non-holonomic. For a holonomic process function, an auxiliary state function (or integrating factor) λ may be defined such that Y = λX is a state function. For a non-holonomic process function, no such function may be defined. In other words, for a holonomic process function, λ may be defined such that dY = λδX is an exact differential. For example, thermodynamic work is a holonomic process function since the integrating factor λ = 1/p (where p is pressure) will yield the exact differential of the volume state function dV = δW/p. The second law of thermodynamics as stated by Carathéodory essentially amounts to the statement that heat is a holonomic process function since the integrating factor λ = 1/T (where T is temperature) will yield the exact differential of an entropy state function dS = δQ/T. == References == == See also == Thermodynamics
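The distinction between the path function δW and the state function V recovered through the integrating factor 1/p can be illustrated numerically. The Python sketch below (an illustration with assumed ideal-gas values, not part of the article) integrates along two different paths between the same ideal-gas states: the work integrals differ, while the integrals of δW/p, i.e. of dV, agree:

```python
n_R_T = 2494.2                  # assumed n*R*T in joules (about 1 mol near 300 K)
p1 = 100000.0                   # initial pressure, Pa
V1 = n_R_T / p1                 # initial volume from the ideal gas law
V2 = 2.0 * V1                   # final volume; on the isotherm the final pressure is p1 / 2

def frange(a, b, steps):
    """Evenly spaced values from a to b inclusive."""
    return [a + (b - a) * i / steps for i in range(steps + 1)]

def path_isothermal():
    """Path A: isothermal expansion, p(V) = nRT / V."""
    return [(V, n_R_T / V) for V in frange(V1, V2, 10000)]

def path_isobaric():
    """Path B: isobaric expansion at p1 (the final isochoric pressure drop does no work)."""
    return [(V, p1) for V in frange(V1, V2, 10000)]

def integrate(path, f):
    """Trapezoidal integral of f(V, p) dV along a list of (V, p) points."""
    total = 0.0
    for (Va, pa), (Vb, pb) in zip(path, path[1:]):
        total += 0.5 * (f(Va, pa) + f(Vb, pb)) * (Vb - Va)
    return total

work = lambda V, p: p      # integrand of delta-W = p dV
dV = lambda V, p: 1.0      # integrand of delta-W / p = dV

for name, path in [("isothermal", path_isothermal()), ("isobaric then isochoric", path_isobaric())]:
    print(name, round(integrate(path, work), 1), round(integrate(path, dV), 6))
# The work differs (about 1728.9 J versus 2494.2 J), but the dV integral is V2 - V1 on both paths.
```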
Wikipedia/Process_function
Leadership in Energy and Environmental Design (LEED) is a green building certification program used worldwide. Developed by the non-profit U.S. Green Building Council (USGBC), it includes a set of rating systems for the design, construction, operation, and maintenance of green buildings, homes, and neighborhoods, which aims to help building owners and operators be environmentally responsible and use resources efficiently. As of 2024 there were over 195,000 LEED-certified buildings and over 205,000 LEED-accredited professionals in 186 countries worldwide. In the US, the District of Columbia consistently leads in LEED-certified square footage per capita, followed in 2022 by the top-ranking states of Massachusetts, Illinois, New York, California, and Maryland. Outside the United States, the top-ranking countries for 2022 were Mainland China, India, Canada, Brazil, and Sweden. LEED Canada has developed a separate rating system adapted to the Canadian climate and regulations. Many U.S. federal agencies, state and local governments require or reward LEED certification. As of 2022, based on certified square feet per capita, the leading five states (after the District of Columbia) were Massachusetts, Illinois, New York, California, and Maryland. Incentives can include tax credits, zoning allowances, reduced fees, and expedited permitting. Offices, healthcare-, and education-related buildings are the most frequent LEED-certified buildings in the US (over 60%), followed by warehouses, distribution centers, retail projects and multifamily dwellings (another 20%). Studies have found that for-rent LEED office spaces generally have higher rents and occupancy rates and lower capitalization rates. LEED is a design tool rather than a performance-measurement tool and has tended to focus on energy modeling rather than actual energy consumption. It has been criticized for a point system that can lead to inappropriate design choices and the prioritization of LEED certification points over actual energy conservation; for lacking climate specificity; for not sufficiently addressing issues of climate change and extreme weather; and for not incorporating principles of a circular economy. Draft versions of LEED v5 were released for public comment in 2024, and the final version of LEED v5 is expected to appear in 2025. It may address some of the previous criticisms. Despite concerns, LEED has been described as a "transformative force in the design and construction industry". LEED is credited with providing a framework for green building, expanding the use of green practices and products in buildings, encouraging sustainable forestry, and helping professionals to consider buildings in terms of the well-being of their occupants and as part of larger systems. == History == In April 1993, the U.S. Green Building Council (USGBC) was founded by Rick Fedrizzi, the head of environmental marketing at Carrier, real estate developer David Gottfried, and environmental lawyer Michael Italiano. Representatives from 60 firms and nonprofits met at the American Institute of Architects to discuss organizing within the building industry to support green building and develop a green building rating system. Also influential early on was architect Bob Berkebile. Fedrizzi served as the volunteer founding chair of USGBC from 1993 to 2004, and became its CEO as of 2004. As of November 4, 2016, he was succeeded as president and CEO of USGBC by Mahesh Ramanujam. Ramanujam served as CEO until 2021. 
Peter Templeton became interim president and CEO of USGBC as of November 1, 2021. A key player in developing the Leadership in Energy and Environmental Design (LEED) green certification program was Natural Resources Defense Council (NRDC) senior scientist Robert K. Watson. It was Watson, sometimes referred to as the "Founding Father of LEED", who created the acronym. Over two decades, Watson led a broad-based consensus process, bringing together non-profit organizations, government agencies, architects, engineers, developers, builders, product manufacturers and other industry leaders. The original planning group consisted of Watson, Mike Italiano, architect Bill Reed (founding LEED Technical Committee co-chair 1994–2003), architect Sandy Mendler, builder Gerard Heiber and engineer Richard Bourne. Tom Paladino and Lynne Barker (formerly King) co-chaired the LEED Pilot Committee from 1996–2001. Scot Horst chaired the LEED Steering Committee beginning in 2005 and was deeply involved in the development of LEED 2009. Joel Ann Todd took over as chair of the steering committee from 2009 to 2013, working to develop LEED v4, and introducing social equity credits. Other steering committee chairs include Chris Schaffner (2019) and Jennifer Sanguinetti (2020). Chairs of the USGBC's Energy and Atmosphere Technical Advisory Group for LEED technology have included Gregory Kats. The LEED initiative has been strongly supported by the USGBC Board of Directors, including Chair of the Board of Directors Steven Winter (1999–2003). The current chair of the Board of Directors is Anyeley Hallová (2023). LEED has grown from one standard for new construction to a comprehensive system of interrelated standards covering aspects from the design and construction to the maintenance and operation of buildings. LEED has also grown from six committee volunteers to an organization of 122,626 volunteers, professionals and staff. As of 2023, more than 185,000 LEED projects representing over 28 billion square feet (2.6×10^9 m2) have been proposed worldwide, and more than 105,000 projects representing over 12 billion square feet (1.1×10^9 m2) have been certified in 185 countries. However, lumber, chemical and plastics trade groups have lobbied to weaken the application of LEED guidelines in several southern states. In 2013, the states of Alabama, Georgia and Mississippi effectively banned the use of LEED in new public buildings, in favor of other industry standards that the USGBC considers too lax. LEED is considered a target of a type of disinformation attack known as astroturfing, involving "fake grassroots organizations usually sponsored by large corporations". Unlike model building codes, such as the International Building Code, only members of the USGBC and specific "in-house" committees may add to, subtract from, or edit the standard, subject to an internal review process. Proposals to modify the LEED standards are offered and publicly reviewed by USGBC's member organizations, of which there were 4551 as of October 2023. == Rating systems == LEED has evolved since 1998 to more accurately represent and incorporate emerging green building technologies. LEED has developed building programs specific to new construction (NC), core and shell (CS), commercial interiors (CI), existing buildings (EB), neighborhood development (ND), homes (LEED for Homes), retail, schools, and healthcare. The pilot version, LEED New Construction (NC) v1.0, led to LEED NCv2.0, LEED NCv2.2 in 2005, LEED 2009 (a.k.a. 
LEED v3) in 2009, and LEED v4 in November 2013. LEED 2009 was retired for new projects registered from October 31, 2016. LEED v4.1 was released on April 2, 2019. Draft versions of LEED v5 have been released and revised in response to public comment during 2024. The official final version of LEED v5 is expected to be released in 2025. Future updates to the standard are planned to occur every five years. LEED forms the basis for other sustainability rating systems such as the U.S. Environmental Protection Agency's (EPA) Labs21 and LEED Canada. The Australian Green Star is based on both LEED and the UK's Building Research Establishment Environmental Assessment Methodology (BREEAM). === LEED v3 (2009) === LEED 2009 encompasses ten rating systems for the design, construction and operation of buildings, homes and neighborhoods. Five overarching categories correspond to the specialties available under the LEED professional program. That suite consists of: Green building design and construction (BD+C) – for new construction, core and shell, schools, retail spaces (new constructions and major renovations), and healthcare facilities Green interior design and construction – for commercial and retail interiors Green building operations and maintenance Green neighborhood development Green home design and construction LEED v3 aligned credits across all LEED rating systems, weighted by environmental priority. It reflects a continuous development process, with a revised third-party certification program and online resources. Under LEED 2009, an evaluated project scores points to a possible maximum of 100 across six categories: sustainable sites (SS), water efficiency (WE), energy and atmosphere (EA), materials and resources (MR), indoor environmental quality (IEQ) and design innovation (INNO). Each of these categories also includes mandatory requirements, which receive no points. Up to 10 additional points may be earned: 4 for regional priority credits and 6 for innovation in design. Additional performance categories for residences (LEED for Homes) recognize the importance of transportation access, open space, and outdoor physical activity, and the need for buildings and settlements to educate occupants. Buildings can qualify for four levels of certification: Certified: 40–49 points Silver: 50–59 points Gold: 60–79 points Platinum: 80 points and above The aim of LEED 2009 is to allocate points "based on the potential environmental impacts and human benefits of each credit". These are weighted using the environmental impact categories of the EPA's Tools for the Reduction and Assessment of Chemical and Other Environmental Impacts (TRACI) and the environmental-impact weighting scheme developed by the National Institute of Standards and Technology (NIST). Prior to LEED 2009 evaluation and certification, a building must comply with minimum requirements including environmental laws and regulations, occupancy scenarios, building permanence and pre-rating completion, site boundaries and area-to-site ratios. Its owner must share data on the building's energy and water use for five years after occupancy (for new construction) or date of certification (for existing buildings). The credit weighting process has the following steps: First, a collection of reference buildings is assessed to estimate the environmental impacts of similar buildings. NIST weightings are then applied to judge the relative importance of these impacts in each category. 
Data regarding actual impacts on environmental and human health are then used to assign points to individual categories and measures. This system results in a weighted average for each rating scheme based upon actual impacts and the relative importance of those impacts to human health and environmental quality. The LEED council also appears to have assigned credit and measured weighting based upon the market implications of point allocation. From 2010, buildings can use carbon offsets to achieve green power credits for LEED-NC (new construction certification). === LEED v4 (2014) === For LEED BD+C v4 credit, the IEQ category addresses thermal, visual, and acoustic comfort as well as indoor air quality. Laboratory and field research have directly linked occupants' satisfaction and performance to the building's thermal conditions. Energy reduction goals can be supported while improving thermal satisfaction. For example, providing occupants control over the thermostat or operable windows allows for comfort across a wider range of temperatures. === LEED v4.1 (2019) === On April 2, 2019, the USGBC released LEED v4.1, a new version of the LEED green building program, designed for use with cities, communities and homes. However, LEED v4.1 was never officially balloted. An update to v4, proposed as of November 22, 2022, took effect on March 1, 2024. Any projects that register under LEED v4 after March 1, 2024 must meet these updated guidelines. === LEED v5 (Draft, 2023) === In January 2023, USGBC began to develop LEED v5. LEED v5 is the first version of the LEED rating system to be based on the June 2022 Future of LEED principles. The LEED v5 rating system will cover both new construction and existing buildings. An initial draft version was discussed at Greenbuild 2023. The beta draft of LEED v5 was released for an initial period of public comment on April 3, 2024. Changes were made in response to nearly 6,000 comments. A second public comment period was opened for the revised version, from September 27 to October 28, 2024. The official release of the final version of LEED v5 is expected to occur in 2025. Future updates of the certification system are planned to occur every five years. LEED v5 reorganizes the credits system and prerequisites, and has a greater focus on decarbonization of buildings. The scorecard expresses three global goals of climate action (worth 50% of the certification points), quality of life (25%) and conservation and ecological restoration (25%) in terms of five principles: decarbonization, ecosystems, equity, health and resilience. One of the responses to public comments was to emphasize a data-driven approach to Operations and Maintenance by more clearly identifying performance-based credits (80% of points) and decoupling them from strategic credits (20%). === LEED Canada === In 2003, the Canada Green Building Council (CAGBC) received permission to create LEED Canada-NC v1.0, which was based upon LEED-NC 2.0. As of 2021, Canada ranked second in the world (not including the USA) in its number of LEED-certified projects and square feet of space. Buildings in Canada such as Winnipeg's Canadian Museum for Human Rights are LEED certified due to practices including the use of rainwater harvesting, green roofs, and natural lighting. As of March 18, 2022, the Canada Green Building Council took over direct oversight for LEED™ green building certification of projects in Canada, formerly done by GBCI Canada. CAGBC will continue to work with Green Business Certification Inc. 
(GBCI) and USGBC while consolidating certification and credentialing for CAGBC's Zero Carbon Building Standards, LEED, TRUE, and Investor Ready Energy Efficiency (IREE). IREE is a model supported by CAGBC and the Canada Infrastructure Bank (CIB) for the verification of proposed retrofit projects. === Certification process === LEED certification is granted by the Green Building Certification Institute (GBCI), which arranges third-party verification of a project's compliance with the LEED requirements. The certification process for design teams consists of the design application, under the purview of the architect and the engineer and documented in the official construction drawings, and the construction application, under the purview of the building contractor and documented during the construction and commissioning of the building. A fee is required to register the building, and to submit the design and construction applications. Total fees are assessed based on building area, ranging from a minimum of $2,900 to over $1 million for a large project. "Soft" costs – i.e., added costs to the building project to qualify for LEED certification – may range from 1% to 6% of the total project cost. The average cost increase was about 2%, or an extra $3–$5 per square foot. The application review and certification process is conducted through LEED Online, USGBC's web-based service. The GBCI also utilizes LEED Online to conduct their reviews. === LEED energy modeling === Applicants have the option of achieving credit points by building energy models. One model represents the building as designed, and a second model represents a baseline building in the same location, with the same geometry and occupancy. Depending on location (climate) and building size, the standard provides requirements for heating, ventilation and air-conditioning (HVAC) system type, and wall and window definitions. This allows for a comparison with emphasis on factors that heavily influence energy consumption when considering design decisions. === LEED for Homes rating system === The LEED for Homes rating system was first piloted in 2005. It has been available in countries including the U.S., Canada, Sweden, and India. LEED for Homes projects are low-rise residential. The process of the LEED for Homes rating system differs significantly from the LEED rating system for new construction. Unlike LEED, LEED for Homes requires an on-site inspection. LEED for Homes projects are required to work with either an American or a Canadian provider organization and a green rater. The provider organization helps the project through the process while overseeing the green raters, individuals who conduct two mandatory site inspections: the thermal bypass inspection and the final inspection. The provider and rater assist in the certification process but do not themselves certify the project. === Professional accreditation === In addition to certifying projects pursuing LEED, USGBC's Green Business Certification Inc. (GBCI) offers various accreditations to people who demonstrate knowledge of the LEED rating system, including LEED Accredited Professional (LEED AP), LEED Green Associate, and LEED Fellow. The Green Building Certification Institute (GBCI) describes its LEED professional accreditation as "demonstrat[ing] current knowledge of green building technologies, best practices" and the LEED rating system, to assure the holder's competency as one of "the most qualified, educated, and influential green building professionals in the marketplace." 
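A minimal sketch (with hypothetical modeled costs, not an official USGBC calculation) of the comparison described in the energy-modeling paragraph above, in which the building as designed is compared against a baseline model of the same geometry and occupancy:

```python
def percent_improvement(baseline_cost, proposed_cost):
    """Percentage improvement of the proposed design relative to the baseline model."""
    return 100.0 * (baseline_cost - proposed_cost) / baseline_cost

# Hypothetical modeled annual energy costs (USD) for the same geometry and occupancy.
baseline = 120000.0   # baseline building defined by the applicable standard
proposed = 90000.0    # building as designed
print(round(percent_improvement(baseline, proposed), 1))  # 25.0 (% improvement)
```

How many credit points a given improvement earns depends on the rating-system version and is not reproduced here.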
== Criticism == Critics of LEED certification such as Auden Schendler and Randy Udall have pointed out that the process is slow, complicated, and expensive. In 2005, they published an article titled "LEED is Broken; Let's Fix It", in which they argued that the certification process "makes green building more difficult than it needs to be" and called for changes "to make LEED easier to use and more popular" to better accelerate the transition to green building. Schendler and Udall also identified a pattern which they call "LEED brain", in which participants may become focused on "point mongering" and pick and choose design elements that don't actually go well together or don't fit local conditions, to gain points. The public relations value of LEED certification begins to drive the development of buildings rather than focusing on design. They give the example of debating whether to add a reflective roof, which can counter "heat island" effects in urban areas, to a building high in the Rocky Mountains.: 230  A 2012 USA Today review of 7,100 LEED-certified commercial buildings found that designers tended to choose easier points such as using recycled materials, rather than more challenging ones that could increase the energy efficiency of a building. Critics such as David Owen and Jeff Speck also point out that LEED certification focuses on the building itself, and does not take into account factors such as the location in which the building stands, or how employee commutes may be affected by a relocation. In Green Metropolis (2009), Owen discusses an environmentally-friendly building in San Bruno, California, built by Gap Inc., which was located 16 miles (26 km) from the company's corporate headquarters in downtown San Francisco, and 15 miles (24 km) from Gap's corporate campus in Mission Bay. Although the company added shuttle buses between buildings, "no bus is as green as an elevator".: 232–33  Similarly, in Walkable City (2013), Jeff Speck describes the relocation of the Environmental Protection Agency's Region 7 Headquarters from downtown Kansas City, Missouri, to a LEED-certified building 20 miles (32 km) away in the suburb of Lenexa, Kansas. Kaid Benfield of the Natural Resources Defense Council estimated that the carbon emissions associated with the additional miles driven were almost three times higher than before, a change from 0.39 metric tons per person per month to 1.08 metric tons of carbon dioxide per person per month. Speck writes that "The carbon saved by the new building's LEED status, if any, will be a small fraction of the carbon wasted by its location". Both Speck and Owen make the point that a building-centric standard that doesn't consider location will inevitably undervalue the benefits of people living closer together in cities, compared to the costs of automobile-oriented suburban sprawl.: 221–35  == Assessment == LEED is a design tool and as such has focused on energy modeling, rather than being a performance-measurement tool that measures actual energy consumption. LEED uses modeling software to predict future energy use based on intended use. Buildings certified under LEED do not have to prove energy or water efficiency in practice to receive LEED certification points. This has led to criticism of LEED's ability to accurately determine the efficiency of buildings, and concerns about the accuracy of its predictive models. Research papers provide most of what is known about the performance and effectiveness of LEED models and buildings. 
Much of the available research predates 2014, and therefore applies to buildings that were designed under early versions of the LEED rating and certification systems, LEED v3 (2009) or earlier. Research papers have tended to address performance and effectiveness of LEED in two credit category areas: energy (EA) and indoor environment quality (IEQ). Many early analyses should be considered as at best preliminary. Studies should be repeated with longer data history and larger building samples, include newer LEED certified buildings, and clearly identify green-building rating schemes and certification levels of individual buildings. Buildings may also need to be grouped according to location, since local conditions and regulation may influence building design and confound assessment results. === Modelling assessment === In 2018, Pushkar examined LEED-NC 2009 (v3) Certified-level certified projects from countries in northern (Finland, Sweden) and southern (Turkey, Spain) regions of Europe to see how different types of credits are understood and applied. Pushkar found that credit achievements were similar within regions and countries for Indoor Environmental Quality (EQ), Materials and Resources (MR), Sustainable Sites (SS), and Water Efficiency (WE), but differed for Energy and Atmosphere (EA). Sustainable Sites (SS) and Water Efficiency (WE) were high achievement areas, scoring 80–100% and 70–75%; Indoor Environmental Quality was intermediate (40–60%); and Materials and Resources (MR) was low (20–40%). Energy and Atmosphere (EA) was intermediate (60–65%) in northern Europe, and low (40%) in southern Europe. These results examine the extent to which different credits have been chosen by modellers. === Energy performance research (EA) === Because LEED focuses on the design of the building and not on its actual energy consumption, it has been suggested that LEED buildings should be tracked to discover whether the potential energy savings from the design are being used in practice. In 2009, architectural scientist Guy Newsham (et al.) of the National Research Council of Canada (NRC) re-analyzed a dataset of 100 LEED certified (v3 or earlier version) buildings. The data included only "medium use" buildings, and did not include 21 laboratories, data centers and supermarkets which were expected to have higher energy activity. Researchers further attempted to match each building with a conventional building within the Commercial Building Energy Consumption Survey (CBECS) database according to building type and occupancy. On average, the LEED buildings consumed 18 to 39% less energy by floor area than the conventional buildings. However, 28 to 35% of LEED-certified buildings used more energy. The paper found no correlation between the number of energy points achieved or LEED certification level and measured building performance. In 2009 physicist John Scofield published an article in response to Newsham et al., analyzing the same database of LEED buildings and arriving at different conclusions. Scofield criticized the earlier analysis for focusing on energy per floor area instead of a total energy consumption. Scofield considered source energy (accounting for energy losses during generation and transmission) as well as site energy, and used area-weighted energy use intensities (EUIs) (energy per unit area per year), when comparing buildings to account for the fact that larger buildings tend to have larger EUIs. 
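A small sketch (invented numbers) of the difference between a simple average of building EUIs and the area-weighted EUI used in the analysis described above; weighting by floor area gives large buildings proportionally more influence on the aggregate figure:

```python
# Hypothetical buildings: (floor area in m^2, site EUI in kWh per m^2 per year)
buildings = [(2000, 180.0), (10000, 240.0), (50000, 310.0)]

simple_mean = sum(eui for _, eui in buildings) / len(buildings)
area_weighted = sum(area * eui for area, eui in buildings) / sum(area for area, _ in buildings)

print(round(simple_mean, 1))     # 243.3, every building counts equally
print(round(area_weighted, 1))   # 294.5, dominated by the largest building
```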
Scofield concluded that, collectively, the LEED-certified buildings showed no significant source energy consumption savings or greenhouse gas emission reductions when compared to non-LEED buildings, although they did consume 10–17% less site energy. Scofield notes the difficulties of building analysis, given both the lack of a randomly selected sample of LEED buildings, and the diversity of factors involved when selecting a comparison group of non-LEED buildings. In 2013 Scofield identified 21 LEED-certified New York City office buildings with publicly available energy performance data for 2011, out of 953 office buildings in New York City with such data. Results differed with certification level. LEED-Gold buildings were found to use 20% less source energy than conventional buildings. However, buildings at the Silver and Certified levels used 11 to 15% more source energy, on average, than conventional buildings. (Data was not available for Platinum-level buildings.) An analysis of 132 LEED buildings based on municipal energy benchmarking data from Chicago in 2015 showed that LEED-certified buildings used about 10% less energy on site than comparable conventional buildings. However, the study did not show differences in use of source energy. In 2014, architect Gwen Fuertes and engineer Stefano Schiavon developed the first study that analyzes plug loads using LEED-documented data from certified projects. The study compared plug load assumptions made by 92 energy modeling practitioners against ASHRAE and Title 24 requirements, and the evaluation of the plug load calculation methodology used by 660 LEED-CI and 429 LEED-NC certified projects. They found that energy modelers only considered the energy consumption of predictable plug loads, such as refrigerators, computers and monitors. Overall the results suggested a disconnection between assumptions in the models and the actual performance of buildings. Energy modeling might be a source of error during the LEED design phase. Engineers Christopher Stoppel and Fernanda Leite evaluated the predicted and actual energy consumption of two twin buildings using the energy model during the LEED design phase and the utility meter data after one year of occupancy. The study's results suggest that mechanical systems turnover and occupancy assumptions significantly differ from predicted to actual values. In a 2019 review, Amiri et al. suggest that judging energy efficiency based on source energy may not be appropriate where the availability of energy types depends on city council or government policies. If some types of source energy are not supported locally, there is no opportunity to choose the types of energy promoted by the LEED scoring system. Amiri emphasizes that many studies have weaknesses due to the lack of randomly selected samples of LEED buildings, and the difficulty of selecting comparison groups of non-LEED buildings. Amiri also notes that the standards for building design have changed significantly over time. For example, newer non-LEED buildings may routinely use features such as high-quality windows which were rarely used in older buildings. Comparisons of LEED and non-LEED buildings therefore need to consider age as well as size, use, occupant behavior, and location aspects such as climate zone. Zhang et al. (2019) examined renewable energy assessment methods and different assessment systems, and noted that LEED-US addresses management problems at the pre-occupancy phase. 
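The distinction between site and source energy that runs through these studies can be made concrete with a small conversion. The site-to-source multipliers below are placeholder values chosen only for illustration; actual factors depend on the local grid and fuel mix, which is part of Amiri's point about local availability of energy types.

# Illustrative sketch: converting a building's site energy to source energy.
# Source energy adds generation and transmission losses; the multipliers here
# are hypothetical placeholders, not official conversion factors.

SITE_TO_SOURCE = {
    "electricity": 3.0,  # assumed: roughly two units lost upstream per unit delivered
    "natural_gas": 1.1,  # assumed: small extraction and distribution losses
}

site_energy_kwh = {"electricity": 400_000, "natural_gas": 600_000}  # hypothetical building

source_energy_kwh = {
    fuel: kwh * SITE_TO_SOURCE[fuel] for fuel, kwh in site_energy_kwh.items()
}

print("Total site energy:  ", sum(site_energy_kwh.values()), "kWh")
print("Total source energy:", sum(source_energy_kwh.values()), "kWh")
# An all-electric building and a gas-heavy building with equal site energy can
# differ substantially in source energy, which is why the two metrics can disagree.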
Interest in Post‐occupancy evaluation (POE), the process of evaluating building performance after occupation, is increasing. This is due in part to concerns about differences between energy models in the design phase and actual use of buildings. POE research emphasizes the need to collect and analyze actual occupancy data from existing buildings, to better understand how people are using spaces and resources. Asensio and Delmas (2017) carefully matched and compared buildings that did and did not participate in LEED, Energy Star, and Better Buildings Challenge programs in Los Angeles, California. They examined data for monthly energy consumption between 2005–2012, for more than 175,000 commercial buildings. Buildings from all three programs displayed “high magnitude” energy savings, ranging from 18–19% for Better Buildings and Energy Star to 30% for LEED-rated buildings. The three programs saved 210 million kilowatt-hours, equal to 145 kilotons of CO2 equivalent emissions per year. === IEQ performance research (IEQ) === The Centers for Disease Control and Prevention (CDC) defines indoor environmental quality (IEQ) as "the quality of a building's environment in relation to the health and wellbeing of those who occupy space within it." The USGBC includes the following considerations for attaining IEQ credits: indoor air quality, the level of volatile organic compounds (VOC), lighting, thermal comfort, and daylighting and views. In consideration of a building's indoor environmental quality, published studies have also included factors such as: acoustics, building cleanliness and maintenance, colors and textures, workstation size, ceiling height, window access and shading, surface finishes, furniture adaptability and comfort. The most widely used method for post-occupancy evaluation (POE) in IEQ-related studies is occupant surveys. In 2013, architectural physicist Sergio Altamonte and Stefano Schiavon used occupant surveys from the Center for the Built Environment at Berkeley's database to study IEQ occupant satisfaction in 65 LEED buildings and 79 non-LEED buildings. They analyzed 15 IEQ-related factors including the ease of interaction, building cleanliness, the comfort of furnishing, the amount of light, building maintenance, colors and textures, workplace cleanliness, the amount of space, furniture adjustability, visual comfort, air quality, visual privacy, noise, temperature, and sound privacy. Occupants reported being slightly more satisfied in LEED buildings for the air quality and slightly more dissatisfied with the amount of light. Overall, occupants of both LEED and non-LEED buildings had equal satisfaction with the building overall and with the workspace. The authors noted that the data may not be representative of the entire building stock and a randomized approach was not used in the data assessment. Newsham et al (2013) carried out an evaluation using both occupant interviews and physical site measurements. Field studies and post-occupancy evaluations (POE) were performed in 12 "green" and 12 conventional buildings across Canada and the northern United States. Most but not all of the "green" buildings were LEED-certified. 2545 occupants completed a questionnaire. On-site, 974 randomly selected workstations were measured for thermal conditions, air quality, acoustics, lighting, workstation size, ceiling height, window access and shading, and surface finishes. 
Responses were positive in the areas of environmental satisfaction, satisfaction with thermal conditions, satisfaction with outside views, aesthetic appearance, reduced disturbance from HVAC noise, workplace image, night-time sleep quality, mood, physical symptoms, and reduced number of airborne particulates. The green buildings were rated more highly and, in the case of airborne particulates, exhibited superior performance to the conventional buildings. Schiavon and Altomonte (2014) found that occupants have equivalent satisfaction levels in LEED and non-LEED buildings when evaluated independently of the following factors: office type, spatial layout, distance from windows, building size, gender, age, type of work, time at workspace, and weekly working hours. LEED certified buildings may provide higher satisfaction in open spaces than in enclosed offices, in smaller buildings than in larger buildings, and to occupants having spent less than one year in their workspaces rather than to those who have used their workspace longer. This study suggests that the positive value of LEED certification as measured by occupant satisfaction may decrease with time. In 2015, environmental health scientist Joseph Allen (et al.) reviewed studies of indoor environmental quality and the potential health benefits of green-certified buildings. He concluded that green buildings provide better indoor environmental quality with direct benefits to the human health of occupants, compared to non-green buildings. Statistically significant measures from different studies included decreased symptoms of sick building syndrome, decreased sick days, decreased respiratory symptoms during the daytime and asthma symptoms at night, and lowered levels of PM2.5, NO2, and nicotine. However, Allen noted that the frequent use of subjective health performance indicators was a limitation of many of the studies reviewed. He proposed a framework to encourage the use of direct, objective, and leading “Health Performance Indicators” in building assessment. The daylight credit was updated in LEED v4 to include a simulation option for daylight analysis that uses spatial daylight autonomy (SDA) and annual sunlight exposure (ASE) metrics to evaluate daylight quality in LEED projects. SDA is a metric that measures the annual sufficiency of daylight levels in interior spaces, and ASE describes the potential for visual discomfort from direct sunlight and glare. These metrics are approved by the Illuminating Engineering Society of North America (IES) and codified in the LM-83-12 standard. For SDA, LEED recommends a minimum of 300 lux for at least 50% of total occupied hours of the year for at least 55% of the occupied floor area. The threshold recommended by LEED for ASE is that no more than 10% of regularly occupied floor area can be exposed to more than 1000 lux of direct sunlight for more than 250 hours per year. Additionally, LEED requires window shades to be closed when more than 2% of a space is subject to direct sunlight above 1000 lux. According to building scientist Christopher Reinhart, the direct sunlight requirement is a very stringent approach that can discourage good daylight design. Reinhart proposed applying the direct sunlight criterion only in spaces that require stringent control of sunlight (e.g. at desks, white boards, etc.). In 2024, Kent et al. compared the satisfaction of people in buildings that had received either WELL certification or LEED certification.
Ratings of buildings certified with WELL and LEED were matched on six dimensions: award level, years in building, time in workspace, type of workspace, proximity to a window, and floor height. Satisfaction with the overall building and one's workspace were high under both rating systems. However, satisfaction with LEED‑certified buildings (73% and 71%) tended to be lower than that for WELL‑certified buildings (94% and 87%). This may be because WELL is a human-centered standard for building design that focuses primarily on comfort, health, and well-being. In contrast, only 10% of the credits in LEED certification relate to indoor environmental quality (IEQ). Differences may also reflect age of buildings, which were not matched for in the design. === Water Efficiency (WE) === Water systems involve both water and energy as resources. Outside buildings, the acquisition, treatment, and transportation of water is involved. Inside building, onsite water treatment, heating, and wastewater treatment are issues. Data on the energy use of specific water and wastewater systems is becoming increasingly available. Energy use can sometimes be estimated from public sources. LEED v4 includes a number of credits related to Water Efficiency (WE). Points are awarded for Outdoor Water Use Reduction, Indoor Water Use Reduction and Building-level Water Metering based on predetermined percentage reductions in water or energy use. There has been criticism that the LEED rating system is not sensitive and does not vary enough with regard to local environmental conditions. For example, there are 16 climate zones in California, with unique weather and temperature patterns. The availability of electricity, water and other resources differs widely in different regions, making it important to consider interconnected systems and supply chain issues. Greer et al. (2019) reviewed renewable energy assessment methods and examined the effectiveness of LEED v4 buildings in California. They examined relationships between the climate mitigation points given for water efficiency (WE) and energy efficiency (EA) and used baseline energy and water budgets to calculate the avoided GHG emissions of buildings. Their calculations both demonstrate mitigation of expected climate change and also indicate high variability in environmental outcomes within the state. While LEED v4 introduced “Impact Categories” as system goals, Greer suggests that closer linkages are needed between design points and outcomes, and that issues like supply chains, infrastructure, and regionalized variability should be considered. They report that impacts like the mitigation of expected climate change pollution can be calculated, and while "LEED points do not equally reward equal impact mitigation", such differences could be reconciled to better align LEED credits and goals. === Innovation in design research (ID) === The rise in LEED certification also brought forth a new era of construction and building research and ideation. Architects and designers have begun stressing the importance of occupancy health over high efficiency within new construction and have been trying to engage in more conversations with health professionals. Along with this, they also create buildings to perform better and analyze performance data to upkeep the process. 
Another way LEED has affected research is that designers and architects focus on creating spaces that are modular and flexible to ensure a longer lifespan while simultaneously sourcing products that are resilient through consistent use. Innovation in LEED architecture is linked with new designs and high-quality construction. One example is use of nanoparticle technology for consolidation and conservation effects in cultural heritage buildings. This practice began with the use of calcium hydroxide nano-particles in porous structures to improve mechanical strength. Titanium, silica, and aluminum-based compounds may also be used. Material technology and construction techniques could be among first issues to consider in building design. For the facade of high-rise buildings, such as the Empire State Building, the surface area provides opportunities for design innovation. VOC released from construction materials into the air is another challenge to address. In Milan, a university-corporate partnership sought to produce semi-transparent solar panels to take the place of ordinary windows in glass-facade high-rise buildings. Similar concepts are under development elsewhere, with considerable market potential. The Manzara Adalar skyscraper project in Istanbul, designed by Zaha Hadid, saw considerable innovation through the use of communal rooms, outdoor spaces, and natural lighting as part of the Urban Transformation Project of the Kartal port region. === Sustainable Sites (SS) === === Remaining credit areas === Other credit areas include: Materials and Resources (MR), and Regional Priority (RP). === Financial considerations === When a LEED rating is pursued, the cost of initial design and construction may rise. There may be a lack of abundant availability of manufactured building components that meet LEED specifications. There are also added costs in USGBC correspondence, LEED design-aide consultants, and the hiring of the required Commissioning Authority, which are not in themselves necessary for an environmentally responsible project unless seeking LEED certification. Proponents argue that these higher initial costs can be mitigated by the savings incurred over time due to projected lower-than-industry-standard operational costs typical of a LEED certified building. This life cycle costing is a method for assessing the total cost of ownership, taking into account all costs of acquiring, owning and operating, and the eventual disposal of a building. Additional economic payback may come in the form of employee productivity gains incurred as a result of working in a healthier environment. Studies suggest that an initial up-front investment of 2% extra yields over ten times that initial investment over the life cycle of the building. LEED has been developed and continuously modified by workers in the green building industry, especially in the ten largest metro areas in the U.S.; however, LEED certified buildings have been slower to penetrate small and middle markets. From a financial perspective, studies from 2008 and 2009 found that LEED for-rent office spaces generally charged higher rent and had higher occupancy rates. Analysis of CoStar Group property data estimated the extra cost for the minimum benefit at 3%, with an additional 2.5% for silver-certified buildings. More recent studies have confirmed earlier findings that certified buildings achieve significantly higher rents, sale prices and occupancy rates as well as lower capitalization rates, potentially reflecting lower investment risk. 
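The life cycle costing argument can be sketched with a simple present-value comparison. All figures below (green premium, annual savings, discount rate, building life) are hypothetical and only illustrate the mechanics of the calculation; whether the savings reach the "over ten times" figure cited above depends entirely on the assumptions used.

# Illustrative sketch of life cycle costing: does an up-front "green premium"
# pay back through lower operating costs? All numbers are hypothetical.

def present_value(annual_amount, discount_rate, years):
    """Present value of a constant annual amount received over a number of years."""
    return sum(annual_amount / (1 + discount_rate) ** t for t in range(1, years + 1))

construction_cost = 10_000_000            # assumed baseline construction cost
green_premium = 0.02 * construction_cost  # assumed 2% extra first cost
annual_savings = 60_000                   # assumed energy/water/maintenance savings per year
discount_rate = 0.05
building_life_years = 40

pv_savings = present_value(annual_savings, discount_rate, building_life_years)

print(f"Green premium:            ${green_premium:,.0f}")
print(f"Present value of savings: ${pv_savings:,.0f}")
print(f"Net present value:        ${pv_savings - green_premium:,.0f}")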
== Incentive programs == Many federal, state, and local governments and school districts have adopted various types of LEED initiatives and incentives. LEED incentive programs can include tax credits, tax breaks, density zoning bonuses, reduced fees, priority or expedited permitting, free or reduced-cost technical assistance, grants and low-interest loans. In the United States, states that have provided incentives include California, New York, Delaware, Hawaii, Illinois, Maryland, Nevada, New Mexico, North Carolina, Pennsylvania, and Virginia. Cincinnati, Ohio, provides property tax abatements for newly constructed or rehabilitated commercial or residential properties that are LEED certified. Beginning in June 2013, USGBC has offered free LEED certification to the first project to achieve LEED certification in each country that does not yet have one. == Notable certifications == === Directories of LEED-certified projects === The USGBC and Canada Green Building Council maintain online directories of U.S. LEED-certified and LEED Canada-certified projects. In 2012 the USGBC launched the Green Building Information Gateway (GBIG) to connect green building efforts and projects worldwide. It provides searchable access to a database of activities, buildings, places and collections of green building-related information from many sources and programs, including LEED projects. A number of sites including the Canada Green Building Council (CaGBC) Project Database list resources relating to LEED buildings in Canada. === Platinum certification === The Philip Merrill Environmental Center in Annapolis, Maryland was the first building to receive a LEED-Platinum rating, version 1.0. At the time it was built in 2001, it was recognized as one of the "greenest" buildings constructed in the U.S. Sustainability issues ranging from energy use to material selection were given serious consideration throughout design and construction of this facility. The first LEED platinum-rated building outside the U.S. is the CII Sohrabji Godrej Green Business Centre (CII GBC) in Hyderabad, India, certified in 2003 under LEED version 2.0. The Coastal Maine Botanical Gardens Bosarge Family Education Center, completed in 2011, achieved LEED Platinum certification and became known as "Maine's greenest building". In October 2011 Apogee Stadium at the University of North Texas became the first newly built stadium in the country to achieve Platinum-level certification. In Pittsburgh, Sota Construction Services' corporate headquarters earned a LEED Platinum rating in 2012 with one of the highest scores by percentage of total points earned in any LEED category, making it one of the top ten greenest buildings in the world. It featured a super-efficient thermal envelope using cob walls, a geothermal well, radiant heat flooring, a roof-mounted solar panel array, and daylighting features. When it received LEED Platinum in 2012, Manitoba Hydro Place in downtown Winnipeg was the most energy-efficient office tower in North America and the only office tower in Canada with a Platinum rating. The office tower employs south-facing winter gardens to capture solar energy during the harsh Manitoba winters and uses glass extensively to maximize natural light. === Gold certification === Pittsburgh's 1,500,000-square-foot (140,000 m2) David L. Lawrence Convention Center was the first Gold LEED-certified convention center and largest "green" building in the world when it opened in 2003.
It earned Platinum certification in 2012, becoming the only convention center with certifications for both the original building and new construction. The Cashman Equipment building in Henderson, Nevada became the first construction equipment dealership to receive LEED gold certification in 2009. The headquarters of the Caterpillar brand, it is the largest LEED industrial complex in Nevada. Around 2010, the Empire State Building underwent a $550 million renovation, including $120 million towards energy efficiency and eco-friendliness. It received a gold LEED rating in 2011, and at the time was the tallest LEED-certified building in the United States. In July 2014, the San Francisco 49ers' Levi's Stadium became the first NFL venue to earn a LEED Gold certification. The Minnesota Vikings' U.S. Bank Stadium equaled this feat with a Gold certification in Building Design and Construction in 2017 as well as a Platinum certification in Operations and Maintenance in 2019, a first for any professional sports stadium. In San Francisco's Presidio, the Letterman Digital Arts Center earned a Gold certification in 2013. It was built almost entirely from the recycled remains of the Letterman Army Hospital, which previously occupied the site. Although originally constructed in 1973, Willis Tower a commercial office building located in Chicago, adopted and implemented a new set of sustainable practices in 2018, earning the property LEED Gold certification under the LEED for Existing Buildings: O&M™ rating system. This adoption earned Willis Tower the ranking of the tallest LEED-certified building in the United States. === Multiple certifications === In September 2012, The Crystal in London became the world's first building awarded LEED Platinum and BREEAM Outstanding status. It generates its own energy using solar power and ground-source heat pumps and utilizes extensive KNX technologies to automate the building's environmental controls. In Pittsburgh, the visitor's center of Phipps Conservatory & Botanical Gardens received Silver certification, its Center for Sustainable Landscapes received a Platinum certification and fulfilled the Living Building Challenge for net-zero energy, and its greenhouse facility received Platinum certification. It may be the only greenhouse in the world to have achieved such a rating. Torre Mayor, at one time the tallest building in Mexico, achieved LEED Gold certification for an existing building and eventually reached Platinum certification under LEED v4.1. The building is designed to withstand 8.5-magnitude earthquakes, and has enhanced many of its systems including air handling and water treatment. In 2017, Kaiser Permanente, the largest integrated health system in the United States, opened California's first LEED Platinum certified hospital, the Kaiser Permanente San Diego Medical Center. By 2020, Kaiser Permanente owned 40 LEED certified buildings. Its construction of LEED buildings was one of multiple initiatives that enabled Kaiser Permanente to report net-zero carbon emissions in 2020. As of 2022, University of California, Irvine had 32 LEED-certified buildings across the campus. 21 were LEED Platinum certified, and 11 were LEED Gold. 
=== Extreme structures === Extreme structures that have received LEED certification include: the Amorepacific Headquarters in Seoul by David Chipperfield Architects; SFMOMA ("Brave New World") by Snøhetta in San Francisco, California; Centro Botín ("UFO in a Sequinned Dress") in Santander, Spain, by Renzo Piano Building Workshop in collaboration with Luis Vidal + Architects; and a "vertical factory" office building in London by Allford Hall Monaghan Morris. == See also == == Notes and references == === Explanatory notes === === Citations === == External links == U.S. Green Building Council (USGBC) About LEED at USGBC LEED rating system at USGBC LEED Project Directory at USGBC GBIG Green Building Information Gateway by USGBC Canada Green Building Council (CGBC) Project Database for CGBC
Wikipedia/Leadership_in_Energy_and_Environmental_Design
Demand controlled ventilation (DCV) is a feedback control method to maintain indoor air quality that automatically adjusts the ventilation rate provided to a space in response to changes in conditions such as occupant number or indoor pollutant concentration. The most common indoor pollutants monitored in DCV systems are carbon dioxide and humidity. This control strategy is mainly intended to reduce the energy used by heating, ventilation, and air conditioning (HVAC) systems compared to those of buildings that use open-loop controls with constant ventilation rates. == When to use DCV == Standard HVAC system design uses fixed airflow rates to calculate the outdoor air (OA) required in a space. These airflow rates are determined by mechanical code and vary based on expected occupancy and space use. This process of supplying fixed airflow to a space ensures that sufficient OA is present in that space when it is occupied. However, such spaces are not always fully occupied; in these cases, energy is wasted as the HVAC system processes more OA than is necessary for the space occupants. Demand controlled ventilation is an attractive alternative to standard design in these situations because DCV systems only supply the outdoor airflow necessary to serve the occupants in a space. Therefore, that energy is not wasted in this system type. == DCV application in different system types == DCV is primarily used in variable-air-volume (VAV) systems. In DCV VAV systems, airflow to a zone is modulated to control the temperature and outdoor airflow to the space. Using the pollutant levels measured in a zone, the system’s controller sets the zone’s minimum airflow requirement to dilute the pollutant concentration. Such a control sequence is supported by a pollutant sensor (e.g. carbon dioxide sensor), a variable frequency drive (VFD) on the fan supplying the zone, individual VAV boxes with reheat serving each space in the zone, and airflow measuring stations. Research has been conducted on the application of DCV in constant-air-volume (CAV) systems. Although CAV systems cannot modulate airflow, researchers have experimented with running CAV system equipment intermittently to reduce energy consumption. In this proposed system, the HVAC equipment runs continuously when the space is occupied, then cycles on and off to maintain indoor air quality while the space is unoccupied. == Carbon dioxide sensing == Carbon dioxide levels measured in a space are commonly used to control DCV systems because CO2 level is generally proportional to the level of bioeffluents, or occupant generated pollutants, in a space. Carbon dioxide sensors are placed strategically, usually in a return duct or on a wall, so that their readings accurately represent conditions in the space. As the sensor reads increasing carbon dioxide levels in a space, the ventilation rate is increased to dilute them. When the space is unoccupied, the sensor reads background levels and the system supplies only the unoccupied airflow rate. This rate is determined by the building owner's standards, the designer, and ASHRAE Standard 62.1.
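A minimal sketch of the feedback logic described above is shown here. It resets the outdoor airflow proportionally between a base (unoccupied) rate and a design rate as the measured CO2 concentration rises from the outdoor background level toward a setpoint; the ppm values and airflow rates are assumptions for illustration, not requirements of ASHRAE 62.1 or of any particular controller.

# Illustrative sketch of CO2-based demand controlled ventilation (DCV).
# Outdoor airflow is reset proportionally between an unoccupied base rate and
# the full design rate as zone CO2 rises from outdoor background to a setpoint.
# All numeric values are hypothetical assumptions.

CO2_OUTDOOR_PPM = 420      # assumed outdoor background concentration
CO2_SETPOINT_PPM = 1000    # assumed control setpoint at full occupancy
BASE_AIRFLOW_LPS = 50      # assumed unoccupied outdoor-air rate, L/s
DESIGN_AIRFLOW_LPS = 500   # assumed design outdoor-air rate at full occupancy, L/s

def outdoor_airflow(co2_ppm: float) -> float:
    """Return the outdoor-air setpoint (L/s) for a measured zone CO2 level."""
    fraction = (co2_ppm - CO2_OUTDOOR_PPM) / (CO2_SETPOINT_PPM - CO2_OUTDOOR_PPM)
    fraction = max(0.0, min(1.0, fraction))  # clamp between 0 and 1
    return BASE_AIRFLOW_LPS + fraction * (DESIGN_AIRFLOW_LPS - BASE_AIRFLOW_LPS)

for reading in (420, 600, 800, 1000, 1200):
    print(f"CO2 = {reading:4d} ppm -> outdoor air = {outdoor_airflow(reading):5.0f} L/s")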
== Codes & standards == Common reference codes and standards for ventilation: International Mechanical Code (IMC) Chapter 4: Ventilation International Organization for Standardization (ISO) International Classification for Standards (ICS) 91.140.30: Ventilation and air-conditioning systems American Society of Heating, Refrigerating, and Air-Conditioning Engineers (ASHRAE) 62.1 & 62.2: The standards for Ventilation and Indoor Air Quality == Examples of estimating occupancy == Ticket sales Timed schedules Positive control gates Motion sensors (various technologies including: Audible sound, inaudible sound, infrared) Gas detection (CO2) In a survey on Norwegian schools, using CO2 sensors for DCV was found to reduce energy consumption by 62% when compared with a constant air volume (CAV) ventilation system. Security equipment data share (including people counting video software) Inference from other system sensors/equipment, like smart meters == See also == Room air distribution == References == == External links ==
Wikipedia/Demand_controlled_ventilation
The caloric theory is an obsolete scientific theory that heat consists of a self-repellent fluid called caloric that flows from hotter bodies to colder bodies. Caloric was also thought of as a weightless gas that could pass in and out of pores in solids and liquids. The "caloric theory" was superseded by the mid-19th century in favor of the mechanical theory of heat, but nevertheless persisted in some scientific literature—particularly in more popular treatments—until the end of the 19th century. == Early history == === Phlogiston theory is replaced by combustion in oxygen === In the history of thermodynamics, the initial explanations of heat were thoroughly confused with explanations of combustion. After J. J. Becher and Georg Ernst Stahl introduced the phlogiston theory of combustion in the 17th century, phlogiston was thought to be the substance of heat. There is one version of the caloric theory that was introduced by Antoine Lavoisier. Prior to Lavoisier's caloric theory, published references concerning heat and its existence, outside of being an agent for chemical reactions, were sparse only having been offered by Joseph Black in Rozier's Journal (1772) citing the melting temperature of ice. In response to Black, Lavoisier's private manuscripts revealed that he had encountered the same phenomena of a fixed melting point for ice and mentioned that he had already formulated an explanation which he had not published as of yet. Lavoisier developed the explanation of combustion in terms of oxygen in the 1770s. === Igneous fluid === On 28 June and 13 July 1783, Lavoisier read his two-part manuscript Reflections on phlogiston (Réflexions sur le phlogistique) at the Royal Academy of Sciences in Paris. In this paper Lavoisier argued that the phlogiston theory was inconsistent with his experimental results, and proposed a 'subtle fluid' he named “igneous fluid” as the substance of heat. Lavoisier argued that this “igneous fluid” is the cause of heat, and that its existence is necessary to explain thermal expansion and contraction. When an ordinary body—solid or fluid—is heated, that body ... occupies a larger and larger volume. If the cause of heating ceases, the body retreats ... at the same rate as it cools. Finally, if it is returned to the same temperature that it had at the first instant, it will clearly return to the same volume as it had before. Hence the corpuscles of matter do not touch each other, there exists between them a distance that heat increases and that cold decreases. One can scarcely conceive of these phenomena except by admitting the existence of a subtle fluid, the accumulation of which is the cause of heat and the absence of which is the cause of coldness. No doubt it is this fluid that lodges between the particles of matter, which spreads them apart and which occupies the space left between them. ... I name this fluid ... igneous fluid, the matter of heat and fire. I do not deny that the existence of this fluid is ... hypothetical. === Caloric === ==== Caloric vs. heat ==== The term “caloric” was not coined until 1787, when Louis-Bernard Guyton de Morveau used calorique in a work he co-edited with Lavoisier. The word “caloric” was first used in English in a 1788 translation of Guyton de Morveau's essay by James St. John. In his influential 1789 textbook Traité Élémentaire de Chimie, Lavoisier clarified the concept of caloric and introduced it to a wider audience. Lavoisier emphasized that caloric was the cause of heat and therefore could not be equated with heat, i.e. 
not be the cause of itself. As for a definition of heat, Lavoisier offered just a simple, dictionary-style explanation: heat ... the sensation which we call warmth being caused by the accumulation of this substance, we cannot, in strict language, distinguish it by the term heat; because the same name would then very improperly express both cause and effect. For this reason, in the memoir which I published in 1777, I gave it the names of igneous fluid and matter of heat. In [Méthode de nomenclature chimique] we have distinguished the cause of heat, or that exquisitely elastic fluid which produces it, by the term of caloric. ==== Caloric theory ==== According to the caloric theory, the quantity of this substance is constant throughout the universe, and it flows from warmer to colder bodies. Indeed, Lavoisier was one of the first to use a calorimeter to measure the heat released during chemical reaction. Lavoisier presented the idea that caloric was a subtle fluid, obeying the common laws of matter, but attenuated to such a degree that it is capable of passing through dense matter without restraint; caloric's own material nature is evident when it is in abundance, such as in the case of an explosion. In the 1780s, Count Rumford believed that cold was a fluid, "frigoric", after the results of Pictet's experiment. Pierre Prévost argued that cold was simply a lack of caloric. Since heat was a material substance in caloric theory, and therefore could neither be created nor destroyed, conservation of heat was a central assumption. Heat conduction was believed to occur as a result of the affinity between caloric and matter: the less caloric a substance possessed, and therefore the colder it was, the more it attracted excess caloric from nearby atoms, until an equilibrium of caloric, and of temperature, was reached. Chemists of the time believed in the self-repulsion of heat particles as a fundamental force, thereby making the great fluid elasticity of caloric, which does not create a repulsive force, an anomalous property which Lavoisier could not explain to his detractors. Radiation of heat was explained by Lavoisier to be concerned with the condition of the surface of a physical body rather than the material of which it was composed. Lavoisier described a poor radiator as a substance with a polished or smooth surface, whose molecules lay closely bound together in a plane, creating a surface layer of caloric that insulated the release of the caloric within. He described a great radiator as a substance with a rough surface, in which only a small number of molecules held caloric within a given plane, allowing for greater escape from within. Count Rumford would later cite this explanation of caloric movement as insufficient to explain the radiation of cold, which became a point of contention for the theory as a whole. The introduction of the caloric theory was influenced by the experiments of Joseph Black related to the thermal properties of materials. Besides the caloric theory, another theory existed in the late eighteenth century that could explain the phenomenon of heat: the kinetic theory. The two theories were considered to be equivalent at the time, but kinetic theory was the more modern one, as it used a few ideas from atomic theory and could explain both combustion and calorimetry. Caloric theory's inability to explain evaporation and sublimation further led to the rise of kinetic theory through the work of Count Rumford.
Count Rumford observed solid mercury's tendency to melt under atmospheric conditions and thus proposed that the intensity of heat itself must stem from particle motion for such an event to occur where great heat was not expected to be. == Successes == Quite a number of successful explanations can be, and were, made from these hypotheses alone. We can explain the cooling of a cup of tea in room temperature: caloric is self-repelling, and thus slowly flows from regions dense in caloric (the hot water) to regions less dense in caloric (the cooler air in the room). We can explain the expansion of air under heat: caloric is absorbed into the air, which increases its volume. If we say a little more about what happens to caloric during this absorption phenomenon, we can explain the radiation of heat, the state changes of matter under various temperatures, and deduce nearly all of the gas laws. Sadi Carnot, who reasoned purely on the basis of the caloric theory, developed his principle of the Carnot cycle, which still forms the basis of heat engine theory. Carnot's analysis of energy flow in steam engines (1824) marks the beginning of ideas which led thirty years later to the recognition of the second law of thermodynamics. Caloric was believed to be capable of entering chemical reactions as a substituent inciting corresponding changes in the matter states of other substances. Lavoisier explained that the caloric quantity of a substance, and by extent the fluid elasticity of caloric, directly determined the state of the substance. Thus, changes in state were a central aspect of a chemical process and essential for a reaction where the substituents undergo changes in temperature. Changes of state had gone virtually ignored by previous chemists making the caloric theory the inception point for this class of phenomena as a subject of interest under scientific inquiry. However, one of the greatest apparent confirmations of the caloric theory was Pierre-Simon Laplace's theoretical correction of Sir Isaac Newton’s calculation of the speed of sound. Newton had assumed an isothermal process, while Laplace, a calorist, treated it as adiabatic. This addition not only substantially corrected the theoretical prediction of the speed of sound, but also continued to make even more accurate predictions for almost a century afterward, even as measurements became more precise. == Later developments == In 1798, Count Rumford published "An Inquiry Concerning the Source of the Heat Which Is Excited by Friction", a report on his investigation of the heat produced while manufacturing cannons. He had found that boring a cannon repeatedly does not result in a loss of its ability to produce heat, and therefore no loss of caloric. This suggested that caloric could not be a conserved "substance", though the experimental uncertainties in his experiment were widely debated. His results were not seen as a "threat" to caloric theory at the time, as this theory was considered to be equivalent to the alternative kinetic theory. In fact, to some of his contemporaries, the results added to the understanding of caloric theory. Rumford's experiment inspired the work of James Prescott Joule and others towards the middle of the 19th century. In 1850, Rudolf Clausius published a paper showing that the two theories were indeed compatible, as long as the calorists' principle of the conservation of heat was replaced by a principle of conservation of energy. 
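The size of Laplace's correction can be checked with a short calculation. Newton's isothermal assumption gives c = sqrt(p/ρ), while the adiabatic treatment gives c = sqrt(γp/ρ); the numbers below use round values for dry air near 0 °C and are meant only as an order-of-magnitude illustration.

# Illustrative check of Newton's isothermal vs Laplace's adiabatic speed of sound.
# Round figures for dry air near 0 degrees Celsius; results are approximate.
from math import sqrt

p = 101_325    # atmospheric pressure, Pa
rho = 1.29     # density of air near 0 C, kg/m^3
gamma = 1.4    # ratio of specific heats for (mostly diatomic) air

c_newton = sqrt(p / rho)           # isothermal assumption
c_laplace = sqrt(gamma * p / rho)  # adiabatic assumption

print(f"Newton (isothermal): {c_newton:.0f} m/s")   # roughly 280 m/s
print(f"Laplace (adiabatic): {c_laplace:.0f} m/s")  # roughly 332 m/s, close to measured values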
Although compatible, however, the two theories differ significantly in their implications. In modern thermodynamics, heat is usually a transfer of kinetic energy of particles (atoms, molecules) from a hotter to a colder substance. When combined with the law of energy conservation, the caloric theory still provides a valuable analogy for some aspects of heat, for example, the emergence of Laplace's equation and Poisson's equation in the problems of spatial distribution of heat and temperature. == See also == Energeticisms == Notes == == References == === Citations === === Sources cited === Best, Nicholas W. (2015). "Lavoisier's "Reflections on phlogiston" II: on the nature of heat". Foundations of Chemistry. 18 (1): 3–13. doi:10.1007/s10698-015-9236-x. ISSN 1386-4238. Page numbers in square brackets are from Lavoisier, A.-L. (1862–93) [1786]. "Réflexions sur le phlogistique, pour servir de suite à la théorie de la combustion et de la calcination, publiée en 1777". In Dumas, J.-B.; Grimaux, E.; Fouqué, F.-A. (eds.). Œuvres de Lavoisier, Vol. II. Imprimerie Impériale. pp. 623–655. The subtitle "On the Nature of Heat" is not part of Lavoisier's original work. Fox, R. (1971). The Caloric Theory of Gases. Clarendon Press: Oxford. Chang, H.S. (2003). "Preservative realism and its discontents: Revisiting caloric" (PDF). Philosophy of Science. 70 (5): 902–912. doi:10.1086/377376. S2CID 40143106. Mendosa, E. (February 1961). "A sketch for a history of early thermodynamics". Physics Today. 14 (2): 32–42. Bibcode:1961PhT....14b..32M. doi:10.1063/1.3057388.
Wikipedia/Caloric_theory
In thermodynamics, entropy is a numerical quantity that shows that many physical processes can go in only one direction in time. For example, cream and coffee can be mixed together, but cannot be "unmixed"; a piece of wood can be burned, but cannot be "unburned". The word 'entropy' has entered popular usage to refer to a lack of order or predictability, or of a gradual decline into disorder. A more physical interpretation of thermodynamic entropy refers to spread of energy or matter, or to extent and diversity of microscopic motion. If a movie that shows coffee being mixed or wood being burned is played in reverse, it would depict processes highly improbable in reality. Mixing coffee and burning wood are "irreversible". Irreversibility is described by a law of nature known as the second law of thermodynamics, which states that in an isolated system (a system not connected to any other system) which is undergoing change, entropy increases over time. Entropy does not increase indefinitely. A body of matter and radiation eventually will reach an unchanging state, with no detectable flows, and is then said to be in a state of thermodynamic equilibrium. Thermodynamic entropy has a definite value for such a body and is at its maximum value. When bodies of matter or radiation, initially in their own states of internal thermodynamic equilibrium, are brought together so as to intimately interact and reach a new joint equilibrium, then their total entropy increases. For example, a glass of warm water with an ice cube in it will have a lower entropy than that same system some time later when the ice has melted leaving a glass of cool water. Such processes are irreversible: A glass of cool water will not spontaneously turn into a glass of warm water with an ice cube in it. Some processes in nature are almost reversible. For example, the orbiting of the planets around the Sun may be thought of as practically reversible: A movie of the planets orbiting the Sun which is run in reverse would not appear to be impossible. While the second law, and thermodynamics in general, accurately predicts the intimate interactions of complex physical systems, scientists are not content with simply knowing how a system behaves; they also want to know why it behaves the way it does. The question of why entropy increases until equilibrium is reached was answered in 1877 by physicist Ludwig Boltzmann. The theory developed by Boltzmann and others is known as statistical mechanics. Statistical mechanics explains thermodynamics in terms of the statistical behavior of the atoms and molecules which make up the system. The theory not only explains thermodynamics, but also a host of other phenomena which are outside the scope of thermodynamics. == Explanation == === Thermodynamic entropy === The concept of thermodynamic entropy arises from the second law of thermodynamics. This law of entropy increase quantifies the reduction in the capacity of an isolated compound thermodynamic system to do thermodynamic work on its surroundings, or indicates whether a thermodynamic process may occur. For example, whenever there is a suitable pathway, heat spontaneously flows from a hotter body to a colder one. Thermodynamic entropy is measured as a change in entropy (ΔS) to a system containing a sub-system which undergoes heat transfer to its surroundings (inside the system of interest).
It is based on the macroscopic relationship between heat flow into the sub-system and the temperature at which it occurs summed over the boundary of that sub-system. Following the formalism of Clausius, the basic calculation can be mathematically stated as: δS = δq/T, where δS is the increase or decrease in entropy, δq is the heat added to the system or subtracted from it, and T is temperature. The 'equals' sign and the symbol δ imply that the heat transfer should be so small and slow that it scarcely changes the temperature T. If the temperature is allowed to vary, the equation must be integrated over the temperature path. This calculation of entropy change does not allow the determination of absolute value, only differences. In this context, the second law of thermodynamics may be stated: for heat transferred over any valid process for any system, whether isolated or not, δS ≥ δq/T. According to the first law of thermodynamics, which deals with the conservation of energy, the loss δq of heat will result in a decrease in the internal energy of the thermodynamic system. Thermodynamic entropy provides a comparative measure of the amount of decrease in internal energy and the corresponding increase in internal energy of the surroundings at a given temperature. In many cases, a visualization of the second law is that energy of all types changes from being localized to becoming dispersed or spread out, if it is not hindered from doing so. When applicable, entropy increase is the quantitative measure of that kind of a spontaneous process: how much energy has been effectively lost or become unavailable, by dispersing itself, or spreading itself out, as assessed at a specific temperature. For this assessment, when the temperature is higher, the amount of energy dispersed is assessed as 'costing' proportionately less. This is because a hotter body is generally more able to do thermodynamic work, other factors, such as internal energy, being equal. This is why a steam engine has a hot firebox. The second law of thermodynamics deals only with changes of entropy (ΔS). The absolute entropy (S) of a system may be determined using the third law of thermodynamics, which specifies that the entropy of all perfectly crystalline substances is zero at the absolute zero of temperature. The entropy at another temperature is then equal to the increase in entropy on heating the system reversibly from absolute zero to the temperature of interest. === Statistical mechanics and information entropy === Thermodynamic entropy bears a close relationship to the concept of information entropy (H). Information entropy is a measure of the "spread" of a probability density or probability mass function. Thermodynamics makes no assumptions about the atomistic nature of matter, but when matter is viewed in this way, as a collection of particles constantly moving and exchanging energy with each other, and which may be described in a probabilistic manner, information theory may be successfully applied to explain the results of thermodynamics. The resulting theory is known as statistical mechanics. An important concept in statistical mechanics is the idea of the microstate and the macrostate of a system.
If we have a container of gas, for example, and we know the position and velocity of every molecule in that system, then we know the microstate of that system. If we only know the thermodynamic description of that system, the pressure, volume, temperature, and/or the entropy, then we know the macrostate of that system. Boltzmann realized that there are many different microstates that can yield the same macrostate, and, because the particles are colliding with each other and changing their velocities and positions, the microstate of the gas is always changing. But if the gas is in equilibrium, there seems to be no change in its macroscopic behavior: No changes in pressure, temperature, etc. Statistical mechanics relates the thermodynamic entropy of a macrostate to the number of microstates that could yield that macrostate. In statistical mechanics, the entropy of the system is given by Ludwig Boltzmann's equation: S = k_B ln W, where S is the thermodynamic entropy, W is the number of microstates that may yield the macrostate, and k_B is the Boltzmann constant. The natural logarithm of the number of microstates (ln W) is known as the information entropy of the system. This can be illustrated by a simple example: If you flip two coins, you can have four different results. If H is heads and T is tails, we can have (H,H), (H,T), (T,H), and (T,T). We can call each of these a "microstate" for which we know exactly the results of the process. But what if we have less information? Suppose we only know the total number of heads? This can be either 0, 1, or 2. We can call these "macrostates". Only microstate (T,T) will give macrostate zero, (H,T) and (T,H) will give macrostate 1, and only (H,H) will give macrostate 2. So we can say that the information entropy of macrostates 0 and 2 is ln(1), which is zero, but the information entropy of macrostate 1 is ln(2), which is about 0.69. Of all the microstates, macrostate 1 accounts for half of them. It turns out that if you flip a large number of coins, the macrostates at or near half heads and half tails account for almost all of the microstates. In other words, for a million coins, you can be fairly sure that about half will be heads and half tails. The macrostates around a 50–50 ratio of heads to tails will be the "equilibrium" macrostate. A real physical system in equilibrium has a huge number of possible microstates and almost all of them belong to the equilibrium macrostate, and that is the macrostate you will almost certainly see if you wait long enough. In the coin example, if you start out with a very unlikely macrostate (like all heads, for example, which has zero entropy) and begin flipping one coin at a time, the entropy of the macrostate will start increasing, just as thermodynamic entropy does, and after a while, the coins will most likely be at or near that 50–50 macrostate, which has the greatest information entropy – the equilibrium entropy. The macrostate of a system is what we know about the system, for example the temperature, pressure, and volume of a gas in a box. For each set of values of temperature, pressure, and volume there are many arrangements of molecules which result in those values. The number of arrangements of molecules which could result in the same values for temperature, pressure and volume is the number of microstates.
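The coin-flip example can be made concrete with a short enumeration. The sketch below counts the microstates behind each macrostate (the number of heads) and reports ln W for each; it is only an illustration of the counting argument, not a thermodynamic calculation.

# Illustrative sketch: microstates, macrostates, and ln W for coin flips.
# Each sequence of heads/tails is a microstate; the total number of heads
# is the macrostate. W is the number of microstates per macrostate.
from itertools import product
from math import log
from collections import Counter

def macrostate_counts(n_coins: int) -> Counter:
    """Count microstates (H/T sequences) for each macrostate (number of heads)."""
    microstates = product("HT", repeat=n_coins)
    return Counter(state.count("H") for state in microstates)

for n in (2, 10):
    counts = macrostate_counts(n)
    print(f"{n} coins:")
    for heads, w in sorted(counts.items()):
        print(f"  {heads:2d} heads: W = {w:4d}, ln W = {log(w):.2f}")
# For 2 coins this reproduces the values in the text (W = 1, 2, 1);
# for 10 coins the macrostates near 5 heads already dominate the count.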
The concept of information entropy has been developed to describe any of several phenomena, depending on the field and the context in which it is being used. When it is applied to the problem of a large number of interacting particles, along with some other constraints, like the conservation of energy, and the assumption that all microstates are equally likely, the resultant theory of statistical mechanics is extremely successful in explaining the laws of thermodynamics. == Example of increasing entropy == Ice melting provides an example in which entropy increases in a small system, a thermodynamic system consisting of the surroundings (the warm room) and the entity of glass container, ice and water which has been allowed to reach thermodynamic equilibrium at the melting temperature of ice. In this system, some heat (δQ) from the warmer surroundings at 298 K (25 °C; 77 °F) transfers to the cooler system of ice and water at its constant temperature (T) of 273 K (0 °C; 32 °F), the melting temperature of ice. The entropy of the system, which is ⁠δQ/T⁠, increases by ⁠δQ/273 K⁠. The heat δQ for this process is the energy required to change water from the solid state to the liquid state, and is called the enthalpy of fusion, i.e. ΔH for ice fusion. The entropy of the surrounding room decreases less than the entropy of the ice and water increases: the room temperature of 298 K is larger than 273 K and therefore the ratio, (entropy change), of ⁠δQ/298 K⁠ for the surroundings is smaller than the ratio (entropy change), of ⁠δQ/273 K⁠ for the ice and water system. This is always true in spontaneous events in a thermodynamic system and it shows the predictive importance of entropy: the final net entropy after such an event is always greater than was the initial entropy. As the temperature of the cool water rises to that of the room and the room further cools imperceptibly, the sum of the ⁠δQ/T⁠ over the continuous range, "at many increments", in the initially cool to finally warm water can be found by calculus. The entire miniature 'universe', i.e. this thermodynamic system, has increased in entropy. Energy has spontaneously become more dispersed and spread out in that 'universe' than when the glass of ice and water was introduced and became a 'system' within it. == Origins and uses == Originally, entropy was named to describe the "waste heat", or more accurately, energy loss, from heat engines and other mechanical devices which could never run with 100% efficiency in converting energy into work. Later, the term came to acquire several additional descriptions, as more was understood about the behavior of molecules on the microscopic level. In the late 19th century, the word "disorder" was used by Ludwig Boltzmann in developing statistical views of entropy using probability theory to describe the increased molecular movement on the microscopic level. That was before quantum behavior came to be better understood by Werner Heisenberg and those who followed. Descriptions of thermodynamic (heat) entropy on the microscopic level are found in statistical thermodynamics and statistical mechanics. For most of the 20th century, textbooks tended to describe entropy as "disorder", following Boltzmann's early conceptualisation of the "motional" (i.e. kinetic) energy of molecules. More recently, there has been a trend in chemistry and physics textbooks to describe entropy as energy dispersal. Entropy can also involve the dispersal of particles, which are themselves energetic. 
Thus there are instances where both particles and energy disperse at different rates when substances are mixed together. The mathematics developed in statistical thermodynamics were found to be applicable in other disciplines. In particular, information sciences developed the concept of information entropy, which lacks the Boltzmann constant inherent in thermodynamic entropy. === Classical calculation of entropy === When the word 'entropy' was first defined and used in 1865, the very existence of atoms was still controversial, though it had long been speculated that temperature was due to the motion of microscopic constituents and that "heat" was the transferring of that motion from one place to another. Entropy change, ΔS, was described in macroscopic terms that could be directly measured, such as volume, temperature, or pressure. However, today the classical equation of entropy, ΔS = q_rev/T, can be explained, part by part, in modern terms describing how molecules are responsible for what is happening: ΔS is the change in entropy of a system (some physical substance of interest) after some motional energy ("heat") has been transferred to it by fast-moving molecules. So, ΔS = S_final − S_initial. Then, ΔS = S_final − S_initial = q_rev/T, the quotient of the motional energy ("heat") q that is transferred "reversibly" (rev) to the system from the surroundings (or from another system in contact with the first system) divided by T, the absolute temperature at which the transfer occurs. "Reversible" or "reversibly" (rev) simply means that T, the temperature of the system, has to stay (almost) exactly the same while any energy is being transferred to or from it. That is easy in the case of phase changes, where the system absolutely must stay in the solid or liquid form until enough energy is given to it to break bonds between the molecules before it can change to a liquid or a gas. For example, in the melting of ice at 273.15 K, no matter what temperature the surroundings are – from 273.20 K to 500 K or even higher, the temperature of the ice will stay at 273.15 K until the last molecules in the ice are changed to liquid water, i.e., until all the hydrogen bonds between the water molecules in ice are broken and new, less-exactly fixed hydrogen bonds between liquid water molecules are formed. This amount of energy necessary for ice melting per mole has been found to be 6008 J at 273 K. Therefore, the entropy change per mole is q_rev/T = 6008 J / 273 K, or 22 J/K. When the temperature is not at the melting or boiling point of a substance no intermolecular bond-breaking is possible, and so any motional molecular energy ("heat") from the surroundings transferred to a system raises its temperature, making its molecules move faster and faster. As the temperature is constantly rising, there is no longer a particular value of "T" at which energy is transferred. However, a "reversible" energy transfer can be measured at a very small temperature increase, and a cumulative total can be found by adding each of many small temperature intervals or increments.
For example, to find the entropy change {\displaystyle {\frac {q_{\mathrm {rev} }}{T}}} from 300 K to 310 K, measure the amount of energy transferred at dozens or hundreds of temperature increments, say from 300.00 K to 300.01 K and then 300.01 K to 300.02 K and so on, dividing the q by each T, and finally adding them all. Calculus can be used to make this calculation easier if the effect of energy input to the system is linearly dependent on the temperature change, as in simple heating of a system at moderate to relatively high temperatures. Thus, the energy being transferred "per incremental change in temperature" (the heat capacity, {\displaystyle C_{P}}), multiplied by the integral of {\displaystyle {\frac {dT}{T}}} from {\displaystyle T_{\mathrm {initial} }} to {\displaystyle T_{\mathrm {final} }}, gives the entropy change directly: {\displaystyle \Delta S=C_{P}\ln {\frac {T_{\mathrm {final} }}{T_{\mathrm {initial} }}}}. == Alternate explanations of entropy == === Thermodynamic entropy === A measure of energy unavailable for work: This is an often-repeated phrase which, although it is true, requires considerable clarification to be understood. It is only true for cyclic reversible processes, and is in this sense misleading. By "work" is meant moving an object, for example, lifting a weight, or bringing a flywheel up to speed, or carrying a load up a hill. To convert heat into work, using a coal-burning steam engine, for example, one must have two systems at different temperatures, and the amount of work that can be extracted depends on how large the temperature difference is, and how large the systems are. If one of the systems is at room temperature, and the other system is much larger, and near absolute zero temperature, then almost ALL of the energy of the room temperature system can be converted to work. If they are both at the same room temperature, then NONE of the energy of the room temperature system can be converted to work. Entropy is then a measure of how much energy cannot be converted to work, given these conditions. More precisely, for an isolated system comprising two closed systems at different temperatures, in the process of reaching equilibrium the amount of entropy lost by the hot system, multiplied by the temperature of the hot system, is the amount of energy that cannot be converted to work. An indicator of irreversibility: fitting closely with the 'unavailability of energy' interpretation is the 'irreversibility' interpretation. Spontaneous thermodynamic processes are irreversible, in the sense that they do not spontaneously undo themselves. Thermodynamic processes artificially imposed by agents in the surroundings of a body also have irreversible effects on the body. For example, when James Prescott Joule used a device that delivered a measured amount of mechanical work from the surroundings through a paddle that stirred a body of water, the energy transferred was received by the water as heat. There was scarcely any expansion of the water doing thermodynamic work back on the surroundings. The body of water showed no sign of returning the energy by stirring the paddle in reverse. The work transfer appeared as heat, and was not recoverable without a suitably cold reservoir in the surroundings. Entropy gives a precise account of such irreversibility. Dispersal: Edward A.
Guggenheim proposed an ordinary language interpretation of entropy that may be rendered as "dispersal of modes of microscopic motion throughout their accessible range". Later, along with a criticism of the idea of entropy as 'disorder', the dispersal interpretation was advocated by Frank L. Lambert, and is used in some student textbooks. The interpretation properly refers to dispersal in abstract microstate spaces, but it may be loosely visualised in some simple examples of spatial spread of matter or energy. If a partition is removed from between two different gases, the molecules of each gas spontaneously disperse as widely as possible into their respective newly accessible volumes; this may be thought of as mixing. If a partition that blocks heat transfer between two bodies of different temperatures is removed so that heat can pass between the bodies, then energy spontaneously disperses or spreads as heat from the hotter to the colder. Beyond such loose visualizations, in a general thermodynamic process, considered microscopically, spontaneous dispersal occurs in abstract microscopic phase space. According to Newton's and other laws of motion, phase space provides a systematic scheme for the description of the diversity of microscopic motion that occurs in bodies of matter and radiation. The second law of thermodynamics may be regarded as quantitatively accounting for the intimate interactions, dispersal, or mingling of such microscopic motions. In other words, entropy may be regarded as measuring the extent of diversity of motions of microscopic constituents of bodies of matter and radiation in their own states of internal thermodynamic equilibrium. === Information entropy and statistical mechanics === As a measure of disorder: Traditionally, 20th-century textbooks introduced entropy in terms of order and disorder, so that it provides "a measurement of the disorder or randomness of a system". It has been argued that ambiguities in, and arbitrary interpretations of, the terms used (such as "disorder" and "chaos") contribute to widespread confusion and can hinder comprehension of entropy for most students. On the other hand, in a convenient though arbitrary interpretation, "disorder" may be sharply defined as the Shannon entropy of the probability distribution of microstates given a particular macrostate, in which case the connection of "disorder" to thermodynamic entropy is straightforward, but arbitrary and not immediately obvious to anyone unfamiliar with information theory. Missing information: The idea that information entropy is a measure of how much one does not know about a system is quite useful. If, instead of using the natural logarithm to define information entropy, we instead use the base 2 logarithm, then the information entropy is roughly equal to the average number of (carefully chosen) yes/no questions that would have to be asked to get complete information about the system under study. In the introductory example of two flipped coins, consider the macrostate which contains one head and one tail: one would only need one question to determine its exact state (e.g., "is the first one heads?"), and instead of expressing the entropy as ln(2) one could say, equivalently, that it is log2(2), which equals the number of binary questions we would need to ask: one.
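A minimal sketch of that two-coin count in Python follows; the enumeration of microstates is the only input, and base-2 logarithms give the answer directly in yes/no questions:

```python
from itertools import product
from math import log2

microstates = list(product("HT", repeat=2))                 # HH, HT, TH, TT
one_head = [m for m in microstates if m.count("H") == 1]    # macrostate: one head, one tail

# Shannon entropy of the uniform distribution over the macrostate's microstates
p = 1.0 / len(one_head)
H = -sum(p * log2(p) for _ in one_head)
print(H)   # 1.0 shannon: one well-chosen yes/no question pins down the microstate
```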
When measuring entropy using the natural logarithm (ln), the unit of information entropy is called a "nat", but when it is measured using the base-2 logarithm, the unit of information entropy is called a "shannon" (alternatively, "bit"). This is just a difference in units, much like the difference between inches and centimeters. (1 nat = log2e shannons). Thermodynamic entropy is equal to the Boltzmann constant times the information entropy expressed in nats. The information entropy expressed with the unit shannon (Sh) is equal to the number of yes–no questions that need to be answered to determine the microstate from the macrostate. The concepts of "disorder" and "spreading" can be analyzed with this information entropy concept in mind. For example, if we take a new deck of cards out of the box, it is arranged in "perfect order" (spades, hearts, diamonds, clubs, each suit beginning with the ace and ending with the king), we may say that we then have an "ordered" deck with an information entropy of zero. If we thoroughly shuffle the deck, the information entropy will be about 225.6 shannons: We will need to ask about 225.6 questions, on average, to determine the exact order of the shuffled deck. We can also say that the shuffled deck has become completely "disordered" or that the ordered cards have been "spread" throughout the deck. But information entropy does not say that the deck needs to be ordered in any particular way. If we take our shuffled deck and write down the names of the cards, in order, then the information entropy becomes zero. If we again shuffle the deck, the information entropy would again be about 225.6 shannons, even if by some miracle it reshuffled to the same order as when it came out of the box, because even if it did, we would not know that. So the concept of "disorder" is useful if, by order, we mean maximal knowledge and by disorder we mean maximal lack of knowledge. The "spreading" concept is useful because it gives a feeling to what happens to the cards when they are shuffled. The probability of a card being in a particular place in an ordered deck is either 0 or 1, in a shuffled deck it is 1/52. The probability has "spread out" over the entire deck. Analogously, in a physical system, entropy is generally associated with a "spreading out" of mass or energy. The connection between thermodynamic entropy and information entropy is given by Boltzmann's equation, which says that S = kB ln W. If we take the base-2 logarithm of W, it will yield the average number of questions we must ask about the microstate of the physical system to determine its macrostate. == See also == Entropy (classical thermodynamics) Entropy (energy dispersal) Second law of thermodynamics Statistical mechanics Thermodynamics List of textbooks on thermodynamics and statistical mechanics == References == == Further reading == Goldstein, Martin and Inge F. (1993). The Refrigerator and the Universe: Understanding the Laws of Energy. Harvard Univ. Press. ISBN 9780674753259. Chapters 4–12 touch on entropy.
Wikipedia/Introduction_to_entropy
HVAC (Heating, Ventilation and Air Conditioning) equipment needs a control system to regulate the operation of a heating and/or air conditioning system. Usually a sensing device is used to compare the actual state (e.g. temperature) with a target state. The control system then decides what action should be taken (e.g. start the blower). == Direct digital control == Central controllers and most terminal unit controllers are programmable, meaning the direct digital control program code may be customized for the intended use. The program features include time schedules, set points, controllers, logic, timers, trend logs, and alarms. The unit controllers typically have analog and digital inputs that allow measurement of the variable (temperature, humidity, or pressure) and analog and digital outputs for control of the transport medium (hot/cold water and/or steam). Digital inputs are typically (dry) contacts from a control device, and analog inputs are typically a voltage or current measurement from a variable (temperature, humidity, velocity, or pressure) sensing device. Digital outputs are typically relay contacts used to start and stop equipment, and analog outputs are typically voltage or current signals to control the movement of the medium (air/water/steam) control devices such as valves, dampers, and motors. Groups of DDC controllers, networked or not, form a layer of systems themselves. This "subsystem" is vital to the performance and basic operation of the overall HVAC system. The DDC system is the "brain" of the HVAC system. It dictates the position of every damper and valve in a system. It determines which fans, pumps, and chillers run and at what speed or capacity. This configurable intelligence in the "brain" leads toward the concept of building automation. == Building automation system == More complex HVAC systems can interface to a Building Automation System (BAS) to allow the building owners to have more control over the heating or cooling units. The building owner can monitor the system and respond to alarms generated by the system from local or remote locations. The system can be scheduled for occupancy, or its configuration can be changed from the BAS. Sometimes the BAS directly controls the HVAC components. Depending on the BAS, different interfaces can be used. Today, there are also dedicated gateways that connect advanced VRV / VRF and split HVAC systems with home automation and BMS (Building Management System) controllers for centralized control and monitoring, obviating the need to purchase more complex and expensive HVAC systems. In addition, such gateway solutions are capable of providing remote control of all HVAC indoor units over the internet, with a simple and friendly user interface. == Cost and efficiency == Many people do not have a heating, ventilation, and air conditioning (HVAC) system in their homes because it is considered too expensive. However, according to the article Save Money Through Energy Efficiency, HVAC is not as expensive as one may think. Many of the newer systems carry a yellow energy guide sticker that displays the average cost of running that machine. Although some buyers choose not to believe the sticker, assuming it is only there to help with sales, experience shows that many of the newer HVAC systems bearing the yellow energy guide sticker save customers hundreds to thousands of dollars, depending on how much they use their HVAC system.
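Returning to the direct digital control logic described earlier in this article, the core of a thermostat loop can be sketched as follows; the 21 °C set point, the 0.5 °C deadband and the function names are illustrative assumptions rather than any particular product's interface:

```python
SETPOINT = 21.0   # °C, target temperature (assumed value)
DEADBAND = 0.5    # °C, hysteresis band to avoid rapid on/off cycling (assumed value)

def control_step(current_temp, heating_on):
    """One pass of a simple heating thermostat with a deadband."""
    if current_temp < SETPOINT - DEADBAND:
        return True            # too cold: start the burner/blower
    if current_temp > SETPOINT + DEADBAND:
        return False           # warm enough: stop
    return heating_on          # inside the deadband: keep the previous state

# Example: the controller switches on at 20.4 °C and stays on at 20.8 °C
state = control_step(20.4, heating_on=False)
state = control_step(20.8, heating_on=state)
print(state)   # True
```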
If an HVAC system is only put into use during specific times of year, it should still be run monthly: it is recommended that a system which is not in regular use be turned on and left running for ten to fifteen minutes each month. If, on the other hand, the customer runs the HVAC system frequently, regular maintenance is important. Maintenance on an HVAC system includes changing the air filter, inspecting the areas where air intake takes place, and checking for leaks. These three steps are essential to keeping an HVAC system running for a long time and should be carried out every couple of months, or whenever a problem with the system is suspected. One sign of a potential problem is that the system no longer provides sufficiently cool air, which can be due to a leak of cooling fluid. Another sign that the system is not running properly is a bad smell in the air it supplies, which often means that the air filters need to be replaced. Changing the air filters on an HVAC system is important because they are exposed to a great deal of dust, which can build up even while the system is simply sitting in one's home. == Goals of HVAC system installation == Goal 1: Keep HVAC equipment and materials dry during construction and provide temperature and humidity control as required during the close-in phase of construction. Goal 2: Install HVAC systems to effectively implement moisture control as specified in the design drawings and specifications. Goal 3: Prepare operation and maintenance materials for continued performance of HVAC system moisture control. == Design, modeling, and marketing == Most HVAC systems are used for the same purpose but are designed differently. All HVAC systems have an intake, an air filter, and air conditioning liquid. However, when designing HVAC systems, engineers typically design them for a specific setting and/or purpose. They try to make the system compact while still performing at the highest level, and experiment with different ways to make HVAC systems as efficient as possible. == History == The first HVAC controllers utilized pneumatic controls, since engineers understood fluid control. Thus, the properties of steam and air were used to control the flow of heated or cooled air via mechanically controlled logic. After the control of air flow and temperature was standardized, the use of electromechanical relays in ladder logic to switch dampers became standardized. Eventually, the relays became electronic switches, as transistors could by then handle greater current loads. By 1985, pneumatic controls could no longer compete with this new technology, although pneumatic control systems (sometimes decades old) are still common in many older buildings. By the year 2000, computerized controllers were common. Today, some of these controllers can even be accessed by web browsers, which need no longer be in the same building as the HVAC equipment. This allows some economies of scale, as a single operations center can easily monitor multiple buildings. == See also == American Society of Heating, Refrigerating and Air-Conditioning Engineers BACnet Building Automation OpenTherm == References ==
Wikipedia/HVAC_control_system
In thermodynamics, Bridgman's thermodynamic equations are a basic set of thermodynamic equations, derived using a method of generating multiple thermodynamic identities involving a number of thermodynamic quantities. The equations are named after the American physicist Percy Williams Bridgman. (See also the exact differential article for general differential relationships). The extensive variables of the system are fundamental. Only the entropy S, the volume V and the four most common thermodynamic potentials will be considered. The four most common thermodynamic potentials are the internal energy U, the enthalpy H, the Helmholtz free energy A and the Gibbs free energy G. The first derivatives of the internal energy with respect to its (extensive) natural variables S and V yield the intensive parameters of the system: the pressure P and the temperature T. For a simple system in which the particle numbers are constant, the second derivatives of the thermodynamic potentials can all be expressed in terms of only three material properties (the coefficient of thermal expansion, the isothermal compressibility, and the heat capacity at constant pressure, defined below). Bridgman's equations are a series of relationships between all of the above quantities. == Introduction == Many thermodynamic equations are expressed in terms of partial derivatives. For example, the expression for the heat capacity at constant pressure is: {\displaystyle C_{P}=\left({\frac {\partial H}{\partial T}}\right)_{P}} which is the partial derivative of the enthalpy with respect to temperature while holding pressure constant. We may write this equation as: {\displaystyle C_{P}={\frac {(\partial H)_{P}}{(\partial T)_{P}}}} This method of rewriting the partial derivative was described by Bridgman (and also Lewis & Randall), and allows the use of the following collection of expressions to express many thermodynamic equations. For example, from the equations below we have: {\displaystyle (\partial H)_{P}=C_{P}} and {\displaystyle (\partial T)_{P}=1} Dividing, we recover the proper expression for C_P. The following summary restates various partial terms in terms of the thermodynamic potentials, the state parameters S, T, P, V, and the following three material properties, which are easily measured experimentally: {\displaystyle \left({\frac {\partial V}{\partial T}}\right)_{P}=\alpha V} {\displaystyle \left({\frac {\partial V}{\partial P}}\right)_{T}=-\beta _{T}V} {\displaystyle \left({\frac {\partial H}{\partial T}}\right)_{P}=C_{P}=c_{P}N} == Bridgman's thermodynamic equations == Note that Lewis and Randall use F and E for the Gibbs energy and internal energy, respectively, rather than G and U, which are used in this article.
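Before the full table, here is a hedged sketch of how any one of the identities can be checked against a concrete equation of state. It uses SymPy and the ideal gas law, an assumption made only for this example, to confirm that the table entry (∂H)_T = −V + T(∂V/∂T)_P vanishes, i.e. that an ideal gas's enthalpy does not depend on pressure:

```python
import sympy as sp

T, P, n, R = sp.symbols("T P n R", positive=True)
V = n * R * T / P                 # ideal-gas equation of state (illustrative assumption)

dV_dT_at_const_P = sp.diff(V, T)  # (∂V/∂T)_P
dH_T = -V + T * dV_dT_at_const_P  # Bridgman entry for (∂H)_T

print(sp.simplify(dH_T))          # 0: the derivative of H with respect to P at constant T vanishes
```

The same pattern, substituting a different equation of state and a different entry from the table below, works for any of the identities.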
( ∂ T ) P = − ( ∂ P ) T = 1 {\displaystyle (\partial T)_{P}=-(\partial P)_{T}=1} ( ∂ V ) P = − ( ∂ P ) V = ( ∂ V ∂ T ) P {\displaystyle (\partial V)_{P}=-(\partial P)_{V}=\left({\frac {\partial V}{\partial T}}\right)_{P}} ( ∂ S ) P = − ( ∂ P ) S = C p T {\displaystyle (\partial S)_{P}=-(\partial P)_{S}={\frac {C_{p}}{T}}} ( ∂ U ) P = − ( ∂ P ) U = C P − P ( ∂ V ∂ T ) P {\displaystyle (\partial U)_{P}=-(\partial P)_{U}=C_{P}-P\left({\frac {\partial V}{\partial T}}\right)_{P}} ( ∂ H ) P = − ( ∂ P ) H = C P {\displaystyle (\partial H)_{P}=-(\partial P)_{H}=C_{P}} ( ∂ G ) P = − ( ∂ P ) G = − S {\displaystyle (\partial G)_{P}=-(\partial P)_{G}=-S} ( ∂ A ) P = − ( ∂ P ) A = − S − P ( ∂ V ∂ T ) P {\displaystyle (\partial A)_{P}=-(\partial P)_{A}=-S-P\left({\frac {\partial V}{\partial T}}\right)_{P}} ( ∂ V ) T = − ( ∂ T ) V = − ( ∂ V ∂ P ) T {\displaystyle (\partial V)_{T}=-(\partial T)_{V}=-\left({\frac {\partial V}{\partial P}}\right)_{T}} ( ∂ S ) T = − ( ∂ T ) S = ( ∂ V ∂ T ) P {\displaystyle (\partial S)_{T}=-(\partial T)_{S}=\left({\frac {\partial V}{\partial T}}\right)_{P}} ( ∂ U ) T = − ( ∂ T ) U = T ( ∂ V ∂ T ) P + P ( ∂ V ∂ P ) T {\displaystyle (\partial U)_{T}=-(\partial T)_{U}=T\left({\frac {\partial V}{\partial T}}\right)_{P}+P\left({\frac {\partial V}{\partial P}}\right)_{T}} ( ∂ H ) T = − ( ∂ T ) H = − V + T ( ∂ V ∂ T ) P {\displaystyle (\partial H)_{T}=-(\partial T)_{H}=-V+T\left({\frac {\partial V}{\partial T}}\right)_{P}} ( ∂ G ) T = − ( ∂ T ) G = − V {\displaystyle (\partial G)_{T}=-(\partial T)_{G}=-V} ( ∂ A ) T = − ( ∂ T ) A = P ( ∂ V ∂ P ) T {\displaystyle (\partial A)_{T}=-(\partial T)_{A}=P\left({\frac {\partial V}{\partial P}}\right)_{T}} ( ∂ S ) V = − ( ∂ V ) S = C P T ( ∂ V ∂ P ) T + ( ∂ V ∂ T ) P 2 {\displaystyle (\partial S)_{V}=-(\partial V)_{S}={\frac {C_{P}}{T}}\left({\frac {\partial V}{\partial P}}\right)_{T}+\left({\frac {\partial V}{\partial T}}\right)_{P}^{2}} ( ∂ U ) V = − ( ∂ V ) U = C P ( ∂ V ∂ P ) T + T ( ∂ V ∂ T ) P 2 {\displaystyle (\partial U)_{V}=-(\partial V)_{U}=C_{P}\left({\frac {\partial V}{\partial P}}\right)_{T}+T\left({\frac {\partial V}{\partial T}}\right)_{P}^{2}} ( ∂ H ) V = − ( ∂ V ) H = C P ( ∂ V ∂ P ) T + T ( ∂ V ∂ T ) P 2 − V ( ∂ V ∂ T ) P {\displaystyle (\partial H)_{V}=-(\partial V)_{H}=C_{P}\left({\frac {\partial V}{\partial P}}\right)_{T}+T\left({\frac {\partial V}{\partial T}}\right)_{P}^{2}-V\left({\frac {\partial V}{\partial T}}\right)_{P}} ( ∂ G ) V = − ( ∂ V ) G = − V ( ∂ V ∂ T ) P − S ( ∂ V ∂ P ) T {\displaystyle (\partial G)_{V}=-(\partial V)_{G}=-V\left({\frac {\partial V}{\partial T}}\right)_{P}-S\left({\frac {\partial V}{\partial P}}\right)_{T}} ( ∂ A ) V = − ( ∂ V ) A = − S ( ∂ V ∂ P ) T {\displaystyle (\partial A)_{V}=-(\partial V)_{A}=-S\left({\frac {\partial V}{\partial P}}\right)_{T}} ( ∂ U ) S = − ( ∂ S ) U = P C P T ( ∂ V ∂ P ) T + P ( ∂ V ∂ T ) P 2 {\displaystyle (\partial U)_{S}=-(\partial S)_{U}={\frac {PC_{P}}{T}}\left({\frac {\partial V}{\partial P}}\right)_{T}+P\left({\frac {\partial V}{\partial T}}\right)_{P}^{2}} ( ∂ H ) S = − ( ∂ S ) H = − V C P T {\displaystyle (\partial H)_{S}=-(\partial S)_{H}=-{\frac {VC_{P}}{T}}} ( ∂ G ) S = − ( ∂ S ) G = − V C P T + S ( ∂ V ∂ T ) P {\displaystyle (\partial G)_{S}=-(\partial S)_{G}=-{\frac {VC_{P}}{T}}+S\left({\frac {\partial V}{\partial T}}\right)_{P}} ( ∂ A ) S = − ( ∂ S ) A = P C P T ( ∂ V ∂ P ) T + P ( ∂ V ∂ T ) P 2 + S ( ∂ V ∂ T ) P {\displaystyle (\partial A)_{S}=-(\partial S)_{A}={\frac {PC_{P}}{T}}\left({\frac {\partial V}{\partial P}}\right)_{T}+P\left({\frac 
{\partial V}{\partial T}}\right)_{P}^{2}+S\left({\frac {\partial V}{\partial T}}\right)_{P}} ( ∂ H ) U = − ( ∂ U ) H = − V C P + P V ( ∂ V ∂ T ) P − P C P ( ∂ V ∂ P ) T − P T ( ∂ V ∂ T ) P 2 {\displaystyle (\partial H)_{U}=-(\partial U)_{H}=-VC_{P}+PV\left({\frac {\partial V}{\partial T}}\right)_{P}-PC_{P}\left({\frac {\partial V}{\partial P}}\right)_{T}-PT\left({\frac {\partial V}{\partial T}}\right)_{P}^{2}} ( ∂ G ) U = − ( ∂ U ) G = − V C P + P V ( ∂ V ∂ T ) P + S T ( ∂ V ∂ T ) P + S P ( ∂ V ∂ P ) T {\displaystyle (\partial G)_{U}=-(\partial U)_{G}=-VC_{P}+PV\left({\frac {\partial V}{\partial T}}\right)_{P}+ST\left({\frac {\partial V}{\partial T}}\right)_{P}+SP\left({\frac {\partial V}{\partial P}}\right)_{T}} ( ∂ A ) U = − ( ∂ U ) A = P ( C P + S ) ( ∂ V ∂ P ) T + P T ( ∂ V ∂ T ) P 2 + S T ( ∂ V ∂ T ) P {\displaystyle (\partial A)_{U}=-(\partial U)_{A}=P(C_{P}+S)\left({\frac {\partial V}{\partial P}}\right)_{T}+PT\left({\frac {\partial V}{\partial T}}\right)_{P}^{2}+ST\left({\frac {\partial V}{\partial T}}\right)_{P}} ( ∂ G ) H = − ( ∂ H ) G = − V ( C P + S ) + T S ( ∂ V ∂ T ) P {\displaystyle (\partial G)_{H}=-(\partial H)_{G}=-V(C_{P}+S)+TS\left({\frac {\partial V}{\partial T}}\right)_{P}} ( ∂ A ) H = − ( ∂ H ) A = − [ S + P ( ∂ V ∂ T ) P ] [ V − T ( ∂ V ∂ T ) P ] + P C P ( ∂ V ∂ P ) T {\displaystyle (\partial A)_{H}=-(\partial H)_{A}=-\left[S+P\left({\frac {\partial V}{\partial T}}\right)_{P}\right]\left[V-T\left({\frac {\partial V}{\partial T}}\right)_{P}\right]+PC_{P}\left({\frac {\partial V}{\partial P}}\right)_{T}} ( ∂ A ) G = − ( ∂ G ) A = − S [ V + P ( ∂ V ∂ P ) T ] − P V ( ∂ V ∂ T ) P {\displaystyle (\partial A)_{G}=-(\partial G)_{A}=-S\left[V+P\left({\frac {\partial V}{\partial P}}\right)_{T}\right]-PV\left({\frac {\partial V}{\partial T}}\right)_{P}} == See also == Table of thermodynamic equations Exact differential == References == Bridgman, P.W. (1914). "A Complete Collection of Thermodynamic Formulas". Physical Review. 3 (4): 273–281. Bibcode:1914PhRv....3..273B. doi:10.1103/PhysRev.3.273. Lewis, G.N.; Randall, M. (1961). Thermodynamics (2nd ed.). New York: McGraw-Hill Book Company.
Wikipedia/Bridgman's_thermodynamic_equations
In philosophy, the philosophy of physics deals with conceptual and interpretational issues in physics, many of which overlap with research done by certain kinds of theoretical physicists. Historically, philosophers of physics have engaged with questions such as the nature of space, time, matter and the laws that govern their interactions, as well as the epistemological and ontological basis of the theories used by practicing physicists. The discipline draws upon insights from various areas of philosophy, including metaphysics, epistemology, and philosophy of science, while also engaging with the latest developments in theoretical and experimental physics. Contemporary work focuses on issues at the foundations of the three pillars of modern physics: Quantum mechanics: Interpretations of quantum theory, including the nature of quantum states, the measurement problem, and the role of observers. Implications of entanglement, nonlocality, and the quantum-classical relationship are also explored. Relativity: Conceptual foundations of special and general relativity, including the nature of spacetime, simultaneity, causality, and determinism. Compatibility with quantum mechanics, gravitational singularities, and philosophical implications of cosmology are also investigated. Statistical mechanics: Relationship between microscopic and macroscopic descriptions, interpretation of probability, origin of irreversibility and the arrow of time. Foundations of thermodynamics, role of information theory in understanding entropy, and implications for explanation and reduction in physics. Other areas of focus include the nature of physical laws, symmetries, and conservation principles; the role of mathematics; and philosophical implications of emerging fields like quantum gravity, quantum information, and complex systems. Philosophers of physics have argued that conceptual analysis clarifies foundations, interprets implications, and guides theory development in physics. == Philosophy of space and time == The existence and nature of space and time (or space-time) are central topics in the philosophy of physics. Issues include (1) whether space and time are fundamental or emergent, and (2) how space and time are operationally different from one another. === Time === In classical mechanics, time is taken to be a fundamental quantity (that is, a quantity which cannot be defined in terms of other quantities). However, certain theories such as loop quantum gravity claim that spacetime is emergent. As Carlo Rovelli, one of the founders of loop quantum gravity, has said: "No more fields on spacetime: just fields on fields". Time is defined via measurement—by its standard time interval. Currently, the standard time interval (called "conventional second", or simply "second") is defined as 9,192,631,770 oscillations of a hyperfine transition in the 133 caesium atom. (ISO 31-1). What time is and how it works follows from the above definition. Time then can be combined mathematically with the fundamental quantities of space and mass to define concepts such as velocity, momentum, energy, and fields. Both Isaac Newton and Galileo Galilei, as well as most people up until the 20th century, thought that time was the same for everyone everywhere. The modern conception of time is based on Albert Einstein's theory of relativity and Hermann Minkowski's spacetime, in which rates of time run differently in different inertial frames of reference, and space and time are merged into spacetime. 
Einstein's general relativity as well as the redshift of the light from receding distant galaxies indicate that the entire Universe and possibly space-time itself began about 13.8 billion years ago in the Big Bang. Einstein's theory of special relativity mostly (though not universally) made theories of time where there is something metaphysically special about the present seem much less plausible, as the reference-frame-dependence of time seems to not allow the idea of a privileged present moment. === Space === Space is one of the few fundamental quantities in physics, meaning that it cannot be defined via other quantities because there is nothing more fundamental known at present. Thus, similar to the definition of other fundamental quantities (like time and mass), space is defined via measurement. Currently, the standard space interval, called a standard metre or simply metre, is defined as the distance traveled by light in a vacuum during a time interval of 1/299792458 of a second (exact). In classical physics, space is a three-dimensional Euclidean space where any position can be described using three coordinates and parameterised by time. Special and general relativity use four-dimensional spacetime rather than three-dimensional space; and currently there are many speculative theories which use more than three spatial dimensions. == Philosophy of quantum mechanics == Quantum mechanics is a large focus of contemporary philosophy of physics, specifically concerning the correct interpretation of quantum mechanics. Very broadly, much of the philosophical work that is done in quantum theory is trying to make sense of superposition states: the property that particles seem to not just be in one determinate position at one time, but are somewhere 'here', and also 'there' at the same time. Such a radical view turns many common sense metaphysical ideas on their head. Much of contemporary philosophy of quantum mechanics aims to make sense of what the very empirically successful formalism of quantum mechanics tells us about the physical world. === Uncertainty principle === The uncertainty principle is a mathematical relation asserting an upper limit to the accuracy of the simultaneous measurement of any pair of conjugate variables, e.g. position and momentum. In the formalism of operator notation, this limit is the evaluation of the commutator of the variables' corresponding operators. The uncertainty principle arose as an answer to the question: How does one measure the location of an electron around a nucleus if an electron is a wave? When quantum mechanics was developed, it was seen to be a relation between the classical and quantum descriptions of a system using wave mechanics. === "Locality" and hidden variables === Bell's theorem is a term encompassing a number of closely related results in physics, all of which determine that quantum mechanics is incompatible with local hidden-variable theories given some basic assumptions about the nature of measurement. "Local" here refers to the principle of locality, the idea that a particle can only be influenced by its immediate surroundings, and that interactions mediated by physical fields cannot propagate faster than the speed of light. "Hidden variables" are putative properties of quantum particles that are not included in the theory but nevertheless affect the outcome of experiments. 
In the words of physicist John Stewart Bell, for whom this family of results is named, "If [a hidden-variable theory] is local it will not agree with quantum mechanics, and if it agrees with quantum mechanics it will not be local." The term is broadly applied to a number of different derivations, the first of which was introduced by Bell in a 1964 paper titled "On the Einstein Podolsky Rosen Paradox". Bell's paper was a response to a 1935 thought experiment that Albert Einstein, Boris Podolsky and Nathan Rosen proposed, arguing that quantum physics is an "incomplete" theory. By 1935, it was already recognized that the predictions of quantum physics are probabilistic. Einstein, Podolsky and Rosen presented a scenario that involves preparing a pair of particles such that the quantum state of the pair is entangled, and then separating the particles to an arbitrarily large distance. The experimenter has a choice of possible measurements that can be performed on one of the particles. When they choose a measurement and obtain a result, the quantum state of the other particle apparently collapses instantaneously into a new state depending upon that result, no matter how far away the other particle is. This suggests that either the measurement of the first particle somehow also influenced the second particle faster than the speed of light, or that the entangled particles had some unmeasured property which pre-determined their final quantum states before they were separated. Therefore, assuming locality, quantum mechanics must be incomplete, as it cannot give a complete description of the particle's true physical characteristics. In other words, quantum particles, like electrons and photons, must carry some property or attributes not included in quantum theory, and the uncertainties in quantum theory's predictions would then be due to ignorance or unknowability of these properties, later termed "hidden variables". Bell carried the analysis of quantum entanglement much further. He deduced that if measurements are performed independently on the two separated particles of an entangled pair, then the assumption that the outcomes depend upon hidden variables within each half implies a mathematical constraint on how the outcomes on the two measurements are correlated. This constraint would later be named the Bell inequality. Bell then showed that quantum physics predicts correlations that violate this inequality. Consequently, the only way that hidden variables could explain the predictions of quantum physics is if they are "nonlocal", which is to say that somehow the two particles can carry non-classical correlations no matter how widely they ever become separated. Multiple variations on Bell's theorem were put forward in the following years, introducing other closely related conditions generally known as Bell (or "Bell-type") inequalities. The first rudimentary experiment designed to test Bell's theorem was performed in 1972 by John Clauser and Stuart Freedman. More advanced experiments, known collectively as Bell tests, have been performed many times since. To date, Bell tests have consistently found that physical systems obey quantum mechanics and violate Bell inequalities; which is to say that the results of these experiments are incompatible with any local hidden variable theory. The exact nature of the assumptions required to prove a Bell-type constraint on correlations has been debated by physicists and by philosophers. 
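The gap between the local-hidden-variable bound and the quantum prediction can be made concrete with a short calculation. The sketch below uses the standard singlet-state correlation E(a, b) = −cos(a − b) and one conventional choice of measurement angles, so the numbers are illustrative rather than tied to any particular experiment:

```python
import numpy as np

def E(a, b):
    """Quantum-mechanical correlation for spin measurements on a singlet pair."""
    return -np.cos(a - b)

a, a2 = 0.0, np.pi / 2            # first observer's two measurement angles
b, b2 = np.pi / 4, 3 * np.pi / 4  # second observer's two measurement angles

S = E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2)   # CHSH combination of correlations
print(abs(S))   # ≈ 2.828 = 2·√2, exceeding the bound of 2 obeyed by local hidden variables
```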
While the significance of Bell's theorem is not in doubt, its full implications for the interpretation of quantum mechanics remain unresolved. === Interpretations of quantum mechanics === In March 1927, working in Niels Bohr's institute, Werner Heisenberg formulated the uncertainty principle, thereby laying the foundation of what became known as the Copenhagen interpretation of quantum mechanics. Heisenberg had been studying the papers of Paul Dirac and Pascual Jordan. He discovered a problem with measurement of basic variables in the equations. His analysis showed that uncertainties, or imprecisions, always turned up if one tried to measure the position and the momentum of a particle at the same time. Heisenberg concluded that these uncertainties or imprecisions in the measurements were not the fault of the experimenter, but fundamental in nature, being inherent mathematical properties of operators in quantum mechanics arising from the definitions of these operators. The Copenhagen interpretation is somewhat loosely defined, as many physicists and philosophers of physics have advanced similar but not identical views of quantum mechanics. It is principally associated with Heisenberg and Bohr, despite their philosophical differences. Features common to Copenhagen-type interpretations include the idea that quantum mechanics is intrinsically indeterministic, with probabilities calculated using the Born rule, and the principle of complementarity, which states that objects have certain pairs of complementary properties that cannot all be observed or measured simultaneously. Moreover, the act of "observing" or "measuring" an object is irreversible, and no truth can be attributed to an object, except according to the results of its measurement. Copenhagen-type interpretations hold that quantum descriptions are objective, in that they are independent of any arbitrary factors in the physicist's mind. The many-worlds interpretation of quantum mechanics by Hugh Everett III holds that the wave-function of a quantum system makes literal claims about the reality of that physical system. It denies wavefunction collapse, and claims that superposition states should be interpreted literally as describing the reality of many worlds in which objects are located, and not simply as indicating the indeterminacy of those variables. This is sometimes argued as a corollary of scientific realism, which states that scientific theories aim to give us literally true descriptions of the world. One issue for the Everett interpretation is the role that probability plays in this account. The Everettian account is completely deterministic, whereas probability seems to play an ineliminable role in quantum mechanics. Contemporary Everettians have argued that one can get an account of probability that follows the Born rule through certain decision-theoretic proofs, but there is as yet no consensus about whether any of these proofs are successful. Physicist Roland Omnès noted that it is impossible to experimentally differentiate between Everett's view, in which the wave-function decoheres into distinct worlds that all exist equally, and the more traditional view, in which a decoherent wave-function leaves only one unique real result. Hence, the dispute between the two views represents a great "chasm". "Every characteristic of reality has reappeared in its reconstruction by our theoretical model; every feature except one: the uniqueness of facts."
== Philosophy of thermal and statistical physics == The philosophy of thermal and statistical physics is concerned with the foundational issues and conceptual implications of thermodynamics and statistical mechanics. These branches of physics deal with the macroscopic behavior of systems comprising a large number of microscopic entities, such as particles, and with the nature of the laws, such as irreversibility and entropy, that emerge from these systems. Philosophers' interest in statistical mechanics first arose from the observation of an apparent conflict between the time-reversal symmetry of fundamental physical laws and the irreversibility observed in thermodynamic processes, known as the arrow of time problem. Philosophers have sought to understand how the asymmetric behavior of macroscopic systems, such as the tendency of heat to flow from hot to cold bodies, can be reconciled with the time-symmetric laws governing the motion of individual particles. Another key issue is the interpretation of probability in statistical mechanics, which is primarily concerned with the question of whether probabilities in statistical mechanics are epistemic, reflecting our lack of knowledge about the precise microstate of a system, or ontic, representing an objective feature of the physical world. The epistemic interpretation, also known as the subjective or Bayesian view, holds that probabilities in statistical mechanics are a measure of our ignorance about the exact state of a system. According to this view, we resort to probabilistic descriptions only because of the practical impossibility of knowing the precise properties of all of a system's micro-constituents, such as the positions and momenta of its particles. As such, the probabilities are not objective features of the world but rather arise from our ignorance. In contrast, the ontic interpretation, also called the objective or frequentist view, asserts that probabilities in statistical mechanics are real, physical properties of the system itself. Proponents of this view argue that the probabilistic nature of statistical mechanics is not merely a reflection of our ignorance but an intrinsic feature of the physical world, and that even if we had complete knowledge of the microstate of a system, the macroscopic behavior would still be best described by probabilistic laws. == History == === Aristotelian physics === Aristotelian physics viewed the universe as a sphere with a center. Matter, composed of the classical elements (earth, water, air, and fire), sought to move down towards the center of the universe, the center of the Earth, or up, away from it. Things in the aether, such as the Moon, the Sun, planets, or stars, circled the center of the universe. Movement is defined as change in place, i.e. space. === Newtonian physics === The implicit axioms of Aristotelian physics with respect to movement of matter in space were superseded in Newtonian physics by Newton's first law of motion: "Every body perseveres in its state either of rest or of uniform motion in a straight line, except insofar as it is compelled to change its state by impressed forces." "Every body" includes the Moon and an apple, and includes all types of matter, air as well as water, stones, or even a flame. Nothing has a natural or inherent motion. Absolute space is three-dimensional Euclidean space, infinite and without a center. Being "at rest" means being at the same place in absolute space over time.
The topology and affine structure of space must permit movement in a straight line at a uniform velocity; thus both space and time must have definite, stable dimensions. === Leibniz === Gottfried Wilhelm Leibniz, 1646–1716, was a contemporary of Newton. He contributed a fair amount to the statics and dynamics emerging around him, often disagreeing with Descartes and Newton. He devised a new theory of motion (dynamics) based on kinetic energy and potential energy, which posited space as relative, whereas Newton was thoroughly convinced that space was absolute. An important example of Leibniz's mature physical thinking is his Specimen Dynamicum of 1695. Until the discovery of subatomic particles and the quantum mechanics governing them, many of Leibniz's speculative ideas about aspects of nature not reducible to statics and dynamics made little sense. He anticipated Albert Einstein by arguing, against Newton, that space, time and motion are relative, not absolute: "As for my own opinion, I have said more than once, that I hold space to be something merely relative, as time is, that I hold it to be an order of coexistences, as time is an order of successions." == See also == == References == == Further reading == David Albert, 1994. Quantum Mechanics and Experience. Harvard Univ. Press. John D. Barrow and Frank J. Tipler, 1986. The Cosmological Anthropic Principle. Oxford Univ. Press. Beisbart, C. and S. Hartmann, eds., 2011. "Probabilities in Physics". Oxford Univ. Press. John S. Bell, 2004 (1987), Speakable and Unspeakable in Quantum Mechanics. Cambridge Univ. Press. David Bohm, 1980. Wholeness and the Implicate Order. Routledge. Nick Bostrom, 2002. Anthropic Bias: Observation Selection Effects in Science and Philosophy. Routledge. Thomas Brody, 1993, Ed. by Luis de la Peña and Peter E. Hodgson The Philosophy Behind Physics Springer ISBN 3-540-55914-0 Harvey Brown, 2005. Physical Relativity. Space-time structure from a dynamical perspective. Oxford Univ. Press. Butterfield, J., and John Earman, eds., 2007. Philosophy of Physics, Parts A and B. Elsevier. Craig Callender and Nick Huggett, 2001. Physics Meets Philosophy at the Planck Scale. Cambridge Univ. Press. David Deutsch, 1997. The Fabric of Reality. London: The Penguin Press. Bernard d'Espagnat, 1989. Reality and the Physicist. Cambridge Univ. Press. Trans. of Une incertaine réalité; le monde quantique, la connaissance et la durée. --------, 1995. Veiled Reality. Addison-Wesley. --------, 2006. On Physics and Philosophy. Princeton Univ. Press. Roland Omnès, 1994. The Interpretation of Quantum Mechanics. Princeton Univ. Press. --------, 1999. Quantum Philosophy. Princeton Univ. Press. Huw Price, 1996. Time's Arrow and Archimedes's Point. Oxford Univ. Press. Lawrence Sklar, 1992. Philosophy of Physics. Westview Press. ISBN 0-8133-0625-6, ISBN 978-0-8133-0625-4 Victor Stenger, 2000. Timeless Reality. Prometheus Books. Carl Friedrich von Weizsäcker, 1980. The Unity of Nature. Farrar Straus & Giroux. Werner Heisenberg, 1971. Physics and Beyond: Encounters and Conversations. Harper & Row (World Perspectives series), 1971. William Berkson, 1974. Fields of Force. Routledge and Kegan Paul, London. ISBN 0-7100-7626-6 Encyclopædia Britannica, Philosophy of Physics, David Z. 
Albert == External links == Stanford Encyclopedia of Philosophy: "Absolute and Relational Theories of Space and Motion"—Nick Huggett and Carl Hoefer "Being and Becoming in Modern Physics"—Steven Savitt "Boltzmann's Work in Statistical Physics"—Jos Uffink "Conventionality of Simultaneity"—Allen Janis "Early Philosophical Interpretations of General Relativity"—Thomas A. Ryckman "Everett's Relative-State Formulation of Quantum Mechanics"—Jeffrey A. Barrett "Experiments in Physics"—Allan Franklin "Holism and Nonseparability in Physics"—Richard Healey "Intertheory Relations in Physics"—Robert Batterman "Naturalism"—David Papineau "Philosophy of Statistical Mechanics"—Lawrence Sklar "Physicalism"—Daniel Sojkal "Quantum Mechanics"—Jenann Ismael "Reichenbach's Common Cause Principle"—Frank Artzenius "Structural Realism"—James Ladyman "Structuralism in Physics"—Heinz-Juergen Schmidt "Supertasks"—JB Manchak and Bryan Roberts "Symmetry and Symmetry Breaking"—Katherine Brading and Elena Castellani "Thermodynamic Asymmetry in Time"—Craig Callender "Time"—by Ned Markosian "Time Machines" —John Earman, Chris Wüthrich, and JB Manchak "Uncertainty principle"—Jan Hilgevoord and Jos Uffink "The Unity of Science"—Jordi Cat
Wikipedia/Philosophy_of_thermal_and_statistical_physics
In the thermodynamics of equilibrium, a state function, function of state, or point function for a thermodynamic system is a mathematical function relating several state variables or state quantities (that describe equilibrium states of a system) that depend only on the current equilibrium thermodynamic state of the system (e.g. gas, liquid, solid, crystal, or emulsion), not the path which the system has taken to reach that state. A state function describes equilibrium states of a system, thus also describing the type of system. A state variable is typically a state function so the determination of other state variable values at an equilibrium state also determines the value of the state variable as the state function at that state. The ideal gas law is a good example. In this law, one state variable (e.g., pressure, volume, temperature, or the amount of substance in a gaseous equilibrium system) is a function of other state variables so is regarded as a state function. A state function could also describe the number of a certain type of atoms or molecules in a gaseous, liquid, or solid form in a heterogeneous or homogeneous mixture, or the amount of energy required to create such a system or change the system into a different equilibrium state. Internal energy, enthalpy, and entropy are examples of state quantities or state functions because they quantitatively describe an equilibrium state of a thermodynamic system, regardless of how the system has arrived in that state. They are expressed by exact differentials. In contrast, mechanical work and heat are process quantities or path functions because their values depend on a specific "transition" (or "path") between two equilibrium states that a system has taken to reach the final equilibrium state, being expressed by inexact differentials. Exchanged heat (in certain discrete amounts) can be associated with changes of state function such as enthalpy. The description of the system heat exchange is done by a state function, and thus enthalpy changes point to an amount of heat. This can also apply to entropy when heat is compared to temperature. The description breaks down for quantities exhibiting hysteresis. == History == It is likely that the term "functions of state" was used in a loose sense during the 1850s and 1860s by those such as Rudolf Clausius, William Rankine, Peter Tait, and William Thomson. By the 1870s, the term had acquired a use of its own. In his 1873 paper "Graphical Methods in the Thermodynamics of Fluids", Willard Gibbs states: "The quantities v, p, t, ε, and η are determined when the state of the body is given, and it may be permitted to call them functions of the state of the body." == Overview == A thermodynamic system is described by a number of thermodynamic parameters (e.g. temperature, volume, or pressure) which are not necessarily independent. The number of parameters needed to describe the system is the dimension of the state space of the system (D). For example, a monatomic gas with a fixed number of particles is a simple case of a two-dimensional system (D = 2). Any two-dimensional system is uniquely specified by two parameters. Choosing a different pair of parameters, such as pressure and volume instead of pressure and temperature, creates a different coordinate system in two-dimensional thermodynamic state space but is otherwise equivalent. Pressure and temperature can be used to find volume, pressure and volume can be used to find temperature, and temperature and volume can be used to find pressure. 
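For the ideal-gas case just mentioned, the fact that any two of (P, V, T) fix the third can be written down directly. The helper function below is a hypothetical illustration, with the amount of substance n in moles and R the molar gas constant:

```python
R = 8.314  # J/(mol·K), molar gas constant

def third_variable(n, P=None, V=None, T=None):
    """Given any two of (P, V, T) for n moles of an ideal gas, return the missing one."""
    if P is None:
        return n * R * T / V
    if V is None:
        return n * R * T / P
    return P * V / (n * R)

print(third_variable(1.0, V=0.0224, T=273.15))  # ≈ 1.01e5 Pa, roughly atmospheric pressure
```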
An analogous statement holds for higher-dimensional spaces, as described by the state postulate. Generally, a state space is defined by an equation of the form F ( P , V , T , … ) = 0 {\displaystyle F(P,V,T,\ldots )=0} , where P denotes pressure, T denotes temperature, V denotes volume, and the ellipsis denotes other possible state variables like particle number N and entropy S. If the state space is two-dimensional as in the above example, it can be visualized as a three-dimensional graph (a surface in three-dimensional space). However, the labels of the axes are not unique (since there are more than three state variables in this case), and only two independent variables are necessary to define the state. When a system changes state continuously, it traces out a "path" in the state space. The path can be specified by noting the values of the state parameters as the system traces out the path, whether as a function of time or a function of some other external variable. For example, having the pressure P(t) and volume V(t) as functions of time from time t0 to t1 will specify a path in two-dimensional state space. Any function of time can then be integrated over the path. For example, to calculate the work done by the system from time t0 to time t1, calculate W ( t 0 , t 1 ) = ∫ 0 1 P d V = ∫ t 0 t 1 P ( t ) d V ( t ) d t d t {\textstyle W(t_{0},t_{1})=\int _{0}^{1}P\,dV=\int _{t_{0}}^{t_{1}}P(t){\frac {dV(t)}{dt}}\,dt} . In order to calculate the work W in the above integral, the functions P(t) and V(t) must be known at each time t over the entire path. In contrast, a state function only depends upon the system parameters' values at the endpoints of the path. For example, the following equation can be used to calculate the work plus the integral of V dP over the path: Φ ( t 0 , t 1 ) = ∫ t 0 t 1 P d V d t d t + ∫ t 0 t 1 V d P d t d t = ∫ t 0 t 1 d ( P V ) d t d t = P ( t 1 ) V ( t 1 ) − P ( t 0 ) V ( t 0 ) . {\displaystyle {\begin{aligned}\Phi (t_{0},t_{1})&=\int _{t_{0}}^{t_{1}}P{\frac {dV}{dt}}\,dt+\int _{t_{0}}^{t_{1}}V{\frac {dP}{dt}}\,dt\\&=\int _{t_{0}}^{t_{1}}{\frac {d(PV)}{dt}}\,dt=P(t_{1})V(t_{1})-P(t_{0})V(t_{0}).\end{aligned}}} In the equation, d ( P V ) d t d t = d ( P V ) {\displaystyle {\frac {d(PV)}{dt}}dt=d(PV)} can be expressed as the exact differential of the function P(t)V(t). Therefore, the integral can be expressed as the difference in the value of P(t)V(t) at the end points of the integration. The product PV is therefore a state function of the system. The notation d will be used for an exact differential. In other words, the integral of dΦ will be equal to Φ(t1) − Φ(t0). The symbol δ will be reserved for an inexact differential, which cannot be integrated without full knowledge of the path. For example, δW = PdV will be used to denote an infinitesimal increment of work. State functions represent quantities or properties of a thermodynamic system, while non-state functions represent a process during which the state functions change. For example, the state function PV is proportional to the internal energy of an ideal gas, but the work W is the amount of energy transferred as the system performs work. Internal energy is identifiable; it is a particular form of energy. Work is the amount of energy that has changed its form or location. 
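The contrast between the path function W and the state function PV can also be sketched numerically. The two paths and the endpoint states below are arbitrary illustrative choices, not drawn from any particular physical process:

```python
import numpy as np

t = np.linspace(0.0, 1.0, 100001)

def work_and_delta_pv(P, V):
    W = np.sum(0.5 * (P[1:] + P[:-1]) * np.diff(V))  # path-dependent: ∫ P dV (trapezoid rule)
    delta_pv = P[-1] * V[-1] - P[0] * V[0]           # state function: Δ(PV), endpoints only
    return W, delta_pv

# Both paths run from state (P=1, V=1) to state (P=2, V=2), in arbitrary units.
# Path A: raise the pressure at constant volume, then expand at constant pressure.
P_a = np.concatenate([1.0 + t, np.full_like(t, 2.0)])
V_a = np.concatenate([np.full_like(t, 1.0), 1.0 + t])
# Path B: expand at constant pressure first, then raise the pressure at constant volume.
P_b = np.concatenate([np.full_like(t, 1.0), 1.0 + t])
V_b = np.concatenate([1.0 + t, np.full_like(t, 2.0)])

print(work_and_delta_pv(P_a, V_a))  # W ≈ 2.0, Δ(PV) = 3.0
print(work_and_delta_pv(P_b, V_b))  # W ≈ 1.0, Δ(PV) = 3.0: work differs, Δ(PV) does not
```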
== List of state functions == The following are considered to be state functions in thermodynamics: == See also == Markov property Conservative vector field Nonholonomic system Equation of state State variable == Notes == == References == Callen, Herbert B. (1985). Thermodynamics and an Introduction to Thermostatistics. Wiley & Sons. ISBN 978-0-471-86256-7. Gibbs, Josiah Willard (1873). "Graphical Methods in the Thermodynamics of Fluids". Transactions of the Connecticut Academy. II. ASIN B00088UXBK – via WikiSource. Mandl, F. (May 1988). Statistical physics (2nd ed.). Wiley & Sons. ISBN 978-0-471-91533-1. == External links == Media related to State functions at Wikimedia Commons
Wikipedia/State_function
Forced-air gas heating systems are used in central air heating/cooling systems for houses. Sometimes the system is referred to as "forced hot air". == Design == Gas-fired forced-air furnaces have a burner in the furnace fueled by natural gas. A blower forces cold air through a heat exchanger and then through duct-work that distributes the hot air through the building. Each room has an outlet from the duct system, often mounted in the floor or low on the wall – some rooms will also have an opening into the cold air return duct. Depending on the age of the system, forced-air gas furnaces use either a pilot light or a solid-state ignition system (spark or hot surface ignition) to light the natural gas burner. The natural gas is fed to buildings from a main gas line. The duct work supplying the hot air (and sometimes cool air if an AC unit is tied into the system) may be insulated. A thermostat starts and stops the furnace to regulate temperature. Large homes or commercial buildings may have multiple thermostats and heating zones, controlled by powered dampers. A digital thermostat can be programmed to activate the gas furnace at certain times. For example, a resident may want the temperature in their dwelling to rise 15 minutes before returning from work. Simple types of gas-fired furnace lose significant amounts of energy in the hot waste gases. High-efficiency condensing furnaces condense the water vapor (one of the by-products of gas combustion) and extract the latent heat to pre-heat the incoming furnace airflow, using a second heat exchanger. This increases the efficiency (energy delivered into the building vs. heating value of gas purchased) to over 90%. An incidental beneficial effect is that the exhaust flue is much smaller and can be made of plastic pipe since the exhaust gas is much cooler. As a result it can be more easily routed through walls or floors. However, the condensing furnace is more expensive initially because of the extra induced-draft fan and condensate pump required, and the extra heat exchanger in the firebox. The heat exchangers may be damaged by corrosion or metal fatigue from many heating and cooling cycles. A small leak of combustion gases into the heated air can be dangerous to the occupants of the heated space, because of possible carbon monoxide build up. == Areas of usage == Residential and commercial buildings located in rural and remote areas do not often use natural gas forced hot air systems. This is due to the financial impracticality of running natural gas lines many miles past areas of relatively sparse habitation. Usually these rural and remote buildings use oil heat or propane, which is delivered by a truck and stored in a tank on the property. == References == Information on forced air pilot lights and maintenance from GasFurnaceGuide.com [1] == External links == From howstuffworks.com [2]
Wikipedia/Forced-air_gas
The thermodynamic properties of materials are intensive thermodynamic parameters which are specific to a given material. Each is directly related to a second order differential of a thermodynamic potential. Examples for a simple 1-component system are: Compressibility (or its inverse, the bulk modulus): the isothermal compressibility \kappa_T = -\frac{1}{V}\left(\frac{\partial V}{\partial P}\right)_T = -\frac{1}{V}\frac{\partial^2 G}{\partial P^2} and the adiabatic compressibility \kappa_S = -\frac{1}{V}\left(\frac{\partial V}{\partial P}\right)_S = -\frac{1}{V}\frac{\partial^2 H}{\partial P^2}. Specific heat (note: the extensive analog is the heat capacity): the specific heat at constant pressure c_P = \frac{T}{N}\left(\frac{\partial S}{\partial T}\right)_P = -\frac{T}{N}\frac{\partial^2 G}{\partial T^2} and the specific heat at constant volume c_V = \frac{T}{N}\left(\frac{\partial S}{\partial T}\right)_V = -\frac{T}{N}\frac{\partial^2 A}{\partial T^2}. Coefficient of thermal expansion: \alpha = \frac{1}{V}\left(\frac{\partial V}{\partial T}\right)_P = \frac{1}{V}\frac{\partial^2 G}{\partial P\,\partial T}. Here P is pressure, V is volume, T is temperature, S is entropy, and N is the number of particles. For a single component system, only three second derivatives are independent, and so only three material properties are needed to derive all the others. For a single component system, the "standard" three parameters are the isothermal compressibility \kappa_T, the specific heat at constant pressure c_P, and the coefficient of thermal expansion \alpha. For example, the following equations are true: c_P = c_V + \frac{TV\alpha^2}{N\kappa_T} and \kappa_T = \kappa_S + \frac{TV\alpha^2}{Nc_P}. The three "standard" properties are in fact the three possible second derivatives of the Gibbs free energy with respect to temperature and pressure. Moreover, considering derivatives such as \frac{\partial^3 G}{\partial P\,\partial T^2} together with the related Schwarz relations shows that the three properties are not independent. In fact, one property function can be given as an expression of the two others, up to a reference state value. The second principle of thermodynamics has implications for the sign of some thermodynamic properties, such as the isothermal compressibility. == See also == List of materials properties (thermal properties) Heat capacity ratio Statistical mechanics Thermodynamic equations Thermodynamic databases for pure substances Heat transfer coefficient Latent heat Specific heat of melting (Enthalpy of fusion) Specific heat of vaporization (Enthalpy of vaporization) Thermal mass == External links == The Dortmund Data Bank is a factual data bank for thermodynamic and thermophysical data. == References == Callen, Herbert B. (1985). Thermodynamics and an Introduction to Thermostatistics (2nd ed.). New York: John Wiley & Sons. ISBN 0-471-86256-8.
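As a numerical illustration of the first relation above, the following Python sketch (added for illustration, not part of the original article) estimates the constant-volume specific heat of liquid water from c_P, α and κ_T; the property values are rounded room-temperature figures, and N is taken as the amount of substance in moles so that the heat capacities come out per mole.

# Illustrative use of  c_P = c_V + T*V*alpha**2 / (N*kappa_T)
# with rounded room-temperature values for liquid water (per mole).
T = 298.15          # temperature, K
V = 1.807e-5        # molar volume, m^3/mol
alpha = 2.57e-4     # coefficient of thermal expansion, 1/K
kappa_T = 4.52e-10  # isothermal compressibility, 1/Pa
N = 1.0             # amount of substance, mol
c_P = 75.3          # heat capacity at constant pressure, J/(mol K)

diff = T * V * alpha**2 / (N * kappa_T)   # c_P - c_V, J/(mol K)
c_V = c_P - diff
print(f"c_P - c_V = {diff:.2f} J/(mol K);  c_V = {c_V:.1f} J/(mol K)")

The computed difference is small (well under 1 J/(mol K)), which is typical for liquids and solids; for an ideal gas the same relation reduces to c_P - c_V = R per mole.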
Wikipedia/Material_properties_(thermodynamics)
Domestic energy consumption refers to the total energy consumption of a single household. Globally, the amount of energy used per household may vary significantly, depending on factors such as the standard of living of the country, the climate, the age of the occupant of the home, and the type of residence. Households in different parts of the world will have differing levels of consumption, based on latitude and technology. == United States == According to the US EIA as of 2022, the average annual amount of electricity sold to a U.S. residential electric-utility customer was 10,791 kilowatt-hours (kWh), or an average of about 899 kWh per month. The US state of Louisiana had the highest annual electricity purchases per residential customer at 14,774 kWh and the US state of Hawaii had the lowest at 6,178 kWh per residential customer. As of 2008, in an average household in a temperate climate, yearly household energy use was divided among end uses such as space heating, water heating, appliances, and lighting. == European Union == According to Eurostat, as of 2021 households represented 27% of final energy consumption in the EU. The main use of energy by households was for heating their homes (64.4% of final energy consumption in the residential sector), with renewables accounting for more than a quarter (27%) of EU households' space heating consumption. == See also == 2000-watt society Zero-energy building == References ==
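A small sketch (not from the article) showing how the cited 2022 U.S. figure converts between annual, monthly, and average continuous power terms:

# Convert the cited average U.S. residential figure (10,791 kWh per year, 2022)
# into a monthly figure and an average continuous power draw.
annual_kwh = 10_791
monthly_kwh = annual_kwh / 12        # about 899 kWh per month, as quoted above
avg_power_kw = annual_kwh / 8_760    # hours in a non-leap year

print(f"{monthly_kwh:.0f} kWh/month, average draw of about {avg_power_kw * 1000:.0f} W")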
Wikipedia/Domestic_energy_consumption
The Berkeley Physics Course is a series of college-level physics textbooks written mostly (but not exclusively) by UC Berkeley professors. == Description == The series consists of the following five volumes, each of which was originally used in a one-semester course at Berkeley: Mechanics, by Charles Kittel, et al. Electricity and Magnetism, by Edward M. Purcell Waves, by Frank S. Crawford Jr. Quantum Physics, by Eyvind H. Wichmann Statistical Physics, by Frederick Reif Volume 2, Electricity and Magnetism, by Purcell (Harvard), is particularly well known, and was influential for its use of relativity in the presentation of the subject at the introductory college level. Half a century later the book is still in print, in an updated version by authors Purcell and Morin. The third edition of the text, published by Cambridge University Press in 2013, was completely revised and updated to SI units. == History == A Sputnik-era project funded by a National Science Foundation grant, the course arose from discussions between Philip Morrison (then at Cornell University) and Charles Kittel (Berkeley) in 1961, and was published by McGraw-Hill starting in 1965. The Berkeley course was contemporary with The Feynman Lectures on Physics (a college course at a similar mathematical level), and PSSC Physics (a high school introductory course). These physics courses were all developed in the atmosphere of urgency about science education created in the West by Sputnik. Because of the government support received, the original editions contained notices on their copyright pages stating that the books were to be available royalty-free after five years. The authors got lump-sum payments but did not receive royalties. The legal status of the original editions has been befogged in the case of the renowned second volume by the fact that Cambridge University Press has made it effectively impossible to obtain the royalty-free license promised under the original government contract. There was a parallel series of laboratory courses developed by Alan Portis. The Heathkit company marketed a line of its electronic instruments which had been adapted for use with Berkeley series of lab manuals.: 149  The series was translated into a number of foreign languages. Although the course was influential in physics education worldwide, the book series sold better in foreign markets than in the US, possibly because students in other countries specialized earlier and were therefore better prepared mathematically than US students. It was felt to be too advanced for typical engineering students at Berkeley, but continued to be used there in honors courses for physics majors. Course adoption may have also been hindered by the initial choice of Gaussian units of measurement, and later editions of volumes 1 and 2 were eventually published with the Gaussian system replaced by SI units. == See also == The Feynman Lectures on Physics – another contemporaneously-developed and influential college-level physics series Course of Theoretical Physics – ten-volume series of books covering advanced theoretical physics, by Lev Landau and Evgeniy Lifshitz PSSC Physics – a contemporaneously-developed high-school-level physics textbook Harvard Project Physics – a contemporaneously-developed high-school-level physics textbook == References ==
Wikipedia/Berkeley_Physics_Course
Scalable Vector Graphics (SVG) is an XML-based vector graphics format for defining two-dimensional graphics, having support for interactivity and animation. The SVG specification is an open standard developed by the World Wide Web Consortium since 1999. SVG images are defined in a vector graphics format and stored in XML text files. SVG images can thus be scaled in size without loss of quality, and SVG files can be searched, indexed, scripted, and compressed. The XML text files can be created and edited with text editors or vector graphics editors, and are rendered by most web browsers. If used for images, SVG can host scripts or CSS, potentially leading to cross-site scripting attacks or other security vulnerabilities. == History == SVG has been in development within the World Wide Web Consortium (W3C) since 1999 after six competing proposals for vector graphics languages had been submitted to the consortium during 1998 (see below). The early SVG Working Group decided not to develop any of the commercial submissions, but to create a new markup language that was informed by but not really based on any of them. SVG was developed by the W3C SVG Working Group starting in 1998, after six competing vector graphics submissions were received that year: Web Schematics, from CCLRC PGML, from Adobe Systems, IBM, Netscape and Sun Microsystems VML, by Autodesk, Hewlett-Packard, Macromedia, Microsoft, and Vision Hyper Graphics Markup Language (HGML), by Orange UK and PRP WebCGM, from Boeing, PTC, InterCAP Graphics Systems, Inso Corporation, CCLRC, and Xerox DrawML, from Excosoft AB The working group was chaired at the time by Chris Lilley of the W3C. Early adoption was limited due to lack of support in older versions of Internet Explorer. However, as of 2011, all major desktop browsers began to support SVG. Native browser support offers various advantages, such as not requiring plugins, allowing SVG to be mixed with other content, and improving rendering and scripting reliability. Mobile support for SVG exists in various forms, with different devices and browsers supporting SVG Tiny 1.1 or 1.2. SVG can be produced using vector graphics editors and rendered into raster formats. In web-based applications, Inline SVG allows embedding SVG content within HTML documents. The SVG specification was updated to version 1.1 in 2011. Scalable Vector Graphics 2 became a W3C Candidate Recommendation on 15 September 2016. SVG 2 incorporates several new features in addition to those of SVG 1.1 and SVG Tiny 1.2. === Version 1.x === SVG 1.0 became a W3C Recommendation on 4 September 2001. SVG 1.1 became a W3C Recommendation on 14 January 2003. The SVG 1.1 specification is modularized in order to allow subsets to be defined as profiles. Apart from this, there is very little difference between SVG 1.1 and SVG 1.0. SVG Tiny and SVG Basic (the Mobile SVG Profiles) became W3C Recommendations on 14 January 2003. These are described as profiles of SVG 1.1. SVG Tiny 1.2 became a W3C Recommendation on 22 December 2008. It was initially drafted as a profile of the planned SVG Full 1.2 (which has since been dropped in favor of SVG 2), but was later refactored as a standalone specification. It is generally poorly supported. SVG 1.1 Second Edition, which includes all the errata and clarifications, but no new features to the original SVG 1.1 was released on 16 August 2011. SVG Tiny 1.2 Portable/Secure, a more secure subset of the SVG Tiny 1.2 profile introduced as an IETF draft standard on 29 July 2020. Also known as SVG Tiny P/S. 
SVG Tiny 1.2 Portable/Secure is a requirement of the BIMI draft standard. === Version 2 === SVG 2 removes or deprecates some features of SVG 1.1 and incorporates new features from HTML5 and Web Open Font Format (WOFF): For example, SVG 2 removes several font elements such as glyph and altGlyph (replaced by the WOFF). The xml:space attribute is deprecated in favor of CSS. HTML5 features such as translate and data-* attributes have been added. Text handling features from SVG Tiny 1.2 are annotated as to be included, but not yet formalized in text. Some other 1.2 features are cherry picked in, but SVG 2 is not a superset of SVG tiny 1.2 in general. SVG 2 reached the Candidate Recommendation stage on 15 September 2016, and revised versions were published on 7 August 2018 and 4 October 2018. The latest draft was released on 08 March 2023. == Features == SVG supports interactivity, animation, and rich graphical capabilities, making it suitable for both web and print applications. SVG images can be compressed with the gzip algorithm, resulting in SVGZ files that are typically 20–50% smaller than the original. SVG also supports metadata, enabling better indexing, searching, and retrieval of SVG content. SVG allows three types of graphic objects: vector graphic shapes (such as paths consisting of straight lines and curves), bitmap images, and text. Graphical objects can be grouped, styled, transformed and composited into previously rendered objects. The feature set includes nested transformations, clipping paths, alpha masks, filter effects and template objects. SVG drawings can be interactive and can include animation, defined in the SVG XML elements or via scripting that accesses the SVG Document Object Model (DOM). SVG uses CSS for styling and JavaScript for scripting. Text, including internationalization and localization, appearing in plain text within the SVG DOM, enhances the accessibility of SVG graphics. === Printing === Though the SVG Specification primarily focuses on vector graphics markup language, its design includes the basic capabilities of a page description language like Adobe's PDF. It contains provisions for rich graphics, and is compatible with CSS for styling purposes. SVG has the information needed to place each glyph and image in a chosen location on a printed page. === Scripting and animation === SVG drawings can be dynamic and interactive. Time-based modifications to the elements can be described in SMIL, or can be programmed in a scripting language (e.g. JavaScript). The W3C explicitly recommends SMIL as the standard for animation in SVG. A rich set of event handlers such as "onmouseover" and "onclick" can be assigned to any SVG graphical object to apply actions and events. === Mobile profiles === Because of industry demand, two mobile profiles were introduced with SVG 1.1: SVG Tiny (SVGT) and SVG Basic (SVGB). These are subsets of the full SVG standard, mainly intended for user agents with limited capabilities. In particular, SVG Tiny was defined for highly restricted mobile devices such as cellphones; it does not support styling or scripting. SVG Basic was defined for higher-level mobile devices, such as smartphones. In 2003, the 3GPP, an international telecommunications standards group, adopted SVG Tiny as the mandatory vector graphics media format for next-generation phones. SVGT is the required vector graphics format and support of SVGB is optional for Multimedia Messaging Service (MMS) and Packet-switched Streaming Service. 
It was later added as required format for vector graphics in 3GPP IP Multimedia Subsystem (IMS). Neither mobile profile includes support for the full Document Object Model (DOM), while only SVG Basic has optional support for scripting, but because they are fully compatible subsets of the full standard, most SVG graphics can still be rendered by devices which only support the mobile profiles. SVGT 1.2 adds a microDOM (μDOM), styling and scripting. SVGT 1.2 also includes some features not found in SVG 1.1, including non-scaling strokes, which are supported by some SVG 1.1 implementations, such as Opera, Firefox, and WebKit. As shared code bases between desktop and mobile browsers increased, the use of SVG 1.1 over SVGT 1.2 also increased. === Compression === SVG images, being XML, contain many repeated fragments of text, so they are well suited for lossless data compression algorithms. When an SVG image has been compressed with the gzip algorithm, it is referred to as an "SVGZ" image and uses the corresponding .svgz filename extension. Conforming SVG 1.1 viewers will display compressed images. An SVGZ file is typically 20 to 50 percent of the original size. W3C provides SVGZ files to test for conformance. == Design == The SVG 1.1 specification defines 14 functional areas or feature sets: Paths Simple or compound shape outlines are drawn with curved or straight lines that can be filled in, outlined, or used as a clipping path. Paths have a compact coding. For example, M (for "move to") precedes initial numeric x and y coordinates, and L (for "line to") precedes a point to which a line should be drawn. Further command letters (C, S, Q, T, and A) precede data that is used to draw various Bézier and elliptical curves. Z is used to close a path. In all cases, absolute coordinates follow capital letter commands and relative coordinates are used after the equivalent lower-case letters. Basic shapes Straight-line paths and paths made up of a series of connected straight-line segments (polylines), as well as closed polygons, circles, and ellipses can be drawn. Rectangles and round-cornered rectangles are also standard elements. Text Unicode character text included in an SVG file is expressed as XML character data. Many visual effects are possible, and the SVG specification automatically handles bidirectional text (for composing a combination of English and Arabic text, for example), vertical text (as Chinese or Japanese may be written) and characters along a curved path (such as the text around the edge of the Great Seal of the United States). Painting SVG shapes can be filled and outlined (painted with a color, a gradient, or a pattern). Fills may be opaque, or have any degree of transparency. "Markers" are line-end features, such as arrowheads, or symbols that can appear at the vertices of a polygon. Color Colors can be applied to all visible SVG elements, either directly or via fill, stroke, and other properties. Colors are specified in the same way as in CSS2, i.e. using names like black or blue, in hexadecimal such as #2f0 or #22ff00, in decimal like rgb(255,255,127), or as percentages of the form rgb(100%,100%,50%). Gradients and patterns SVG shapes can be filled or outlined with solid colors as above, or with color gradients or with repeating patterns. Color gradients can be linear or radial (circular), and can involve any number of colors as well as repeats. Opacity gradients can also be specified. 
Patterns are based on predefined raster or vector graphic objects, which can be repeated in x and y directions. Gradients and patterns can be animated and scripted. Since 2008, there has been discussion among professional users of SVG that either gradient meshes or preferably diffusion curves could usefully be added to the SVG specification. It is said that a "simple representation [using diffusion curves] is capable of representing even very subtle shading effects" and that "Diffusion curve images are comparable both in quality and coding efficiency with gradient meshes, but are simpler to create (according to several artists who have used both tools), and can be captured from bitmaps fully automatically." The current draft of SVG 2 includes gradient meshes. Clipping, masking and compositing Graphic elements, including text, paths, basic shapes and combinations of these, can be used as outlines to define both inside and outside regions that can be painted (with colors, gradients and patterns) independently. Fully opaque clipping paths and semi-transparent masks are composited together to calculate the color and opacity of every pixel of the final image, using alpha blending. Filter effects A filter effect consists of a series of graphics operations that are applied to a given source vector graphic to produce a modified bitmapped result. Interactivity SVG images can interact with users in many ways. In addition to hyperlinks as mentioned below, any part of an SVG image can be made receptive to user interface events such as changes in focus, mouse clicks, scrolling or zooming the image and other pointer, keyboard and document events. Event handlers may start, stop or alter animations as well as trigger scripts in response to such events. Linking SVG images can contain hyperlinks to other documents, using XLink. Through the use of the <view> element or a fragment identifier, URLs can link to SVG files that change the visible area of the document. This allows for creating specific view states that are used to zoom in/out of a specific area or to limit the view to a specific element. This is helpful when creating sprites. XLink support in combination with the <use> element also allow linking to and re-using internal and external elements. This allows coders to do more with less markup and makes for cleaner code. Scripting All aspects of an SVG document can be accessed and manipulated using scripts in a similar way to HTML. The default scripting language is JavaScript and there are defined Document Object Model (DOM) objects for every SVG element and attribute. Scripts are enclosed in <script> elements. They can run in response to pointer events, keyboard events and document events as required. Animation SVG content can be animated using the built-in animation elements such as <animate>, <animateMotion> and <animateColor>. Content can be animated by manipulating the DOM using ECMAScript and the scripting language's built-in timers. SVG animation has been designed to be compatible with current and future versions of Synchronized Multimedia Integration Language (SMIL). Animations can be continuous, they can loop and repeat, and they can respond to user events, as mentioned above. Fonts As with HTML and CSS, text in SVG may reference external font files, such as system fonts. If the required font files do not exist on the machine where the SVG file is rendered, the text may not appear as intended. 
To overcome this limitation, text can be displayed in an SVG font, where the required glyphs are defined in SVG as a font that is then referenced from the <text> element. Metadata In accord with the W3C's Semantic Web initiative, SVG allows authors to provide metadata about SVG content. The main facility is the <metadata> element, where the document can be described using Dublin Core metadata properties (e.g. title, creator/author, subject, description, etc.). Other metadata schemas may also be used. In addition, SVG defines <title> and <desc> elements where authors may also provide plain-text descriptive material within an SVG image to help indexing, searching and retrieval by a number of means. An SVG document can define components including shapes, gradients etc., and use them repeatedly. SVG images can also contain raster graphics, such as PNG and JPEG images, and further SVG images. A short hand-coded example of such markup is sketched at the end of this article. == Implementation == The use of SVG on the web was limited by the lack of support in older versions of Internet Explorer (IE). Many websites that serve SVG images also provide the images in a raster format, either automatically by HTTP content negotiation or by allowing the user directly to choose the file. === Web browsers === Konqueror was the first browser to support SVG in release version 3.2 in February 2004. As of 2011, all major desktop browsers, and many minor ones, have some level of SVG support. Other browsers' implementations are not yet complete; see comparison of layout engines for further details. Some earlier versions of Firefox (e.g. versions between 1.5 and 3.6), as well as a few other, now outdated, web browsers capable of displaying SVG graphics, needed them embedded in <object> or <iframe> elements to display them integrated as parts of an HTML webpage instead of using the standard way of integrating images with <img>. However, SVG images may be included in XHTML pages using XML namespaces. Tim Berners-Lee, the inventor of the World Wide Web, was critical of early versions of Internet Explorer for its failure to support SVG. Opera (since 8.0) has support for the SVG 1.1 Tiny specification, while Opera 9 includes SVG 1.1 Basic support and some of SVG 1.1 Full. Opera 9.5 has partial SVG Tiny 1.2 support. It also supports SVGZ (compressed SVG). Browsers based on the Gecko layout engine (such as Firefox, Flock, Camino, and SeaMonkey) all have had incomplete support for the SVG 1.1 Full specification since 2005. The Mozilla site has an overview of the modules which are supported in Firefox and of the modules which are in development. Gecko 1.9, included in Firefox 3.0, adds support for more of the SVG specification (including filters). Pale Moon, which uses the Goanna layout engine (a fork of the Gecko engine), supports SVG. Browsers based on WebKit (such as Apple's Safari, Google Chrome, and The Omni Group's OmniWeb) have had incomplete support for the SVG 1.1 Full specification since 2006. Amaya has partial SVG support. Internet Explorer 8 and older versions do not support SVG. IE9 (released 14 March 2011) supports the basic SVG feature set. IE10 extended SVG support by adding SVG 1.1 filters. Microsoft Edge Legacy supports SVG 1.1. The Maxthon Cloud Browser also supports SVG. There are several advantages to native and full support: plugins are not needed, SVG can be freely mixed with other content in a single document, and rendering and scripting become considerably more reliable. 
=== Mobile devices === Support for SVG may be limited to SVGT on older or more limited smart phones or may be primarily limited by their respective operating system. Adobe Flash Lite has optionally supported SVG Tiny since version 1.1. At the SVG Open 2005 conference, Sun demonstrated a mobile implementation of SVG Tiny 1.1 for the Connected Limited Device Configuration (CLDC) platform. Mobiles that use Opera Mobile, as well as the iPhone's built in browser, also include SVG support. However, even though it used the WebKit engine, the Android built-in browser did not support SVG prior to v3.0 (Honeycomb). Prior to v3.0, Firefox Mobile 4.0b2 (beta) for Android was the first browser running under Android to support SVG by default. The level of SVG Tiny support available varies from mobile to mobile, depending on the SVG engine installed. Many newer mobile products support additional features beyond SVG Tiny 1.1, like gradient and opacity; this is sometimes referred to as "SVGT 1.1+", though there is no such standard. RIM's BlackBerry has built-in support for SVG Tiny 1.1 since version 5.0. Support continues for WebKit-based BlackBerry Torch browser in OS 6 and 7. Nokia's S60 platform has built-in support for SVG. For example, icons are generally rendered using the platform's SVG engine. Nokia has also led the JSR 226: Scalable 2D Vector Graphics API expert group that defines Java ME API for SVG presentation and manipulation. This API has been implemented in S60 Platform 3rd Edition Feature Pack 1 and onward. Some Series 40 phones also support SVG (such as Nokia 6280). Most Sony Ericsson phones beginning with K700 (by release date) support SVG Tiny 1.1. Phones beginning with K750 also support such features as opacity and gradients. Phones with Sony Ericsson Java Platform-8 have support for JSR 226. Windows Phone has supported SVG since version 7.5. SVG is also supported on various mobile devices from Motorola, Samsung, LG, and Siemens mobile/BenQ-Siemens. eSVG, an SVG rendering library mainly written for embedded devices, is available on some mobile platforms. === Authoring === SVG images can be hand coded or produced by the use of a vector graphics editor, such as Inkscape, Adobe Illustrator, Adobe Animate, or CorelDRAW, and rendered to common raster image formats such as PNG using the same software. Additionally, editors like Inkscape and Boxy SVG provide tools to trace raster images to Bézier curves typically using image tracing back-ends like potrace, autotrace, and imagetracerjs. Software can be programmed to render SVG images by using a library such as librsvg used by GNOME since 2000, Batik and ThorVG (Thor Vector Graphics) since 2020 for lightweight systems. SVG images can also be rendered to any desired popular image format by using ImageMagick, a free command-line utility (which also uses librsvg under the hood). For web-based applications, the mode of usage termed Inline SVG allows SVG content to be embedded within an HTML document using an <svg> tag. Its graphical capabilities can then be employed to create sophisticated user interfaces as the SVG and HTML share context, event handling, and CSS. Other uses for SVG include embedding for use in word processing (e.g. with LibreOffice) and desktop publishing (e.g. Scribus), plotting graphs (e.g. gnuplot), and importing paths (e.g. for use in GIMP or Blender). The application services Microsoft 365 and Microsoft Office 2019 offer support for exporting, importing and editing SVG images. 
The Uniform Type Identifier for SVG used by Apple is public.svg-image and conforms to public.image and public.xml. == Security == As a document format, similar to HTML documents, SVG can host scripts or CSS. This is an issue when an attacker can upload an SVG file to a website, such as a profile picture, and the file is treated as a normal picture but contains malicious content. For instance, if an SVG file is deployed as a CSS background image, or a logo on some website, or in some image gallery, then when the image is loaded in a browser it activates a script or other content. This could lock up the browser (the Billion laughs attack), but could also lead to HTML injection and cross-site scripting attacks. The W3C therefore stipulates certain requirements when SVG is simply used for images (see the W3C's SVG Security guidance). The W3C says that Inline SVG (an SVG file loaded natively on a website) is considered less of a security risk because the content is part of a greater document, and so scripting and CSS would not be unexpected. == Related work == The MPEG-4 Part 20 standard - Lightweight Application Scene Representation (LASeR) and Simple Aggregation Format (SAF) - is based on SVG Tiny. It was developed by MPEG (ISO/IEC JTC 1/SC29/WG11) and published as ISO/IEC 14496-20:2006. SVG capabilities are enhanced in MPEG-4 Part 20 with key features for mobile services, such as dynamic updates, binary encoding, and state-of-the-art font representation. SVG was also accommodated in MPEG-4 Part 11, in the Extensible MPEG-4 Textual (XMT) format - a textual representation of the MPEG-4 multimedia content using XML. == See also == Canvas element Comparison of graphics file formats Comparison of raster-to-vector conversion software Comparison of vector graphics editors Computer graphics Computer Graphics Metafile Image file format Resolution independence == References == == External links == W3C SVG page specifications, list of implementations W3C SVG primer W3C Primer (draft) under auspices of SVG Interest Group MDN - SVG: Scalable Vector Graphics
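To make the markup described under Design concrete, here is a minimal hand-coded sketch; the shapes, sizes, and colors are arbitrary illustration values, and this is not the example image referred to in the article. It is written as a short Python script so the same snippet also demonstrates the gzip-based SVGZ compression mentioned under Features.

# A minimal hand-written SVG document using the basic shape, path, text, and
# color syntax described in the Design section, then gzip-compressed to .svgz.
import gzip

svg = """<svg xmlns="http://www.w3.org/2000/svg" width="200" height="150">
  <rect x="10" y="10" width="80" height="60" fill="#22ff00"/>
  <circle cx="150" cy="40" r="30" fill="rgb(255,255,127)" stroke="black"/>
  <path d="M 10 90 L 60 120 L 110 90 Z" fill="none" stroke="blue"/>
  <text x="10" y="140" font-size="12">Hello, SVG</text>
</svg>
"""

with open("example.svg", "w", encoding="utf-8") as f:
    f.write(svg)

# An SVGZ file is simply the same XML compressed with gzip.
with gzip.open("example.svgz", "wb") as f:
    f.write(svg.encode("utf-8"))

Opening example.svg in any of the browsers listed under Implementation should render the three shapes and the text, and conforming SVG 1.1 viewers will also display the compressed example.svgz.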
Wikipedia/Scalable_Vector_Graphics
Quantum Man: Richard Feynman's Life in Science is the eighth non-fiction book by the American theoretical physicist Lawrence M. Krauss. The text was initially published on March 21, 2011, by W. W. Norton & Company. Physics World chose the book as Book of the Year 2011. In this book, Krauss concentrates on the scientific biography of the physicist Richard Feynman. == Review == Armed with material like this, any biography is going to be an attractive proposition, and Quantum Man certainly has no shortage of intriguing anecdotes and insights. We get a feel for the ebullience, as well as the maddening irreverence, that defined his character. The problem is that Krauss – also a theoretical physicist – concentrates a little too heavily on the science, rather than the life, of Richard Feynman. He seems overly concerned that his subject's antics might distract readers from fully appreciating quantum physics, an arcane world that Feynman ruled but which baffles most others. As a result, we are presented with pages and pages on the minutiae of electron interactions and photon exchanges at the expense of any human interest. The result is a book that strains to do intellectual justice to Feynman the scientist but leaves him short-changed as a rounded personality. —The Guardian == References == == External links == Official website (dead link) Google Books
Wikipedia/Quantum_Man:_Richard_Feynman's_Life_in_Science
Health physics, also referred to as the science of radiation protection, is the profession devoted to protecting people and their environment from potential radiation hazards, while making it possible to enjoy the beneficial uses of radiation. Health physicists normally require a four-year bachelor’s degree and qualifying experience that demonstrates a professional knowledge of the theory and application of radiation protection principles and closely related sciences. Health physicists principally work at facilities where radionuclides or other sources of ionizing radiation (such as X-ray generators) are used or produced; these include research, industry, education, medical facilities, nuclear power, military, environmental protection, enforcement of government regulations, and decontamination and decommissioning—the combination of education and experience for health physicists depends on the specific field in which the health physicist is engaged. == Sub-specialties == There are many sub-specialties in the field of health physics, including Ionising radiation instrumentation and measurement Internal dosimetry and external dosimetry Radioactive waste management Radioactive contamination, decontamination and decommissioning Radiological engineering (shielding, holdup, etc.) Environmental assessment, radiation monitoring and radon evaluation Operational radiation protection/health physics Particle accelerator physics Radiological emergency response/planning - (e.g., Nuclear Emergency Support Team) Industrial uses of radioactive material Medical health physics Public information and communication involving radioactive materials Biological effects/radiation biology Radiation standards Radiation risk analysis Nuclear power Radioactive materials and homeland security Radiation protection Nanotechnology === Operational health physics === The subfield of operational health physics, also called applied health physics in older sources, focuses on field work and the practical application of health physics knowledge to real-world situations, rather than basic research. === Medical physics === The field of Health Physics is related to the field of medical physics and they are similar to each other in that practitioners rely on much of the same fundamental science (i.e., radiation physics, biology, etc.) in both fields. Health physicists, however, focus on the evaluation and protection of human health from radiation, whereas medical health physicists and medical physicists support the use of radiation and other physics-based technologies by medical practitioners for the diagnosis and treatment of disease. == Radiation protection instruments == Practical ionising radiation measurement is essential for health physics. It enables the evaluation of protection measures, and the assessment of the radiation dose likely, or actually received by individuals. The provision of such instruments is normally controlled by law. In the UK it is the Ionising Radiation Regulations 1999. The measuring instruments for radiation protection are both "installed" (in a fixed position) and portable (hand-held or transportable). === Installed instruments === Installed instruments are fixed in positions which are known to be important in assessing the general radiation hazard in an area. Examples are installed "area" radiation monitors, Gamma interlock monitors, personnel exit monitors, and airborne contamination monitors. 
The area monitor will measure the ambient radiation, usually X-Ray, Gamma or neutrons; these are radiations which can have significant radiation levels over a range in excess of tens of metres from their source, and thereby cover a wide area. Interlock monitors are used in applications to prevent inadvertent exposure of workers to an excess dose by preventing personnel access to an area when a high radiation level is present. Airborne contamination monitors measure the concentration of radioactive particles in the atmosphere to guard against radioactive particles being deposited in the lungs of personnel. Personnel exit monitors are used to monitor workers who are exiting a "contamination controlled" or potentially contaminated area. These can be in the form of hand monitors, clothing frisk probes, or whole body monitors. These monitor the surface of the workers body and clothing to check if any radioactive contamination has been deposited. These generally measure alpha or beta or gamma, or combinations of these. The UK National Physical Laboratory has published a good practice guide through its Ionising Radiation Metrology Forum concerning the provision of such equipment and the methodology of calculating the alarm levels to be used. === Portable instruments === Portable instruments are hand-held or transportable. The hand-held instrument is generally used as a survey meter to check an object or person in detail, or assess an area where no installed instrumentation exists. They can also be used for personnel exit monitoring or personnel contamination checks in the field. These generally measure alpha, beta or gamma, or combinations of these. Transportable instruments are generally instruments that would have been permanently installed, but are temporarily placed in an area to provide continuous monitoring where it is likely there will be a hazard. Such instruments are often installed on trolleys to allow easy deployment, and are associated with temporary operational situations. === Instrument types === A number of commonly used detection instruments are listed below. ionization chambers proportional counters Geiger counters Semiconductor detectors Scintillation detectors The links should be followed for a fuller description of each. === Guidance on use === In the United Kingdom the HSE has issued a user guidance note on selecting the correct radiation measurement instrument for the application concerned [2] Archived 2020-03-15 at the Wayback Machine. This covers all ionising radiation instrument technologies, and is a useful comparative guide. === Radiation dosimeters === Dosimeters are devices worn by the user which measure the radiation dose that the user is receiving. Common types of wearable dosimeters for ionizing radiation include: Quartz fiber dosimeter Film badge dosimeter Thermoluminescent dosimeter Solid state (MOSFET or silicon diode) dosimeter == Units of measure == === Absorbed dose === The fundamental units do not take into account the amount of damage done to matter (especially living tissue) by ionizing radiation. This is more closely related to the amount of energy deposited rather than the charge. This is called the absorbed dose. The gray (Gy), with units J/kg, is the SI unit of absorbed dose, which represents the amount of radiation required to deposit 1 joule of energy in 1 kilogram of any kind of matter. The rad (radiation absorbed dose), is the corresponding traditional unit, which is 0.01 J deposited per kg. 100 rad = 1 Gy. 
=== Equivalent dose === Equal doses of different types or energies of radiation cause different amounts of damage to living tissue. For example, 1 Gy of alpha radiation causes about 20 times as much damage as 1 Gy of X-rays. Therefore, the equivalent dose was defined to give an approximate measure of the biological effect of radiation. It is calculated by multiplying the absorbed dose by a weighting factor WR, which is different for each type of radiation (see table at Relative biological effectiveness#Standardization). This weighting factor is also called the Q (quality factor), or RBE (relative biological effectiveness of the radiation). The sievert (Sv) is the SI unit of equivalent dose. Although it has the same units as the gray, J/kg, it measures something different. For a given type and dose of radiation applied to a certain body part of an organism, it measures the magnitude of an X-ray or gamma radiation dose that, applied to the whole body of the organism, would give the same probability of inducing cancer, according to current statistics. The rem (Roentgen equivalent man) is the traditional unit of equivalent dose. 1 sievert = 100 rem. Because the rem is a relatively large unit, typical equivalent doses are measured in millirem (mrem), 10^−3 rem, or in microsievert (μSv), 10^−6 Sv. 1 mrem = 10 μSv. A unit sometimes used for low-level doses of radiation is the BRET (Background Radiation Equivalent Time). This is the number of days of an average person's background radiation exposure to which the dose is equivalent. This unit is not standardized, and depends on the value used for the average background radiation dose. Using the 2000 UNSCEAR value (below), one BRET unit is equal to about 6.6 μSv. For comparison, based on the 2000 UNSCEAR estimate, the average 'background' dose of natural radiation received by a person per day is about 6.6 μSv (660 μrem). However, local exposures vary, with the yearly average in the US being around 3.6 mSv (360 mrem), and in a small area in India as high as 30 mSv (3 rem). The lethal full-body dose of radiation for a human is around 4–5 Sv (400–500 rem). == History == In 1898, The Röntgen Society (currently the British Institute of Radiology) established a committee on X-ray injuries, thus initiating the discipline of radiation protection. === The term "health physics" === According to Paul Frame: "The term Health Physics is believed to have originated in the Metallurgical Laboratory at the University of Chicago in 1942, but the exact origin is unknown. The term was possibly coined by Robert Stone or Arthur Compton, since Stone was the head of the Health Division and Arthur Compton was the head of the Metallurgical Laboratory. The first task of the Health Physics Section was to design shielding for reactor CP-1 that Enrico Fermi was constructing, so the original HPs were mostly physicists trying to solve health-related problems. The explanation given by Robert Stone was that '...the term Health Physics has been used on the Plutonium Project to define that field in which physical methods are used to determine the existence of hazards to the health of personnel.' A variation was given by Raymond Finkle, a Health Division employee during this time frame. 'The coinage at first merely denoted the physics section of the Health Division... 
the name also served security: 'radiation protection' might arouse unwelcome interest; 'health physics' conveyed nothing.'" == Radiation-related quantities == Radiation-related quantities can be expressed in both SI and traditional non-SI units: activity in becquerels (Bq) or curies (Ci), exposure in coulombs per kilogram (C/kg) or roentgens (R), absorbed dose in grays (Gy) or rads, and equivalent dose in sieverts (Sv) or rems. Although the United States Nuclear Regulatory Commission permits the use of the units curie, rad, and rem alongside SI units, the European Union's units of measurement directives required that their use for "public health ... purposes" be phased out by 31 December 1985. == See also == Health Physics Society Certified Health Physicist Radiological Protection of Patients Radiation protection Society for Radiological Protection The principal UK body concerned with promoting the science and practice of radiation protection. It is the UK national affiliated body to IRPA. IRPA The International Radiation Protection Association, the international body concerned with promoting the science and practice of radiation protection. == References == == External links == The Health Physics Society, a scientific and professional organization whose members specialize in occupational and environmental radiation safety. "The confusing world of radiation dosimetry" - M.A. Boyd, 2009, U.S. Environmental Protection Agency. An account of chronological differences between USA and ICRP dosimetry systems. Q&A: Health effects of radiation exposure, BBC News, 21 July 2011.
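A small sketch (added for illustration, not part of the article) of the unit relationships quoted in the Units of measure section: equivalent dose is the absorbed dose multiplied by the radiation weighting factor WR, 1 Sv = 100 rem, and the factor of about 20 for alpha particles is the value implied by the comparison given above.

# Unit relationships from the "Units of measure" section above.
SIEVERT_PER_REM = 0.01   # 100 rem = 1 Sv

def equivalent_dose_sv(absorbed_dose_gy, weighting_factor):
    """Equivalent dose (Sv) = absorbed dose (Gy) x radiation weighting factor WR."""
    return absorbed_dose_gy * weighting_factor

# Example: 0.05 Gy of alpha radiation (WR about 20) versus the same absorbed
# dose of X-rays (WR = 1).
alpha_sv = equivalent_dose_sv(0.05, 20)   # 1.0 Sv
xray_sv = equivalent_dose_sv(0.05, 1)     # 0.05 Sv
print(f"alpha: {alpha_sv} Sv ({alpha_sv / SIEVERT_PER_REM:.0f} rem)")
print(f"x-ray: {xray_sv} Sv ({xray_sv / SIEVERT_PER_REM:.0f} rem)")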
Wikipedia/Health_physics
Applied physics is physics which is intended for a particular technological or practical use. Applied physics may also refer to: == Scientific journals == Applied Physics, issued as two separate publications: Applied Physics A: Materials Science & Processing Applied Physics B: Lasers and Optics American Institute of Physics journals: Applied Physics Letters, published weekly Applied Physics Reviews, published annually Applied Physics Express, a scientific journal publishing letters == Institutions == Applied Physics, Inc. Applied Physics Corporation, now the Cary Instruments division of Varian Applied Physics Laboratory Ice Station, U.S.A. and Japanese laboratory Johns Hopkins University Applied Physics Laboratory == TV == "Applied Physics" (Sliders), a television episode
Wikipedia/Applied_Physics_(disambiguation)
Materials science is an interdisciplinary field of researching and discovering materials. Materials engineering is an engineering field of finding uses for materials in other fields and industries. The intellectual origins of materials science stem from the Age of Enlightenment, when researchers began to use analytical thinking from chemistry, physics, and engineering to understand ancient, phenomenological observations in metallurgy and mineralogy. Materials science still incorporates elements of physics, chemistry, and engineering. As such, the field was long considered by academic institutions as a sub-field of these related fields. Beginning in the 1940s, materials science began to be more widely recognized as a specific and distinct field of science and engineering, and major technical universities around the world created dedicated schools for its study. Materials scientists emphasize understanding how the history of a material (processing) influences its structure, and thus the material's properties and performance. The understanding of processing -structure-properties relationships is called the materials paradigm. This paradigm is used to advance understanding in a variety of research areas, including nanotechnology, biomaterials, and metallurgy. Materials science is also an important part of forensic engineering and failure analysis – investigating materials, products, structures or components, which fail or do not function as intended, causing personal injury or damage to property. Such investigations are key to understanding, for example, the causes of various aviation accidents and incidents. == History == The material of choice of a given era is often a defining point. Phases such as Stone Age, Bronze Age, Iron Age, and Steel Age are historic, if arbitrary examples. Originally deriving from the manufacture of ceramics and its putative derivative metallurgy, materials science is one of the oldest forms of engineering and applied science. Modern materials science evolved directly from metallurgy, which itself evolved from the use of fire. A major breakthrough in the understanding of materials occurred in the late 19th century, when the American scientist Josiah Willard Gibbs demonstrated that the thermodynamic properties related to atomic structure in various phases are related to the physical properties of a material. Important elements of modern materials science were products of the Space Race; the understanding and engineering of the metallic alloys, and silica and carbon materials, used in building space vehicles enabling the exploration of space. Materials science has driven, and been driven by, the development of revolutionary technologies such as rubbers, plastics, semiconductors, and biomaterials. Before the 1960s (and in some cases decades after), many eventual materials science departments were metallurgy or ceramics engineering departments, reflecting the 19th and early 20th-century emphasis on metals and ceramics. The growth of material science in the United States was catalyzed in part by the Advanced Research Projects Agency, which funded a series of university-hosted laboratories in the early 1960s, "to expand the national program of basic research and training in the materials sciences." In comparison with mechanical engineering, the nascent material science field focused on addressing materials from the macro-level and on the approach that materials are designed on the basis of knowledge of behavior at the microscopic level. 
Due to the expanded knowledge of the link between atomic and molecular processes as well as the overall properties of materials, the design of materials came to be based on specific desired properties. The materials science field has since broadened to include every class of materials, including ceramics, polymers, semiconductors, magnetic materials, biomaterials, and nanomaterials, generally classified into three distinct groups: ceramics, metals, and polymers. The most prominent change in materials science in recent decades has been the active use of computer simulations to find new materials, predict properties, and understand phenomena. == Fundamentals == A material is defined as a substance (most often a solid, but other condensed phases can be included) that is intended to be used for certain applications. There are a myriad of materials around us; they can be found in everything from everyday objects to the new and advanced materials under development, including nanomaterials, biomaterials, and energy materials, to name a few. The basis of materials science is studying the interplay between the structure of materials, the processing methods to make that material, and the resulting material properties. The complex combination of these produces the performance of a material in a specific application. Many features across many length scales affect material performance, from the constituent chemical elements to the microstructure and the macroscopic features imparted by processing. Together with the laws of thermodynamics and kinetics, materials scientists aim to understand and improve materials. === Structure === Structure is one of the most important components of the field of materials science. The very definition of the field holds that it is concerned with the investigation of "the relationships that exist between the structures and properties of materials". Materials science examines the structure of materials from the atomic scale, all the way up to the macro scale. Characterization is the way materials scientists examine the structure of a material. This involves methods such as diffraction with X-rays, electrons or neutrons, and various forms of spectroscopy and chemical analysis such as Raman spectroscopy, energy-dispersive spectroscopy, chromatography, thermal analysis, electron microscope analysis, etc. Structure is studied at the following levels. ==== Atomic structure ==== Atomic structure deals with the atoms of the materials, and how they are arranged to give rise to molecules, crystals, etc. Many of the electrical, magnetic and chemical properties of materials arise from this level of structure. The length scales involved are in angstroms (Å). The chemical bonding and atomic arrangement (crystallography) are fundamental to studying the properties and behavior of any material. ===== Bonding ===== To obtain a full understanding of the material structure and how it relates to its properties, the materials scientist must study how the different atoms, ions and molecules are arranged and bonded to each other. This involves the study and use of quantum chemistry or quantum physics. Solid-state physics, solid-state chemistry and physical chemistry are also involved in the study of bonding and structure. ===== Crystallography ===== Crystallography is the science that examines the arrangement of atoms in crystalline solids. Crystallography is a useful tool for materials scientists. 
One of the fundamental concepts regarding the crystal structure of a material is the unit cell, which is the smallest unit of a crystal lattice (space lattice) that repeats to make up the macroscopic crystal structure. Most common structural materials include parallelepiped and hexagonal lattice types. In single crystals, the effects of the crystalline arrangement of atoms are often easy to see macroscopically, because the natural shapes of crystals reflect the atomic structure. Further, physical properties are often controlled by crystalline defects. The understanding of crystal structures is an important prerequisite for understanding crystallographic defects. Examples of crystal defects include dislocations (edge and screw types), vacancies, self-interstitials, and other linear, planar, and three-dimensional types of defects. Most materials do not occur as a single crystal, but in polycrystalline form, as an aggregate of small crystals or grains with different orientations. Because of this, the powder diffraction method, which uses diffraction patterns of polycrystalline samples with a large number of crystals, plays an important role in structural determination. Most materials have a crystalline structure, but some important materials do not exhibit regular crystal structure. Polymers display varying degrees of crystallinity, and many are completely non-crystalline. Glass, some ceramics, and many natural materials are amorphous, not possessing any long-range order in their atomic arrangements. The study of polymers combines elements of chemical and statistical thermodynamics to give thermodynamic and mechanical descriptions of physical properties. ==== Nanostructure ==== Materials whose atoms and molecules form constituents on the nanoscale (i.e., they form nanostructures) are called nanomaterials. Nanomaterials are the subject of intense research in the materials science community due to the unique properties that they exhibit. Nanostructure deals with objects and structures that are in the 1–100 nm range. In many materials, atoms or molecules agglomerate to form objects at the nanoscale. This causes many interesting electrical, magnetic, optical, and mechanical properties. In describing nanostructures, it is necessary to differentiate between the number of dimensions on the nanoscale. Nanotextured surfaces have one dimension on the nanoscale, i.e., only the thickness of the surface of an object is between 0.1 and 100 nm. Nanotubes have two dimensions on the nanoscale, i.e., the diameter of the tube is between 0.1 and 100 nm; its length could be much greater. Finally, spherical nanoparticles have three dimensions on the nanoscale, i.e., the particle is between 0.1 and 100 nm in each spatial dimension. The terms nanoparticles and ultrafine particles (UFP) are often used synonymously, although UFP can reach into the micrometre range. The term 'nanostructure' is often used when referring to magnetic technology. Nanoscale structure in biology is often called ultrastructure. ==== Microstructure ==== Microstructure is defined as the structure of a prepared surface or thin foil of material as revealed by a microscope above 25× magnification. It deals with objects from 100 nm to a few cm. 
The microstructure of a material (which can be broadly classified into metallic, polymeric, ceramic and composite) can strongly influence physical properties such as strength, toughness, ductility, hardness, corrosion resistance, high/low temperature behavior, wear resistance, and so on. Most of the traditional materials (such as metals and ceramics) are microstructured. The manufacture of a perfect crystal of a material is physically impossible. For example, any crystalline material will contain defects such as precipitates, grain boundaries (Hall–Petch relationship), vacancies, interstitial atoms or substitutional atoms. The microstructure of materials reveals these larger defects and advances in simulation have allowed an increased understanding of how defects can be used to enhance material properties. ==== Macrostructure ==== Macrostructure is the appearance of a material in the scale millimeters to meters, it is the structure of the material as seen with the naked eye. === Properties === Materials exhibit myriad properties, including the following. Mechanical properties, see Strength of materials Chemical properties, see Chemistry Electrical properties, see Electricity Thermal properties, see Thermodynamics Optical properties, see Optics and Photonics Magnetic properties, see Magnetism The properties of a material determine its usability and hence its engineering application. === Processing === Synthesis and processing involves the creation of a material with the desired micro-nanostructure. A material cannot be used in industry if no economically viable production method for it has been developed. Therefore, developing processing methods for materials that are reasonably effective and cost-efficient is vital to the field of materials science. Different materials require different processing or synthesis methods. For example, the processing of metals has historically defined eras such as the Bronze Age and Iron Age and is studied under the branch of materials science named physical metallurgy. Chemical and physical methods are also used to synthesize other materials such as polymers, ceramics, semiconductors, and thin films. As of the early 21st century, new methods are being developed to synthesize nanomaterials such as graphene. === Thermodynamics === Thermodynamics is concerned with heat and temperature and their relation to energy and work. It defines macroscopic variables, such as internal energy, entropy, and pressure, that partly describe a body of matter or radiation. It states that the behavior of those variables is subject to general constraints common to all materials. These general constraints are expressed in the four laws of thermodynamics. Thermodynamics describes the bulk behavior of the body, not the microscopic behaviors of the very large numbers of its microscopic constituents, such as molecules. The behavior of these microscopic particles is described by, and the laws of thermodynamics are derived from, statistical mechanics. The study of thermodynamics is fundamental to materials science. It forms the foundation to treat general phenomena in materials science and engineering, including chemical reactions, magnetism, polarizability, and elasticity. It explains fundamental tools such as phase diagrams and concepts such as phase equilibrium. === Kinetics === Chemical kinetics is the study of the rates at which systems that are out of equilibrium change under the influence of various forces. 
When applied to materials science, kinetics deals with how a material changes with time (moves from a non-equilibrium to an equilibrium state) due to the application of a certain field. It details the rate of various processes evolving in materials, including changes in shape, size, composition and structure. Diffusion is important in the study of kinetics as this is the most common mechanism by which materials undergo change. Kinetics is essential in the processing of materials because, among other things, it details how the microstructure changes with the application of heat. == Research == Materials science is a highly active area of research. Together with materials science departments, physics, chemistry, and many engineering departments are involved in materials research. Materials research covers a broad range of topics; the following non-exhaustive list highlights a few important research areas. === Nanomaterials === Nanomaterials describe, in principle, materials of which a single unit is sized (in at least one dimension) between 1 and 1000 nanometers (10⁻⁹ meter), but usually between 1 nm and 100 nm. Nanomaterials research takes a materials science-based approach to nanotechnology, using advances in materials metrology and synthesis that have been developed in support of microfabrication research. Materials with structure at the nanoscale often have unique optical, electronic, or mechanical properties. The field of nanomaterials is loosely organized, like the traditional field of chemistry, into organic (carbon-based) nanomaterials, such as fullerenes, and inorganic nanomaterials based on other elements, such as silicon. Examples of nanomaterials include fullerenes, carbon nanotubes, nanocrystals, etc. === Biomaterials === A biomaterial is any matter, surface, or construct that interacts with biological systems. Biomaterials science encompasses elements of medicine, biology, chemistry, tissue engineering, and materials science. Biomaterials can be derived either from nature or synthesized in a laboratory using a variety of chemical approaches based on metallic components, polymers, bioceramics, or composite materials. They are often intended or adapted for medical applications, such as biomedical devices which perform, augment, or replace a natural function. Such functions may be benign, like being used for a heart valve, or may be bioactive with a more interactive functionality such as hydroxylapatite-coated hip implants. Biomaterials are also used every day in dental applications, surgery, and drug delivery. For example, a construct with impregnated pharmaceutical products can be placed into the body, which permits the prolonged release of a drug over an extended period of time. A biomaterial may also be an autograft, allograft or xenograft used as an organ transplant material. === Electronic, optical, and magnetic === Semiconductors, metals, and ceramics are used today to form highly complex systems, such as integrated electronic circuits, optoelectronic devices, and magnetic and optical mass storage media. These materials form the basis of our modern computing world, and hence research into these materials is of vital importance. Semiconductors are a traditional example of these types of materials. They are materials that have properties that are intermediate between conductors and insulators. Their electrical conductivities are very sensitive to the concentration of impurities, which allows the use of doping to achieve desirable electronic properties. Hence, semiconductors form the basis of the traditional computer. 
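As a rough illustration of how strongly conductivity tracks the impurity concentration mentioned above, a one-carrier estimate for an n-type semiconductor is sigma ≈ q·n·μ, assuming every donor contributes one free electron. The mobility value below is a typical order-of-magnitude figure for electrons in silicon and, like the complete-ionization assumption, is an illustrative simplification rather than a device-grade model.

```python
Q_E = 1.602e-19     # elementary charge, C
MU_N = 0.135        # rough electron mobility in silicon, m^2/(V*s) (illustrative value)

def n_type_conductivity(donor_density_per_m3, mobility=MU_N):
    """sigma = q * n * mu, assuming each donor atom donates one free electron."""
    return Q_E * donor_density_per_m3 * mobility

# Sweeping the doping level over several orders of magnitude sweeps the
# estimated conductivity proportionally.
for n_d in (1e20, 1e22, 1e24):   # donor concentrations per cubic metre
    print(f"N_d = {n_d:.0e} m^-3  ->  sigma ~ {n_type_conductivity(n_d):.3g} S/m")
```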
This field also includes new areas of research such as superconducting materials, spintronics, metamaterials, etc. The study of these materials involves knowledge of materials science and solid-state physics or condensed matter physics. === Computational materials science === With continuing increases in computing power, simulating the behavior of materials has become possible. This enables materials scientists to understand behavior and mechanisms, design new materials, and explain properties formerly poorly understood. Efforts surrounding integrated computational materials engineering are now focusing on combining computational methods with experiments to drastically reduce the time and effort to optimize materials properties for a given application. This involves simulating materials at all length scales, using methods such as density functional theory, molecular dynamics, Monte Carlo, dislocation dynamics, phase field, finite element, and many more. == Industry == Radical materials advances can drive the creation of new products or even new industries, but stable industries also employ materials scientists to make incremental improvements and troubleshoot issues with currently used materials. Industrial applications of materials science include materials design, cost-benefit tradeoffs in industrial production of materials, processing methods (casting, rolling, welding, ion implantation, crystal growth, thin-film deposition, sintering, glassblowing, etc.), and analytic methods (characterization methods such as electron microscopy, X-ray diffraction, calorimetry, nuclear microscopy (HEFIB), Rutherford backscattering, neutron diffraction, small-angle X-ray scattering (SAXS), etc.). Besides material characterization, the material scientist or engineer also deals with extracting materials and converting them into useful forms. Thus ingot casting, foundry methods, blast furnace extraction, and electrolytic extraction are all part of the required knowledge of a materials engineer. Often the presence, absence, or variation of minute quantities of secondary elements and compounds in a bulk material will greatly affect the final properties of the materials produced. For example, steels are classified based on tenths and hundredths of a weight percent of the carbon and other alloying elements they contain. Thus, the extracting and purifying methods used to extract iron in a blast furnace can affect the quality of steel that is produced. Solid materials are generally grouped into three basic classifications: ceramics, metals, and polymers. This broad classification is based on the empirical makeup and atomic structure of the solid materials, and most solids fall into one of these broad categories. An item that is often made from each of these material types is the beverage container. The materials used for beverage containers accordingly provide different advantages and disadvantages. Ceramic (glass) containers are optically transparent, impervious to the passage of carbon dioxide, relatively inexpensive, and are easily recycled, but are also heavy and fracture easily. Metal (aluminum alloy) is relatively strong, is a good barrier to the diffusion of carbon dioxide, and is easily recycled. However, the cans are opaque, expensive to produce, and are easily dented and punctured. 
Polymers (polyethylene plastic) are relatively strong, can be optically transparent, are inexpensive and lightweight, and can be recyclable, but are not as impervious to the passage of carbon dioxide as aluminum and glass. === Ceramics and glasses === Another application of materials science is the study of ceramics and glasses, typically the most brittle materials with industrial relevance. Many ceramics and glasses exhibit covalent or ionic-covalent bonding with SiO2 (silica) as a fundamental building block. Ceramics – not to be confused with raw, unfired clay – are usually seen in crystalline form. The vast majority of commercial glasses contain a metal oxide fused with silica. At the high temperatures used to prepare glass, the material is a viscous liquid which solidifies into a disordered state upon cooling. Windowpanes and eyeglasses are important examples. Fibers of glass are also used for long-range telecommunication and optical transmission. Scratch-resistant Corning Gorilla Glass is a well-known example of the application of materials science to drastically improve the properties of common components. Engineering ceramics are known for their stiffness and stability under high temperatures, compression and electrical stress. Alumina, silicon carbide, and tungsten carbide are made from a fine powder of their constituents in a process of sintering with a binder. Hot pressing provides higher density material. Chemical vapor deposition can place a film of a ceramic on another material. Cermets are composites of ceramic particles bonded with a metal phase. The wear resistance of tools is derived from cemented carbides, with a metal phase of cobalt or nickel typically added to modify properties. Ceramics can be significantly strengthened for engineering applications using the principle of crack deflection. This process involves the strategic addition of second-phase particles within a ceramic matrix, optimizing their shape, size, and distribution to direct and control crack propagation. This approach enhances fracture toughness, paving the way for the creation of advanced, high-performance ceramics in various industries. === Composites === Another application of materials science in industry is making composite materials. These are structured materials composed of two or more macroscopic phases. Applications range from structural elements such as steel-reinforced concrete, to the thermal insulating tiles, which play a key and integral role in NASA's Space Shuttle thermal protection system, which is used to protect the surface of the shuttle from the heat of re-entry into the Earth's atmosphere. One example is reinforced Carbon-Carbon (RCC), the light gray material, which withstands re-entry temperatures up to 1,510 °C (2,750 °F) and protects the Space Shuttle's wing leading edges and nose cap. RCC is a laminated composite material made from graphite rayon cloth and impregnated with a phenolic resin. After curing at high temperature in an autoclave, the laminate is pyrolyzed to convert the resin to carbon, impregnated with furfuryl alcohol in a vacuum chamber, and cured and pyrolyzed again to convert the furfuryl alcohol to carbon. To provide oxidation resistance for reusability, the outer layers of the RCC are converted to silicon carbide. Other examples can be seen in the "plastic" casings of television sets, cell-phones and so on. 
These plastic casings are usually a composite material made up of a thermoplastic matrix such as acrylonitrile butadiene styrene (ABS) to which calcium carbonate chalk, talc, glass fibers or carbon fibers have been added for strength, bulk, or electrostatic dispersion. These additions may be termed reinforcing fibers or dispersants, depending on their purpose. === Polymers === Polymers are chemical compounds made up of a large number of identical repeating units linked together in chains. Polymers are the raw materials (the resins) used to make what are commonly called plastics and rubber. Plastics and rubbers are the final products, created after one or more polymers or additives have been added to a resin during processing, which is then shaped into a final form. Plastics in former and current widespread use include polyethylene, polypropylene, polyvinyl chloride (PVC), polystyrene, nylons, polyesters, acrylics, polyurethanes, and polycarbonates. Rubbers include natural rubber, styrene-butadiene rubber, chloroprene, and butadiene rubber. Plastics are generally classified as commodity, specialty and engineering plastics. Polyvinyl chloride (PVC) is widely used and inexpensive, and its annual production quantities are large. It lends itself to a vast array of applications, from artificial leather to electrical insulation and cabling, packaging, and containers. Its fabrication and processing are simple and well-established. The versatility of PVC is due to the wide range of plasticisers and other additives that it accepts. The term "additives" in polymer science refers to the chemicals and compounds added to the polymer base to modify its material properties. Polycarbonate would normally be considered an engineering plastic (other examples include PEEK and ABS). Such plastics are valued for their superior strengths and other special material properties. They are usually not used for disposable applications, unlike commodity plastics. Specialty plastics are materials with unique characteristics, such as ultra-high strength, electrical conductivity, electro-fluorescence, high thermal stability, etc. The dividing lines between the various types of plastics are based not on material but rather on their properties and applications. For example, polyethylene (PE) is a cheap, low-friction polymer commonly used to make disposable bags for shopping and trash, and is considered a commodity plastic, whereas medium-density polyethylene (MDPE) is used for underground gas and water pipes, and another variety called ultra-high-molecular-weight polyethylene (UHMWPE) is an engineering plastic which is used extensively as the glide rails for industrial equipment and the low-friction socket in implanted hip joints. === Metal alloys === The alloys of iron (steel, stainless steel, cast iron, tool steel, alloy steels) make up the largest proportion of metals today both by quantity and commercial value. Iron alloyed with various proportions of carbon gives low-, mid- and high-carbon steels. An iron-carbon alloy is only considered steel if the carbon level is between 0.01% and 2.00% by weight (a simple classification sketch follows below). For steels, the hardness and tensile strength are related to the amount of carbon present, with increasing carbon levels also leading to lower ductility and toughness. Heat treatment processes such as quenching and tempering can significantly change these properties, however. In contrast, certain metal alloys exhibit the unusual property that their dimensions and density remain nearly unchanged across a range of temperatures. 
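The carbon-content boundaries given here for steels (and the cast-iron range stated in the next paragraph) can be collected into a small classifier. The sketch below is illustrative only: the 0.01% and 2.00% limits for steel and the 6.67% upper bound for cast iron come from the text, while the low/medium/high-carbon band boundaries are approximate textbook values assumed here, and the function name is invented for this example.

```python
def classify_iron_carbon_alloy(carbon_wt_pct):
    """Classify an iron-carbon alloy by carbon weight percent (approximate thresholds)."""
    if carbon_wt_pct < 0.01:
        return "essentially pure iron (too little carbon to count as steel)"
    if carbon_wt_pct <= 2.00:
        if carbon_wt_pct < 0.30:      # approximate textbook band
            return "low-carbon steel"
        if carbon_wt_pct < 0.60:      # approximate textbook band
            return "medium-carbon steel"
        return "high-carbon steel"
    if carbon_wt_pct < 6.67:
        return "cast iron"
    return "outside the usual iron-carbon alloy range"

for carbon in (0.05, 0.45, 1.00, 3.50):
    print(f"{carbon:4.2f} wt% C -> {classify_iron_carbon_alloy(carbon)}")
```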
Cast iron is defined as an iron–carbon alloy with more than 2.00%, but less than 6.67%, carbon. Stainless steel is defined as a regular steel alloy with a chromium content of greater than 10% by weight. Nickel and molybdenum are typically also added in stainless steels. Other significant metallic alloys are those of aluminium, titanium, copper and magnesium. Copper alloys have been known for a long time (since the Bronze Age), while the alloys of the other three metals have been developed relatively recently. Due to the chemical reactivity of these metals, the electrolytic extraction processes required were only developed relatively recently. The alloys of aluminium, titanium and magnesium are also known and valued for their high strength-to-weight ratios and, in the case of magnesium, their ability to provide electromagnetic shielding. These materials are ideal for situations where high strength-to-weight ratios are more important than bulk cost, such as in the aerospace industry and certain automotive engineering applications. === Semiconductors === A semiconductor is a material whose resistivity lies between that of a conductor and that of an insulator. Modern-day electronics run on semiconductors, and the industry had an estimated US$530 billion market in 2021. A semiconductor's electronic properties can be greatly altered by intentionally introducing impurities, a process referred to as doping. Semiconductor materials are used to build diodes, transistors, light-emitting diodes (LEDs), and analog and digital electric circuits, among their many uses. Semiconductor devices have replaced thermionic devices like vacuum tubes in most applications. Semiconductor devices are manufactured both as single discrete devices and as integrated circuits (ICs), which consist of a number—from a few to millions—of devices manufactured and interconnected on a single semiconductor substrate. Of all the semiconductors in use today, silicon makes up the largest portion both by quantity and commercial value. Monocrystalline silicon is used to produce wafers used in the semiconductor and electronics industry. Gallium arsenide (GaAs) is the second most widely used semiconductor. Due to its higher electron mobility and saturation velocity compared to silicon, it is a material of choice for high-speed electronics applications. These superior properties are compelling reasons to use GaAs circuitry in mobile phones, satellite communications, microwave point-to-point links and higher frequency radar systems. Other semiconductor materials include germanium, silicon carbide, and gallium nitride, which have various applications. == Relation with other fields == Materials science evolved, starting in the 1950s, because it was recognized that to create, discover and design new materials, one had to approach the subject in a unified manner. Thus, materials science and engineering emerged in many ways: renaming and/or combining existing metallurgy and ceramics engineering departments; splitting from existing solid state physics research (itself growing into condensed matter physics); pulling in relatively new polymer engineering and polymer science; recombining from the previous, as well as chemistry, chemical engineering, mechanical engineering, and electrical engineering; and more. The field of materials science and engineering is important both from a scientific perspective and from an applications perspective. 
Materials are of the utmost importance for engineers (and other applied fields) because the use of appropriate materials is crucial when designing systems. As a result, materials science is an increasingly important part of an engineer's education. Materials physics is the use of physics to describe the physical properties of materials. It is a synthesis of physical sciences such as chemistry, solid mechanics, solid state physics, and materials science. Materials physics is considered a subset of condensed matter physics and applies fundamental condensed matter concepts to complex multiphase media, including materials of technological interest. Current fields that materials physicists work in include electronic, optical, and magnetic materials, novel materials and structures, quantum phenomena in materials, nonequilibrium physics, and soft condensed matter physics. New experimental and computational tools are constantly improving how materials systems are modeled and studied, and these are also areas in which materials physicists work. The field is inherently interdisciplinary, and materials scientists and engineers must be aware of and make use of the methods of the physicist, chemist and engineer. Fields such as the life sciences and archaeology can inspire the development of new materials and processes, in bioinspired and paleoinspired approaches. Thus, there remain close relationships with these fields. Conversely, many physicists, chemists and engineers find themselves working in materials science due to the significant overlaps between the fields. == Emerging technologies == == Subdisciplines == The main branches of materials science stem from the four main classes of materials: ceramics, metals, polymers and composites. Ceramic engineering Metallurgy Polymer science and engineering Composite engineering There are additionally broadly applicable, materials-independent endeavors. Materials characterization (spectroscopy, microscopy, diffraction) Computational materials science Materials informatics and selection There are also relatively broad focuses across materials on specific phenomena and techniques. Crystallography Surface science Tribology Microelectronics == Related or interdisciplinary fields == Condensed matter physics, solid-state physics and solid-state chemistry Nanotechnology Mineralogy Supramolecular chemistry Biomaterials science == Professional societies == American Ceramic Society ASM International Association for Iron and Steel Technology Materials Research Society The Minerals, Metals & Materials Society == See also == == References == === Citations === === Bibliography === Ashby, Michael; Hugh Shercliff; David Cebon (2007). Materials: engineering, science, processing and design (1st ed.). Butterworth-Heinemann. ISBN 978-0-7506-8391-3. Askeland, Donald R.; Pradeep P. Phulé (2005). The Science & Engineering of Materials (5th ed.). Thomson-Engineering. ISBN 978-0-534-55396-8. Callister, Jr., William D. (2000). Materials Science and Engineering – An Introduction (5th ed.). John Wiley and Sons. ISBN 978-0-471-32013-5. Eberhart, Mark (2003). Why Things Break: Understanding the World by the Way It Comes Apart. Harmony. ISBN 978-1-4000-4760-4. Gaskell, David R. (1995). Introduction to the Thermodynamics of Materials (4th ed.). Taylor and Francis Publishing. ISBN 978-1-56032-992-3. González-Viñas, W. & Mancini, H.L. (2004). An Introduction to Materials Science. Princeton University Press. ISBN 978-0-691-07097-1. Gordon, James Edward (1984). 
The New Science of Strong Materials or Why You Don't Fall Through the Floor (eissue ed.). Princeton University Press. ISBN 978-0-691-02380-9. Mathews, F.L. & Rawlings, R.D. (1999). Composite Materials: Engineering and Science. Boca Raton: CRC Press. ISBN 978-0-8493-0621-1. Lewis, P.R.; Reynolds, K. & Gagg, C. (2003). Forensic Materials Engineering: Case Studies. Boca Raton: CRC Press. ISBN 9780849311826. Wachtman, John B. (1996). Mechanical Properties of Ceramics. New York: Wiley-Interscience, John Wiley & Son's. ISBN 978-0-471-13316-2. Walker, P., ed. (1993). Chambers Dictionary of Materials Science and Technology. Chambers Publishing. ISBN 978-0-550-13249-9. Mahajan, S. (2015). "The role of materials science in the evolution of microelectronics". MRS Bulletin. 12 (40): 1079–1088. Bibcode:2015MRSBu..40.1079M. doi:10.1557/mrs.2015.276. == Further reading == Timeline of Materials Science Archived 2011-07-27 at the Wayback Machine at The Minerals, Metals & Materials Society (TMS) – accessed March 2007 Burns, G.; Glazer, A.M. (1990). Space Groups for Scientists and Engineers (2nd ed.). Boston: Academic Press, Inc. ISBN 978-0-12-145761-7. Cullity, B.D. (1978). Elements of X-Ray Diffraction (2nd ed.). Reading, Massachusetts: Addison-Wesley Publishing Company. ISBN 978-0-534-55396-8. Giacovazzo, C; Monaco HL; Viterbo D; Scordari F; Gilli G; Zanotti G; Catti M (1992). Fundamentals of Crystallography. Oxford: Oxford University Press. ISBN 978-0-19-855578-0. Green, D.J.; Hannink, R.; Swain, M.V. (1989). Transformation Toughening of Ceramics. Boca Raton: CRC Press. ISBN 978-0-8493-6594-2. Lovesey, S. W. (1984). Theory of Neutron Scattering from Condensed Matter; Volume 1: Neutron Scattering. Oxford: Clarendon Press. ISBN 978-0-19-852015-3. Lovesey, S. W. (1984). Theory of Neutron Scattering from Condensed Matter; Volume 2: Condensed Matter. Oxford: Clarendon Press. ISBN 978-0-19-852017-7. O'Keeffe, M.; Hyde, B.G. (1996). "Crystal Structures; I. Patterns and Symmetry". Zeitschrift für Kristallographie – Crystalline Materials. 212 (12). Washington, DC: Mineralogical Society of America, Monograph Series: 899. Bibcode:1997ZK....212..899K. doi:10.1524/zkri.1997.212.12.899. ISBN 978-0-939950-40-9. Squires, G.L. (1996). Introduction to the Theory of Thermal Neutron Scattering (2nd ed.). Mineola, New York: Dover Publications Inc. ISBN 978-0-486-69447-4. Young, R.A., ed. (1993). The Rietveld Method. Oxford: Oxford University Press & International Union of Crystallography. ISBN 978-0-19-855577-3. == External links == MS&T conference organized by the main materials societies MIT OpenCourseWare for MSE
Wikipedia/Materials_science_and_engineering
Engineering physics (EP), sometimes engineering science, is the field of study combining pure science disciplines (such as physics, mathematics, chemistry or biology) and engineering disciplines (computer, nuclear, electrical, aerospace, medical, materials, mechanical, etc.). In many languages, the term technical physics is also used. It has been used since 1861 by the German physics teacher J. Frick in his publications. == Terminology == In some countries, both what would be translated as "engineering physics" and what would be translated as "technical physics" are disciplines leading to academic degrees. In China, for example, with the former specializing in nuclear power research (i.e. nuclear engineering), and the latter closer to engineering physics. In some universities and their institutions, an engineering physics (or applied physics) major is a discipline or specialization within the scope of engineering science, or applied science. Several related names have existed since the inception of the interdisciplinary field. For example, some university courses are called or contain the phrase "physical technologies" or "physical engineering sciences" or "physical technics". In some cases, a program formerly called "physical engineering" has been renamed "applied physics" or has evolved into specialized fields such as "photonics engineering". == Expertise == Unlike traditional engineering disciplines, engineering science or engineering physics is not necessarily confined to a particular branch of science, engineering or physics. Instead, engineering science or engineering physics is meant to provide a more thorough grounding in applied physics for a selected specialty such as optics, quantum physics, materials science, applied mechanics, electronics, nanotechnology, microfabrication, microelectronics, computing, photonics, mechanical engineering, electrical engineering, nuclear engineering, biophysics, control theory, aerodynamics, energy, solid-state physics, etc. It is the discipline devoted to creating and optimizing engineering solutions through enhanced understanding and integrated application of mathematical, scientific, statistical, and engineering principles. The discipline is also meant for cross-functionality and bridges the gap between theoretical science and practical engineering with emphasis in research and development, design, and analysis. == Degrees == In many universities, engineering science programs may be offered at the levels of B.Tech., B.Sc., M.Sc. and Ph.D. Usually, a core of basic and advanced courses in mathematics, physics, chemistry, and biology forms the foundation of the curriculum, while typical elective areas may include fluid dynamics, quantum physics, economics, plasma physics, relativity, solid mechanics, operations research, quantitative finance, information technology and engineering, dynamical systems, bioengineering, environmental engineering, computational engineering, engineering mathematics and statistics, solid-state devices, materials science, electromagnetism, nanoscience, nanotechnology, energy, and optics. == Awards == There are awards for excellence in engineering physics. For example, Princeton University's Jeffrey O. Kephart '80 Prize is awarded annually to the graduating senior with the best record. Since 2002, the German Physical Society has awarded the Georg-Simon-Ohm-Preis for outstanding research in this field. 
== See also == Applied physics Engineering Engineering science and mechanics Environmental engineering science Index of engineering science and mechanics articles Industrial engineering == Notes and references == == External links == "Engineering Physics at Xavier" "The Engineering Physicist Profession" "Engineering Physicist Professional Profile" Society of Engineering Science Inc. Archived 2017-08-07 at the Wayback Machine
Wikipedia/Engineering_Physics
Cavity optomechanics is a branch of physics which focuses on the interaction between light and mechanical objects on low-energy scales. It is a cross-disciplinary field drawing on optics, quantum optics, solid-state physics and materials science. The motivation for research on cavity optomechanics comes from fundamental effects of quantum theory and gravity, as well as technological applications, such as quantum precision measurement. The name of the field relates to the main effect of interest: the enhancement of the radiation pressure interaction between light (photons) and matter using optical resonators (cavities). It first became relevant in the context of gravitational wave detection, since optomechanical effects must be taken into account in interferometric gravitational wave detectors. Furthermore, one may envision optomechanical structures that would allow the realization of Schrödinger-cat states. Macroscopic objects consisting of billions of atoms share collective degrees of freedom which may behave quantum mechanically (e.g. a sphere of micrometer diameter being in a spatial superposition between two different places). Such a quantum state of motion would allow researchers to experimentally investigate decoherence, which describes the transition of objects from states that are described by quantum mechanics to states that are described by Newtonian mechanics. Optomechanical structures provide new methods to test the predictions of quantum mechanics and decoherence models and thereby might help answer some of the most fundamental questions in modern physics. There is a broad range of experimental optomechanical systems which are almost equivalent in their description, but completely different in size, mass, and frequency. Cavity optomechanics was featured as the most recent "milestone of photon history" in Nature Photonics, alongside well-established concepts and technologies like quantum information, Bell inequalities and the laser. == Concepts of cavity optomechanics == === Physical processes === ==== Stokes and anti-Stokes scattering ==== The most elementary light-matter interaction is a light beam scattering off an arbitrary object (atom, molecule, nanobeam, etc.). There is always elastic light scattering, with the outgoing light frequency identical to the incoming frequency ω ′ = ω {\displaystyle \omega '=\omega } . Inelastic scattering, in contrast, is accompanied by excitation or de-excitation of the material object (e.g. internal atomic transitions may be excited). However, it is always possible to have Brillouin scattering independent of the internal electronic details of atoms or molecules due to the object's mechanical vibrations: ω ′ = ω ± ω m , {\displaystyle \omega '=\omega \pm \omega _{m},} where ω m {\displaystyle \omega _{m}} is the vibrational frequency. The vibrations gain or lose energy, respectively, for these Stokes/anti-Stokes processes, while optical sidebands are created around the incoming light frequency: ω ′ = ω ∓ ω m . {\displaystyle \omega '=\omega \mp \omega _{m}.} If Stokes and anti-Stokes scattering occur at an equal rate, the vibrations will only heat up the object. However, an optical cavity can be used to suppress the (anti-)Stokes process, which reveals the principle of the basic optomechanical setup: a laser-driven optical cavity is coupled to the mechanical vibrations of some object. The purpose of the cavity is to select optical frequencies (e.g. to suppress the Stokes process) that resonantly enhance the light intensity and to enhance the sensitivity to the mechanical vibrations. 
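To make the sideband-selection argument concrete, the sketch below evaluates a Lorentzian cavity response at the Stokes and anti-Stokes frequencies for a drive red-detuned by one mechanical frequency. It is an editorial illustration, not part of the original article, and the linewidth and mechanical frequency are invented, order-of-magnitude values.

```python
import numpy as np

# Illustrative (made-up) parameters: a sideband-resolved cavity.
omega_m = 2 * np.pi * 10e6   # mechanical frequency, rad/s
kappa   = 2 * np.pi * 1e6    # cavity linewidth (FWHM), rad/s
Delta   = -omega_m           # laser detuned to the red sideband

def cavity_weight(offset_from_resonance):
    """Lorentzian weight of the cavity response at a given offset
    from the cavity resonance (normalized to 1 on resonance)."""
    return (kappa / 2) ** 2 / ((kappa / 2) ** 2 + offset_from_resonance ** 2)

# Scattered light appears at the laser frequency -/+ omega_m (Stokes / anti-Stokes),
# i.e. at an offset of Delta -/+ omega_m from the cavity resonance.
stokes      = cavity_weight(Delta - omega_m)   # down-shifted photon
anti_stokes = cavity_weight(Delta + omega_m)   # up-shifted photon

print(f"anti-Stokes weight: {anti_stokes:.3g}")   # ~1 (resonant, cooling)
print(f"Stokes weight:      {stokes:.3g}")        # << 1 (suppressed)
print(f"ratio:              {anti_stokes / stokes:.3g}")
```

With these numbers the anti-Stokes sideband falls on the cavity resonance and is enhanced by roughly three orders of magnitude relative to the Stokes sideband, which is the suppression mechanism described above.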
The setup displays features of a true two-way interaction between light and mechanics, which is in contrast to optical tweezers, optical lattices, or vibrational spectroscopy, where the light field controls the mechanics (or vice versa) but the loop is not closed. ==== Radiation pressure force ==== Another, equivalent way to interpret the principle of optomechanical cavities is by using the concept of radiation pressure. According to the quantum theory of light, every photon with wavenumber k {\displaystyle k} carries a momentum p = ℏ k {\displaystyle p=\hbar k} , where ℏ {\displaystyle \hbar } is the reduced Planck constant. This means that a photon reflected off a mirror surface transfers a momentum Δ p = 2 ℏ k {\displaystyle \Delta p=2\hbar k} onto the mirror due to the conservation of momentum. This effect is extremely small and cannot be observed on most everyday objects; it becomes more significant when the mass of the mirror is very small and/or the number of photons is very large (i.e. high intensity of the light). Since the momentum of photons is extremely small and not enough to change the position of a suspended mirror significantly, the interaction needs to be enhanced. One possible way to do this is by using optical cavities. If a photon is enclosed between two mirrors, where one is the oscillator and the other is a heavy fixed one, it will bounce off the mirrors many times and transfer its momentum every time it hits the mirrors. The number of times a photon can transfer its momentum is directly related to the finesse of the cavity, which can be improved with highly reflective mirror surfaces. The radiation pressure of the photons does not simply shift the suspended mirror further and further away, as the effect on the cavity light field must be taken into account: if the mirror is displaced, the cavity's length changes, which also alters the cavity resonance frequency. Therefore, the detuning between the shifted cavity resonance and the unchanged laser driving frequency is modified. This detuning determines the light amplitude inside the cavity: at smaller levels of detuning more light actually enters the cavity because it is closer to the cavity resonance frequency. Since the light amplitude, i.e. the number of photons inside the cavity, causes the radiation pressure force and consequently the displacement of the mirror, the loop is closed: the radiation pressure force effectively depends on the mirror position. Another advantage of optical cavities is that the modulation of the cavity length through an oscillating mirror can directly be seen in the spectrum of the cavity. ==== Optical spring effect ==== Some first effects of the light on the mechanical resonator can be captured by converting the radiation pressure force into a potential, d d x V rad ( x ) = − F ( x ) , {\displaystyle {\frac {d}{dx}}V_{\text{rad}}(x)=-F(x),} and adding it to the intrinsic harmonic oscillator potential of the mechanical oscillator, where F ( x ) {\displaystyle F(x)} is the radiation pressure force. This combined potential reveals the possibility of static multi-stability in the system, i.e. the potential can feature several stable minima. In addition, the slope of F ( x ) {\displaystyle F(x)} can be understood as a modification of the mechanical spring constant, D = D 0 − d F d x . {\displaystyle D=D_{0}-{\frac {dF}{dx}}.} This effect is known as the optical spring effect (light-induced spring constant). 
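A minimal static sketch of this feedback loop is given below: it assumes a Lorentzian intracavity photon number as a function of mirror displacement and evaluates the radiation-pressure force F(x) and the light-induced spring contribution dF/dx numerically. The cavity length, linewidth, photon number and rest detuning are all invented, order-of-magnitude assumptions, not values from the article.

```python
import numpy as np

# Static sketch of the radiation-pressure force F(x) on the movable mirror
# and the resulting light-induced spring constant dF/dx.
hbar = 1.054571817e-34            # J*s
wavelength = 1064e-9              # m, drive laser (assumed)
omega_cav  = 2 * np.pi * 3e8 / wavelength
L      = 1e-2                     # m, cavity length (assumed)
G      = omega_cav / L            # frequency pull per displacement, rad/(s*m)
kappa  = 2 * np.pi * 1e6          # rad/s, cavity linewidth (assumed)
n_max  = 1e8                      # resonant intracavity photon number (assumed)
Delta0 = kappa / 2                # detuning at the mirror rest position (assumed)

def n_cav(x):
    """Intracavity photon number vs. mirror displacement x (Lorentzian)."""
    Delta = Delta0 + G * x        # a displacement shifts the detuning
    return n_max * (kappa / 2) ** 2 / ((kappa / 2) ** 2 + Delta ** 2)

def F_rad(x):
    """Radiation-pressure force: hbar*G per photon times the photon number."""
    return hbar * G * n_cav(x)

# Light-induced spring contribution at the rest position, D = D0 - dF/dx:
dx = 1e-15                        # m, step for a numerical derivative
dFdx = (F_rad(dx) - F_rad(-dx)) / (2 * dx)
print(f"radiation force at rest position:   {F_rad(0.0):.3e} N")
print(f"optical spring contribution -dF/dx: {-dFdx:.3e} N/m")
```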
However, the model is incomplete as it neglects retardation effects due to the finite cavity photon decay rate κ {\displaystyle \kappa } . The force follows the motion of the mirror only with some time delay, which leads to effects like friction. For example, assume the equilibrium position sits somewhere on the rising slope of the resonance. In thermal equilibrium, there will be oscillations around this position that do not follow the shape of the resonance because of retardation. The consequence of this delayed radiation force during one cycle of oscillation is that work is performed, in this particular case it is negative, ∮ F d x < 0 {\textstyle \oint F\,dx<0} , i.e. the radiation force extracts mechanical energy (there is extra, light-induced damping). This can be used to cool down the mechanical motion and is referred to as optical or optomechanical cooling. It is important for reaching the quantum regime of the mechanical oscillator where thermal noise effects on the device become negligible. Similarly, if the equilibrium position sits on the falling slope of the cavity resonance, the work is positive and the mechanical motion is amplified. In this case the extra, light-induced damping is negative and leads to amplification of the mechanical motion (heating). Radiation-induced damping of this kind has first been observed in pioneering experiments by Braginsky and coworkers in 1970. ==== Quantized energy transfer ==== Another explanation for the basic optomechanical effects of cooling and amplification can be given in a quantized picture: by detuning the incoming light from the cavity resonance to the red sideband, the photons can only enter the cavity if they take phonons with energy ℏ ω m {\displaystyle \hbar \omega _{m}} from the mechanics; it effectively cools the device until a balance with heating mechanisms from the environment and laser noise is reached. Similarly, it is also possible to heat structures (amplify the mechanical motion) by detuning the driving laser to the blue side; in this case the laser photons scatter into a cavity photon and create an additional phonon in the mechanical oscillator. The principle can be summarized as: phonons are converted into photons when cooled and vice versa in amplification. === Three regimes of operation: cooling, heating, resonance === The basic behaviour of the optomechanical system can generally be divided into different regimes, depending on the detuning between the laser frequency and the cavity resonance frequency Δ = ω L − ω cav {\displaystyle \Delta =\omega _{L}-\omega _{\text{cav}}} : Red-detuned regime, Δ < 0 {\displaystyle \Delta <0} (most prominent effects on the red sideband, Δ = − ω m {\displaystyle \Delta =-\omega _{m}} ): In this regime state exchange between two resonant oscillators can occur (i.e. a beam-splitter in quantum optics language). This can be used for state transfer between phonons and photons (which requires the so-called "strong coupling regime") or the above-mentioned optical cooling. Blue-detuned regime, Δ > 0 {\displaystyle \Delta >0} (most prominent effects on the blue sideband, Δ = + ω m {\displaystyle \Delta =+\omega _{m}} ): This regime describes "two-mode squeezing". It can be used to achieve quantum entanglement, squeezing, and mechanical "lasing" (amplification of the mechanical motion to self-sustained optomechanical oscillations / limit cycle oscillations), if the growth of the mechanical energy overwhelms the intrinsic losses (mainly mechanical friction). 
On-resonance regime, Δ = 0 {\displaystyle \Delta =0} : In this regime the cavity is simply operated as an interferometer to read the mechanical motion. The optical spring effect also depends on the detuning. It can be observed for high levels of detuning ( Δ ≫ ω m , κ {\displaystyle \Delta \gg \omega _{m},\kappa } ) and its strength varies with detuning and the laser drive. === Mathematical treatment === ==== Hamiltonian ==== The standard optomechanical setup is a Fabry–Pérot cavity, where one mirror is movable and thus provides an additional mechanical degree of freedom. This system can be mathematically described by a single optical cavity mode coupled to a single mechanical mode. The coupling originates from the radiation pressure of the light field that eventually moves the mirror, which changes the cavity length and resonance frequency. The optical mode is driven by an external laser. This system can be described by the following effective Hamiltonian: H tot = ℏ ω cav ( x ) a † a + ℏ ω m b † b + i ℏ E ( a e i ω L t − a † e − i ω L t ) {\displaystyle H_{\text{tot}}=\hbar \omega _{\text{cav}}(x)a^{\dagger }a+\hbar \omega _{m}b^{\dagger }b+i\hbar E\left(ae^{i\omega _{L}t}-a^{\dagger }e^{-i\omega _{L}t}\right)} where a {\displaystyle a} and b {\displaystyle b} are the bosonic annihilation operators of the given cavity mode and the mechanical resonator respectively, ω cav {\displaystyle \omega _{\text{cav}}} is the frequency of the optical mode, x {\displaystyle x} is the position of the mechanical resonator, ω m {\displaystyle \omega _{m}} is the mechanical mode frequency, ω L {\displaystyle \omega _{L}} is the driving laser frequency, and E {\displaystyle E} is the amplitude. It satisfies the commutation relations [ a , a † ] = [ b , b † ] = 1. {\displaystyle [a,a^{\dagger }]=[b,b^{\dagger }]=1.} ω c a v {\displaystyle \omega _{cav}} is now dependent on x {\displaystyle x} . The last term describes the driving, given by E = P κ ℏ ω L {\displaystyle E={\sqrt {\frac {P\kappa }{\hbar \omega _{L}}}}} where P {\displaystyle P} is the input power coupled to the optical mode under consideration and κ {\displaystyle \kappa } its linewidth. The system is coupled to the environment so the full treatment of the system would also include optical and mechanical dissipation (denoted by κ {\displaystyle \kappa } and Γ {\displaystyle \Gamma } respectively) and the corresponding noise entering the system. The standard optomechanical Hamiltonian is obtained by getting rid of the explicit time dependence of the laser driving term and separating the optomechanical interaction from the free optical oscillator. This is done by switching into a reference frame rotating at the laser frequency ω L {\displaystyle \omega _{L}} (in which case the optical mode annihilation operator undergoes the transformation a → a e − i ω L t {\displaystyle a\rightarrow ae^{-i\omega _{L}t}} ) and applying a Taylor expansion on ω cav {\displaystyle \omega _{\text{cav}}} . Quadratic and higher-order coupling terms are usually neglected, such that the standard Hamiltonian becomes H tot = − ℏ Δ a † a + ℏ ω m b † b − ℏ g 0 a † a x x zpf + i ℏ E ( a − a † ) {\displaystyle H_{\text{tot}}=-\hbar \Delta a^{\dagger }a+\hbar \omega _{m}b^{\dagger }b-\hbar g_{0}a^{\dagger }a{\frac {x}{x_{\text{zpf}}}}+i\hbar E\left(a-a^{\dagger }\right)} where Δ = ω L − ω cav {\displaystyle \Delta =\omega _{L}-\omega _{\text{cav}}} the laser detuning and the position operator x = x zpf ( b + b † ) {\displaystyle x=x_{\text{zpf}}(b+b^{\dagger })} . 
The first two terms ( − ℏ Δ a † a {\displaystyle -\hbar \Delta a^{\dagger }a} and ℏ ω m b † b {\displaystyle \hbar \omega _{m}b^{\dagger }b} ) are the free optical and mechanical Hamiltonians respectively. The third term contains the optomechanical interaction, where g 0 = d ω cav d x | x = 0 x zpf {\displaystyle g_{0}=\left.{\tfrac {d\omega _{\text{cav}}}{dx}}\right|_{x=0}x_{\text{zpf}}} is the single-photon optomechanical coupling strength (also known as the bare optomechanical coupling). It determines the amount of cavity resonance frequency shift if the mechanical oscillator is displaced by the zero point uncertainty x zpf = ℏ / 2 m eff ω m {\textstyle x_{\text{zpf}}={\sqrt {\hbar /2m_{\text{eff}}\omega _{m}}}} , where m eff {\displaystyle m_{\text{eff}}} is the effective mass of the mechanical oscillator. It is sometimes more convenient to use the frequency pull parameter, or G = g 0 x zpf {\displaystyle G={\frac {g_{0}}{x_{\text{zpf}}}}} , to determine the frequency change per displacement of the mirror. For example, the optomechanical coupling strength of a Fabry–Pérot cavity of length L {\displaystyle L} with a moving end-mirror can be directly determined from the geometry to be g 0 = ω cav ( 0 ) x zpf L {\displaystyle g_{0}={\frac {\omega _{\text{cav}}(0)x_{\text{zpf}}}{L}}} . This standard Hamiltonian H tot {\displaystyle H_{\text{tot}}} is based on the assumption that only one optical and mechanical mode interact. In principle, each optical cavity supports an infinite number of modes and mechanical oscillators which have more than a single oscillation/vibration mode. The validity of this approach relies on the possibility to tune the laser in such a way that it only populates a single optical mode (implying that the spacing between the cavity modes needs to be sufficiently large). Furthermore, scattering of photons to other modes is supposed to be negligible, which holds if the mechanical (motional) sidebands of the driven mode do not overlap with other cavity modes; i.e. if the mechanical mode frequency is smaller than the typical separation of the optical modes. ==== Linearization ==== The single-photon optomechanical coupling strength g 0 {\displaystyle g_{0}} is usually a small frequency, much smaller than the cavity decay rate κ {\displaystyle \kappa } , but the effective optomechanical coupling can be enhanced by increasing the drive power. With a strong enough drive, the dynamics of the system can be considered as quantum fluctuations around a classical steady state, i.e. a = α + δ a {\displaystyle a=\alpha +\delta a} , where α {\displaystyle \alpha } is the mean light field amplitude and δ a {\displaystyle \delta a} denotes the fluctuations. Expanding the photon number a † a {\displaystyle a^{\dagger }a} , the term α 2 {\displaystyle ~\alpha ^{2}} can be omitted as it leads to a constant radiation pressure force which simply shifts the resonator's equilibrium position. The linearized optomechanical Hamiltonian H lin {\displaystyle H_{\text{lin}}} can be obtained by neglecting the second order term δ a † δ a {\displaystyle ~\delta a^{\dagger }\delta a} : H lin = − ℏ Δ δ a † δ a + ℏ ω m b † b − ℏ g ( δ a + δ a † ) ( b + b † ) {\displaystyle H_{\text{lin}}=-\hbar \Delta \delta a^{\dagger }\delta a+\hbar \omega _{m}b^{\dagger }b-\hbar g(\delta a+\delta a^{\dagger })(b+b^{\dagger })} where g = g 0 α {\displaystyle g=g_{0}\alpha } . While this Hamiltonian is a quadratic function, it is considered "linearized" because it leads to linear equations of motion. 
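As a concrete illustration of the standard Hamiltonian quoted above, the following sketch builds it as an explicit matrix in a truncated Fock basis with plain NumPy, setting ℏ = 1. The truncation dimensions and the parameter values (expressed in units of the mechanical frequency) are arbitrary choices made only for illustration.

```python
import numpy as np

def destroy(n):
    """Bosonic annihilation operator in an n-dimensional Fock basis."""
    return np.diag(np.sqrt(np.arange(1, n)), k=1)

# Truncated Hilbert-space dimensions (illustrative; real systems need more levels).
N_opt, N_mech = 6, 6
a = np.kron(destroy(N_opt), np.eye(N_mech))   # cavity mode
b = np.kron(np.eye(N_opt), destroy(N_mech))   # mechanical mode

# Parameters in units of the mechanical frequency omega_m (made-up values).
omega_m = 1.0
Delta   = -1.0          # red-detuned drive, Delta = -omega_m
g0      = 0.05          # single-photon optomechanical coupling
E       = 0.3           # drive amplitude (frame rotating at the laser frequency)

# H = -Delta a^dag a + omega_m b^dag b - g0 a^dag a (b + b^dag) + iE (a - a^dag),
# i.e. the standard Hamiltonian quoted above, with hbar set to 1
# and x/x_zpf written as b + b^dag.
H = (-Delta * a.conj().T @ a
     + omega_m * b.conj().T @ b
     - g0 * a.conj().T @ a @ (b + b.conj().T)
     + 1j * E * (a - a.conj().T))

print("Hamiltonian is Hermitian:", np.allclose(H, H.conj().T))
print("lowest eigenvalues:", np.round(np.linalg.eigvalsh(H)[:4], 3))
```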
It is a valid description of many experiments, where g 0 {\displaystyle g_{0}} is typically very small and needs to be enhanced by the driving laser. For a realistic description, dissipation should be added to both the optical and the mechanical oscillator. The driving term from the standard Hamiltonian is not part of the linearized Hamiltonian, since it is the source of the classical light amplitude α {\displaystyle \alpha } around which the linearization was executed. With a particular choice of detuning, different phenomena can be observed (see also the section about physical processes). The clearest distinction can be made between the following three cases: Δ ≈ − ω m {\displaystyle \Delta \approx -\omega _{m}} : a rotating wave approximation of the linearized Hamiltonian, where one omits all non-resonant terms, reduces the coupling Hamiltonian to a beamsplitter operator, H int = ℏ g 0 ( δ a † b + δ a b † ) {\displaystyle H_{\text{int}}=\hbar g_{0}(\delta a^{\dagger }b+\delta ab^{\dagger })} . This approximation works best on resonance; i.e. if the detuning becomes exactly equal to the negative mechanical frequency. Negative detuning (red detuning of the laser from the cavity resonance) by an amount equal to the mechanical mode frequency favors the anti-Stokes sideband and leads to a net cooling of the resonator. Laser photons absorb energy from the mechanical oscillator by annihilating phonons in order to become resonant with the cavity. Δ ≈ ω m {\displaystyle \Delta \approx \omega _{m}} : a rotating wave approximation of the linearized Hamiltonian leads to other resonant terms. The coupling Hamiltonian takes the form H int = ℏ g 0 ( δ a b + δ a † b † ) {\displaystyle H_{\text{int}}=\hbar g_{0}(\delta ab+\delta a^{\dagger }b^{\dagger })} , which is proportional to the two-mode squeezing operator. Therefore, two-mode squeezing and entanglement between the mechanical and optical modes can be observed with this parameter choice. Positive detuning (blue detuning of the laser from the cavity resonance) can also lead to instability. The Stokes sideband is enhanced, i.e. the laser photons shed energy, increasing the number of phonons and becoming resonant with the cavity in the process. Δ = 0 {\displaystyle \Delta =0} : In this case of driving on-resonance, all the terms in H int = ℏ g 0 ( δ a + δ a † ) ( b + b † ) {\displaystyle H_{\text{int}}=\hbar g_{0}(\delta a+\delta a^{\dagger })(b+b^{\dagger })} must be considered. The optical mode experiences a shift proportional to the mechanical displacement, which translates into a phase shift of the light transmitted through (or reflected off) the cavity. The cavity serves as an interferometer augmented by the factor of the optical finesse and can be used to measure very small displacements. This setup has enabled LIGO to detect gravitational waves. ==== Equations of motion ==== From the linearized Hamiltonian, the so-called linearized quantum Langevin equations, which govern the dynamics of the optomechanical system, can be derived when dissipation and noise terms to the Heisenberg equations of motion are added. 
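The enhancement of the coupling by the drive can be estimated numerically. The sketch below combines the relation E = √(Pκ/ℏω_L) given above with the standard steady-state amplitude of a driven, damped cavity mode, α = E/(κ/2 − iΔ), which is not quoted in the article but is the usual result; all parameter values, including g₀, are assumptions chosen only to give representative orders of magnitude.

```python
import numpy as np

hbar = 1.054571817e-34

# Made-up but representative parameters for a microcavity experiment.
P        = 10e-6                  # W, input power
lam      = 1550e-9                # m, drive wavelength
omega_L  = 2 * np.pi * 3e8 / lam  # rad/s, laser frequency
kappa    = 2 * np.pi * 5e6        # rad/s, cavity linewidth
omega_m  = 2 * np.pi * 5e6        # rad/s, mechanical frequency
Delta    = -omega_m               # red-detuned drive
g0       = 2 * np.pi * 100.0      # rad/s, single-photon coupling (assumed)

# Drive amplitude E = sqrt(P*kappa/(hbar*omega_L)) and the standard
# steady-state intracavity amplitude of a driven, damped mode.
E     = np.sqrt(P * kappa / (hbar * omega_L))
alpha = E / (kappa / 2 - 1j * Delta)
n_cav = abs(alpha) ** 2           # mean intracavity photon number
g     = g0 * np.sqrt(n_cav)       # enhanced coupling, g = g0*|alpha|

print(f"intracavity photons n_cav ~ {n_cav:.3e}")
print(f"bare coupling g0/2pi      = {g0 / (2 * np.pi):.1f} Hz")
print(f"enhanced coupling g/2pi   = {g / (2 * np.pi):.3e} Hz")
```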
δ a ˙ = ( i Δ − κ / 2 ) δ a + i g ( b + b † ) − κ a in b ˙ = − ( i ω m + Γ / 2 ) b + i g ( δ a + δ a † ) − Γ b in {\displaystyle {\begin{aligned}\delta {\dot {a}}&=(i\Delta -\kappa /2)\delta a+ig(b+b^{\dagger })-{\sqrt {\kappa }}a_{\text{in}}\\[1ex]{\dot {b}}&=-(i\omega _{m}+\Gamma /2)b+ig(\delta a+\delta a^{\dagger })-{\sqrt {\Gamma }}b_{\text{in}}\end{aligned}}} Here a in {\displaystyle a_{\text{in}}} and b in {\displaystyle b_{\text{in}}} are the input noise operators (either quantum or thermal noise) and − κ δ a {\displaystyle -\kappa \delta a} and − Γ δ p {\displaystyle -\Gamma \delta p} are the corresponding dissipative terms. For optical photons, thermal noise can be neglected due to the high frequencies, such that the optical input noise can be described by quantum noise only; this does not apply to microwave implementations of the optomechanical system. For the mechanical oscillator thermal noise has to be taken into account and is the reason why many experiments are placed in additional cooling environments to lower the ambient temperature. These first order differential equations can be solved easily when they are rewritten in frequency space (i.e. a Fourier transform is applied). Two main effects of the light on the mechanical oscillator can then be expressed in the following ways: δ ω m = g 2 ( Δ − ω m κ 2 / 4 + ( Δ − ω m ) 2 + Δ + ω m κ 2 / 4 + ( Δ + ω m ) 2 ) {\displaystyle \delta \omega _{m}=g^{2}\left({\frac {\Delta -\omega _{m}}{\kappa ^{2}/4+(\Delta -\omega _{m})^{2}}}+{\frac {\Delta +\omega _{m}}{\kappa ^{2}/4+(\Delta +\omega _{m})^{2}}}\right)} The equation above is termed the optical-spring effect and may lead to significant frequency shifts in the case of low-frequency oscillators, such as pendulum mirrors. In the case of higher resonance frequencies ( ω m ≳ 1 {\displaystyle \omega _{m}\gtrsim 1} MHz), it does not significantly alter the frequency. For a harmonic oscillator, the relation between a frequency shift and a change in the spring constant originates from Hooke's law. Γ eff = Γ + g 2 ( κ κ 2 / 4 + ( Δ + ω m ) 2 − κ κ 2 / 4 + ( Δ − ω m ) 2 ) {\displaystyle \Gamma ^{\text{eff}}=\Gamma +g^{2}\left({\frac {\kappa }{\kappa ^{2}/4+(\Delta +\omega _{m})^{2}}}-{\frac {\kappa }{\kappa ^{2}/4+(\Delta -\omega _{m})^{2}}}\right)} The equation above shows optical damping, i.e. the intrinsic mechanical damping Γ {\displaystyle \Gamma } becomes stronger (or weaker) due to the optomechanical interaction. From the formula, in the case of negative detuning and large coupling, mechanical damping can be greatly increased, which corresponds to the cooling of the mechanical oscillator. In the case of positive detuning the optomechanical interaction reduces effective damping. Instability can occur when the effective damping drops below zero ( Γ eff < 0 {\displaystyle \Gamma ^{\text{eff}}<0} ), which means that it turns into an overall amplification rather than a damping of the mechanical oscillator. === Important parameter regimes === The most basic regimes in which the optomechanical system can be operated are defined by the laser detuning Δ {\displaystyle \Delta } and described above. The resulting phenomena are either cooling or heating of the mechanical oscillator. However, additional parameters determine what effects can actually be observed. The good/bad cavity regime (also called the resolved/unresolved sideband regime) relates the mechanical frequency to the optical linewidth. 
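Before turning to these regimes, a quick numerical reading of the two formulas above may help. The sketch below evaluates the optical spring shift and the effective damping for red and blue detuning and reports the sideband parameter ω_m/κ used in the next paragraph; all parameter values are invented for illustration.

```python
import numpy as np

# Illustrative parameters (made up), all in rad/s.
omega_m = 2 * np.pi * 10e6
kappa   = 2 * np.pi * 1e6      # resolved-sideband regime: omega_m/kappa = 10
Gamma   = 2 * np.pi * 100.0    # intrinsic mechanical damping
g       = 2 * np.pi * 50e3     # drive-enhanced optomechanical coupling

def spring_shift(Delta):
    """Optical spring: frequency shift delta_omega_m from the formula above."""
    return g**2 * ((Delta - omega_m) / (kappa**2 / 4 + (Delta - omega_m)**2)
                   + (Delta + omega_m) / (kappa**2 / 4 + (Delta + omega_m)**2))

def effective_damping(Delta):
    """Optical damping: Gamma_eff from the formula above."""
    return Gamma + g**2 * (kappa / (kappa**2 / 4 + (Delta + omega_m)**2)
                           - kappa / (kappa**2 / 4 + (Delta - omega_m)**2))

for label, Delta in [("red  (Delta = -omega_m)", -omega_m),
                     ("blue (Delta = +omega_m)", +omega_m)]:
    print(f"{label}: dOmega_m/2pi = {spring_shift(Delta) / (2 * np.pi):9.1f} Hz, "
          f"Gamma_eff/2pi = {effective_damping(Delta) / (2 * np.pi):9.1f} Hz")

print("sideband parameter omega_m/kappa =", omega_m / kappa)
```

With these numbers the red-detuned drive raises the effective damping far above its intrinsic value (cooling), while the blue-detuned drive makes it negative (amplification towards instability), as described above.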
The good cavity regime (resolved sideband limit) is of experimental relevance since it is a necessary requirement to achieve ground state cooling of the mechanical oscillator, i.e. cooling to an average mechanical occupation number below 1 {\displaystyle 1} . The term "resolved sideband regime" refers to the possibility of distinguishing the motional sidebands from the cavity resonance, which is true if the linewidth of the cavity, κ {\displaystyle \kappa } , is smaller than the distance from the cavity resonance to the sideband ( ω m {\displaystyle \omega _{m}} ). This requirement leads to a condition for the so-called sideband parameter: ω m / κ ≫ 1 {\displaystyle \omega _{m}/\kappa \gg 1} . If ω m / κ ≪ 1 {\displaystyle \omega _{m}/\kappa \ll 1} the system resides in the bad cavity regime (unresolved sideband limit), where the motional sideband lies within the peak of the cavity resonance. In the unresolved sideband regime, many motional sidebands can be included in the broad cavity linewidth, which allows a single photon to create more than one phonon, which leads to greater amplification of the mechanical oscillator. Another distinction can be made depending on the optomechanical coupling strength. If the (enhanced) optomechanical coupling becomes larger than the cavity linewidth ( g ≥ κ {\displaystyle g\geq \kappa } ), a strong-coupling regime is achieved. There the optical and mechanical modes hybridize and normal-mode splitting occurs. This regime must be distinguished from the (experimentally much more challenging) single-photon strong-coupling regime, where the bare optomechanical coupling becomes of the order of the cavity linewidth, g 0 ≥ κ {\displaystyle g_{0}\geq \kappa } . Effects of the full non-linear interaction described by ℏ g 0 a † a ( b + b † ) {\displaystyle \hbar g_{0}a^{\dagger }a(b+b^{\dagger })} only become observable in this regime. For example, it is a precondition to create non-Gaussian states with the optomechanical system. Typical experiments currently operate in the linearized regime (small g 0 ≪ κ {\displaystyle g_{0}\ll \kappa } ) and only investigate effects of the linearized Hamiltonian. == Experimental realizations == === Setup === The strength of the optomechanical Hamiltonian is the large range of experimental implementations to which it can be applied, which results in wide parameter ranges for the optomechanical parameters. For example, the size of optomechanical systems can be on the order of micrometers or in the case for LIGO, kilometers. (although LIGO is dedicated to the detection of gravitational waves and not the investigation of optomechanics specifically). Examples of real optomechanical implementations are: Cavities with a moving mirror: the archetype of an optomechanical system. The light is reflected from the mirrors and transfers momentum onto the movable one, which in turn changes the cavity resonance frequency. Membrane-in-the-middle system: a micromechanical membrane is brought into a cavity consisting of fixed massive mirrors. The membrane takes the role of the mechanical oscillator. Depending on the positioning of the membrane inside the cavity, this system behaves like the standard optomechanical system. Levitated system: an optically levitated nanoparticle is brought into a cavity consisting of fixed massive mirrors. The levitated nanoparticle takes the role of the mechanical oscillator. Depending on the positioning of the particle inside the cavity, this system behaves like the standard optomechanical system. 
Microtoroids that support an optical whispering gallery mode can be either coupled to a mechanical mode of the toroid or evanescently to a nanobeam that is brought in proximity. Optomechanical crystal structures: patterned dielectrics or metamaterials can confine optical and/or mechanical (acoustic) modes. If the patterned material is designed to confine light, it is called a photonic crystal cavity. If it is designed to confine sound, it is called a phononic crystal cavity. Either can be used respectively as the optical or mechanical component. Hybrid crystals, which confine both sound and light to the same area, are especially useful, as they form a complete optomechanical system. Electromechanical implementations of an optomechanical system use superconducting LC circuits with a mechanically compliant capacitance like a membrane with metallic coating or a tiny capacitor plate glued onto it. By using movable capacitor plates, mechanical motion (physical displacement) of the plate or membrane changes the capacitance C {\displaystyle C} , which transforms mechanical oscillation into electrical oscillation. LC oscillators have resonances in the microwave frequency range; therefore, LC circuits are also termed microwave resonators. The physics is exactly the same as in optical cavities but the range of parameters is different because microwave radiation has a larger wavelength than optical light or infrared laser light. A purpose of studying different designs of the same system is the different parameter regimes that are accessible by different setups and their different potential to be converted into tools of commercial use. === Measurement === The optomechanical system can be measured by using a scheme like homodyne detection. Either the light of the driving laser is measured, or a two-mode scheme is followed where a strong laser is used to drive the optomechanical system into the state of interest and a second laser is used for the read-out of the state of the system. This second "probe" laser is typically weak, i.e. its optomechanical interaction can be neglected compared to the effects caused by the strong "pump" laser. The optical output field can also be measured with single photon detectors to achieve photon counting statistics. == Relation to fundamental research == One of the questions which are still subject to current debate is the exact mechanism of decoherence. In the Schrödinger's cat thought experiment, the cat would never be seen in a quantum state: there needs to be something like a collapse of the quantum wave functions, which brings it from a quantum state to a pure classical state. The question is where the boundary lies between objects with quantum properties and classical objects. Taking spatial superpositions as an example, there might be a size limit to objects which can be brought into superpositions, there might be a limit to the spatial separation of the centers of mass of a superposition or even a limit to the superposition of gravitational fields and its impact on small test masses. Those predictions can be checked with large mechanical structures that can be manipulated at the quantum level. Some easier to check predictions of quantum mechanics are the prediction of negative Wigner functions for certain quantum states, measurement precision beyond the standard quantum limit using squeezed states of light, or the asymmetry of the sidebands in the spectrum of a cavity near the quantum ground state. 
== Applications == Years before cavity optomechanics gained the status of an independent field of research, many of its techniques were already used in gravitational wave detectors where it is necessary to measure displacements of mirrors on the order of the Planck scale. Even if these detectors do not address the measurement of quantum effects, they encounter related issues (photon shot noise) and use similar tricks (squeezed coherent states) to enhance the precision. Further applications include the development of quantum memory for quantum computers, high precision sensors (e.g. acceleration sensors) and quantum transducers e.g. between the optical and the microwave domain (taking advantage of the fact that the mechanical oscillator can easily couple to both frequency regimes). == Related fields and expansions == In addition to the standard cavity optomechanics explained above, there are variations of the simplest model: Pulsed optomechanics: the continuous laser driving is replaced by pulsed laser driving. It is useful for creating entanglement and allows backaction-evading measurements. Quadratic coupling: a system with quadratic optomechanical coupling can be investigated beyond the linear coupling term g 0 = d ω cav ( x ) d x | x = 0 x zpf {\displaystyle g_{0}=\left.{\tfrac {d\omega _{\text{cav}}(x)}{dx}}\right|_{x=0}x_{\text{zpf}}} . The interaction Hamiltonian would then feature a term ℏ g quad a † a ( b + b † ) 2 {\displaystyle \hbar g_{\text{quad}}a^{\dagger }a(b+b^{\dagger })^{2}} with g sq = 1 2 d 2 ω cav ( x ) d x 2 | x = 0 x zpf 2 {\displaystyle g_{\text{sq}}={\frac {1}{2}}\left.{\tfrac {d^{2}\omega _{\text{cav}}(x)}{dx^{2}}}\right|_{x=0}x_{\text{zpf}}^{2}} . In membrane-in-the-middle setups it is possible to achieve quadratic coupling in the absence of linear coupling by positioning the membrane at an extremum of the standing wave inside the cavity. One possible application is to carry out a quantum nondemolition measurement of the phonon number. Reversed dissipation regime: in the standard optomechanical system the mechanical damping is much smaller than the optical damping. A system where this hierarchy is reversed can be engineered; i.e. the optical damping is much smaller than the mechanical damping ( κ ≪ Γ {\displaystyle \kappa \ll \Gamma } ). Within the linearized regime, symmetry implies an inversion of the above described effects; For example, cooling of the mechanical oscillator in the standard optomechanical system is replaced by cooling of the optical oscillator in a system with reversed dissipation hierarchy. This effect was also seen in optical fiber loops in the 1970s. Dissipative coupling: the coupling between optics and mechanics arises from a position-dependent optical dissipation rate κ ( x ) {\displaystyle \kappa (x)} instead of a position-dependent cavity resonance frequency ω c a v {\displaystyle \omega _{cav}} , which changes the interaction Hamiltonian and alters many effects of the standard optomechanical system. For example, this scheme allows the mechanical resonator to cool to its ground state without the requirement of the good cavity regime. Extensions to the standard optomechanical system include coupling to more and physically different systems: Optomechanical arrays: coupling several optomechanical systems to each other (e.g. using evanescent coupling of the optical modes) allows multi-mode phenomena like synchronization to be studied. So far many theoretical predictions have been made, but only few experiments exist. 
The first optomechanical array (with more than two coupled systems) consists of seven optomechanical systems. Hybrid systems: an optomechanical system can be coupled to a system of a different nature (e.g. a cloud of ultracold atoms and a two-level system), which can lead to new effects on both the optomechanical and the additional system. Cavity optomechanics is closely related to trapped ion physics and Bose–Einstein condensates. These systems share very similar Hamiltonians, but have fewer particles (about 10 for ion traps and 10^5–10^8 for Bose–Einstein condensates) interacting with the field of light. It is also related to the field of cavity quantum electrodynamics. == See also == Quantum harmonic oscillator Optical cavity Laser cooling Coherent control == References == === Further reading === Daniel Steck, Classical and Modern Optics Michel Devoret, Benjamin Huard, Robert Schoelkopf, Leticia F. Cugliandolo (2014). Quantum Machines: Measurement and Control of Engineered Quantum Systems. Lecture Notes of the Les Houches Summer School: Volume 96, July 2011. Oxford University Press. Kippenberg, Tobias J.; Vahala, Kerry J. (2007). "Cavity Opto-Mechanics". Optics Express. 15 (25): 17172. arXiv:0712.1618. Bibcode:2007OExpr..1517172K. doi:10.1364/OE.15.017172. ISSN 1094-4087. PMID 19551012. Demir, Dilek (2011). "A table-top demonstration of radiation pressure". Diploma thesis, E-Theses univie. doi:10.25365/thesis.16381
Wikipedia/Cavity_optomechanics
"A Dynamical Theory of the Electromagnetic Field" is a paper by James Clerk Maxwell on electromagnetism, published in 1865. Physicist Freeman Dyson called the publishing of the paper the "most important event of the nineteenth century in the history of the physical sciences". The paper was key in establishing the classical theory of electromagnetism. Maxwell derives an electromagnetic wave equation with a velocity for light in close agreement with measurements made by experiment, and also deduces that light is an electromagnetic wave. == Publication == Following standard procedure for the time, the paper was first read to the Royal Society on 8 December 1864, having been sent by Maxwell to the society on 27 October. It then underwent peer review, being sent to William Thomson (later Lord Kelvin) on 24 December 1864. It was then sent to George Gabriel Stokes, the Society's physical sciences secretary, on 23 March 1865. It was approved for publication in the Philosophical Transactions of the Royal Society on 15 June 1865, by the Committee of Papers (essentially the society's governing council) and sent to the printer the following day (16 June). During this period, Philosophical Transactions was only published as a bound volume once a year, and would have been prepared for the society's anniversary day on 30 November (the exact date is not recorded). However, the printer would have prepared and delivered to Maxwell offprints, for the author to distribute as he wished, soon after 16 June. == Maxwell's original equations == In part III of the paper, which is entitled "General Equations of the Electromagnetic Field", Maxwell formulated twenty equations which were to become known as Maxwell's equations, until this term became applied instead to a vectorized set of four equations selected in 1884, which had all appeared in his 1861 paper "On Physical Lines of Force". Heaviside's versions of Maxwell's equations are distinct by virtue of the fact that they are written in modern vector notation. They actually only contain one of the original eight—equation "G" (Gauss's Law). Another of Heaviside's four equations is an amalgamation of Maxwell's law of total currents (equation "A") with Ampère's circuital law (equation "C"). This amalgamation, which Maxwell himself had actually originally made at equation (112) in "On Physical Lines of Force", is the one that modifies Ampère's Circuital Law to include Maxwell's displacement current. === Heaviside's equations === Eighteen of Maxwell's twenty original equations can be vectorized into six equations, labeled (A) to (F) below, each of which represents a group of three original equations in component form. The 19th and 20th of Maxwell's component equations appear as (G) and (H) below, making a total of eight vector equations. These are listed below in Maxwell's original order, designated by the letters that Maxwell assigned to them in his 1865 paper. (A) The law of total currents (B) Definition of the magnetic potential (C) Ampère's circuital law (D) The Lorentz force and Faraday's law of induction (E) The electric elasticity equation (F) Ohm's law (G) Gauss's law (H) Equation of continuity of charge Notation Maxwell did not consider completely general materials; his initial formulation used linear, isotropic, nondispersive media with permittivity ϵ and permeability μ, although he also discussed the possibility of anisotropic materials. 
Gauss's law for magnetism (∇⋅ B = 0) is not included in the above list, but follows directly from equation (B) by taking divergences (because the divergence of the curl is zero). Substituting (A) into (C) yields the familiar differential form of the Maxwell-Ampère law. Equation (D) implicitly contains the Lorentz force law and the differential form of Faraday's law of induction. For a static magnetic field, ∂ A / ∂ t {\displaystyle \partial \mathbf {A} /\partial t} vanishes, and the electric field E becomes conservative and is given by −∇ϕ, so that (D) reduces to This is simply the Lorentz force law on a per-unit-charge basis — although Maxwell's equation (D) first appeared at equation (77) in "On Physical Lines of Force" in 1861, 34 years before Lorentz derived his force law, which is now usually presented as a supplement to the four "Maxwell's equations". The cross-product term in the Lorentz force law is the source of the so-called motional emf in electric generators (see also Moving magnet and conductor problem). Where there is no motion through the magnetic field — e.g., in transformers — we can drop the cross-product term, and the force per unit charge (called f) reduces to the electric field E, so that Maxwell's equation (D) reduces to Taking curls, noting that the curl of a gradient is zero, we obtain which is the differential form of Faraday's law. Thus the three terms on the right side of equation (D) may be described, from left to right, as the motional term, the transformer term, and the conservative term. In deriving the electromagnetic wave equation, Maxwell considers the situation only from the rest frame of the medium, and accordingly drops the cross-product term. But he still works from equation (D), in contrast to modern textbooks which tend to work from Faraday's law (see below). The constitutive equations (E) and (F) are now usually written in the rest frame of the medium as D = ϵE and J = σE. Maxwell's equation (G), as printed in the 1865 paper, requires his e to mean minus the charge density (if his f, g, h are the components of D), whereas his equation (H) requires his e to mean plus the charge density (if his p, q, r are the components of J). John W. Arthur: 7, 8  concludes that the sign of e in (G) is wrong, and observes: 8  that this sign is corrected in Maxwell's subsequent Treatise. Arthur speculates that the sign confusion may have arisen from the analogy between momentum and the magnetic vector potential (Maxwell's "electromagnetic momentum"), in which positive mass corresponds to negative charge: 4 . Arthur: 3  also lists some corresponding equations from Maxwell's earlier paper of 1861-2, and notes that the signs do not always match the later ones. The earlier signs (1861-2) are correct if F, G, H are the components of −A while f, g, h are the components of −D. == Maxwell – electromagnetic light wave == In part VI of "A Dynamical Theory of the Electromagnetic Field", subtitled "Electromagnetic theory of light", Maxwell uses the correction to Ampère's Circuital Law made in part III of his 1862 paper, "On Physical Lines of Force", which is defined as displacement current, to derive the electromagnetic wave equation. He obtained a wave equation with a speed in close agreement to experimental determinations of the speed of light. He commented, The agreement of the results seems to show that light and magnetism are affections of the same substance, and that light is an electromagnetic disturbance propagated through the field according to electromagnetic laws. 
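Maxwell's argument rested on the numerical agreement between the combination of electromagnetic constants (in modern terms, 1/√(μ₀ε₀)) and the measured speed of light. With present-day SI constants the comparison takes one line; the sketch below uses current CODATA values, not the numbers available to Maxwell, and is included only as an editorial illustration of the agreement he pointed out.

```python
import math

mu_0  = 1.25663706212e-6   # vacuum permeability, N/A^2 (CODATA 2018)
eps_0 = 8.8541878128e-12   # vacuum permittivity, F/m   (CODATA 2018)
c_def = 299_792_458.0      # defined speed of light, m/s

c_from_fields = 1.0 / math.sqrt(mu_0 * eps_0)
print(f"1/sqrt(mu0*eps0)    = {c_from_fields:.6e} m/s")
print(f"defined c           = {c_def:.6e} m/s")
print(f"relative difference = {abs(c_from_fields - c_def) / c_def:.2e}")
```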
Maxwell's derivation of the electromagnetic wave equation has been replaced in modern physics by a much less cumbersome method which combines the corrected version of Ampère's Circuital Law with Faraday's law of electromagnetic induction. === Modern equation methods === To obtain the electromagnetic wave equation in a vacuum using the modern method, we begin with the modern 'Heaviside' form of Maxwell's equations. In SI units and in a vacuum, these equations are ∇ ⋅ E = 0, ∇ ⋅ B = 0, ∇ × E = −∂B/∂t and ∇ × B = μ0ε0 ∂E/∂t. If we take the curl of the curl equations we obtain ∇ × (∇ × E) = −μ0ε0 ∂²E/∂t² and ∇ × (∇ × B) = −μ0ε0 ∂²B/∂t². If we note the vector identity ∇ × (∇ × V) = ∇(∇ ⋅ V) − ∇²V, where V {\displaystyle \mathbf {V} } is any vector function of space, we recover the wave equations ∂²E/∂t² − c²∇²E = 0 and ∂²B/∂t² − c²∇²B = 0, where c = 1/√(μ0ε0) is the speed of light in free space. == Legacy and impact == Of this paper and Maxwell's related works, fellow physicist Richard Feynman said: "From the long view of this history of mankind – seen from, say, 10,000 years from now – there can be little doubt that the most significant event of the 19th century will be judged as Maxwell's discovery of the laws of electromagnetism." Albert Einstein used Maxwell's equations as the starting point for his special theory of relativity, presented in The Electrodynamics of Moving Bodies, one of Einstein's 1905 Annus Mirabilis papers. In it, it is stated that "the same laws of electrodynamics and optics will be valid for all frames of reference for which the equations of mechanics hold good" and that "Any ray of light moves in the "stationary" system of co-ordinates with the determined velocity c, whether the ray be emitted by a stationary or by a moving body." Maxwell's equations can also be derived by extending general relativity into five physical dimensions. == See also == A Treatise on Electricity and Magnetism Gauge theory == References == == Further reading == Maxwell, James C.; Torrance, Thomas F. (March 1996). A Dynamical Theory of the Electromagnetic Field. Eugene, OR: Wipf and Stock. ISBN 1-57910-015-5. Niven, W. D. (1952). The Scientific Papers of James Clerk Maxwell. Vol. 1. New York: Dover. Johnson, Kevin (May 2002). "The electromagnetic field". James Clerk Maxwell – The Great Unknown. Archived from the original on September 15, 2008. Retrieved September 7, 2009. Darrigol, Olivier (2000). Electromagnetism from Ampère to Einstein. Oxford University Press. ISBN 978-0198505945. Katz, Randy H. (February 22, 1997). "'Look Ma, No Wires': Marconi and the Invention of Radio". History of Communications Infrastructures. Retrieved Sep 7, 2009.
Wikipedia/A_Dynamical_Theory_of_the_Electromagnetic_Field
In theoretical physics, an invariant is an observable of a physical system which remains unchanged under some transformation. Invariance, as a broader term, also applies to the unchanged form of physical laws under a transformation, and is closer in scope to the mathematical definition. Invariants of a system are deeply tied to the symmetries imposed by its environment. Invariance is an important concept in modern theoretical physics, and many theories are expressed in terms of their symmetries and invariants. == Examples == In classical and quantum mechanics, invariance of space under translation results in momentum being an invariant and the conservation of momentum, whereas invariance of the origin of time, i.e. translation in time, results in energy being an invariant and the conservation of energy. In general, by Noether's theorem, any invariance of a physical system under a continuous symmetry leads to a fundamental conservation law. In crystals, the electron density is periodic and invariant with respect to discrete translations by unit cell vectors. In very few materials, this symmetry can be broken due to enhanced electron correlations. Other examples of physical invariants are the speed of light, and the charge and mass of a particle observed from two reference frames moving with respect to one another (invariance under a spacetime Lorentz transformation), and the invariance of time and acceleration under a Galilean transformation between two such frames moving at low velocities. Quantities can be invariant under some common transformations but not under others. For example, the velocity of a particle is invariant when switching coordinate representations from rectangular to curvilinear coordinates, but is not invariant when transforming between frames of reference that are moving with respect to each other. Other quantities, like the speed of light, are always invariant. Physical laws are said to be invariant under transformations when their predictions remain unchanged. This generally means that the form of the law (e.g. the type of differential equations used to describe the law) is unchanged under transformations, so that no additional or different solutions are obtained. For example, the rule describing Newton's force of gravity between two chunks of matter is the same whether they are in this galaxy or another (translational invariance in space). It is also the same today as it was a million years ago (translational invariance in time). The law does not work differently depending on whether one chunk is east or north of the other one (rotational invariance). Nor does the law have to be changed depending on whether you measure the force between the two chunks in a railroad station, or do the same experiment with the two chunks on a uniformly moving train (principle of relativity). Covariance and contravariance generalize the mathematical properties of invariance in tensor mathematics, and are frequently used in electromagnetism, special relativity, and general relativity. == Informal usage == In the field of physics, the adjective covariant (as in covariance and contravariance of vectors) is often used informally as a synonym for "invariant". For example, the Schrödinger equation does not keep its written form under the coordinate transformations of special relativity. Thus, a physicist might say that the Schrödinger equation is not covariant. In contrast, the Klein–Gordon equation and the Dirac equation do keep their written form under these coordinate transformations. 
Thus, a physicist might say that these equations are covariant. Despite this usage of "covariant", it is more accurate to say that the Klein–Gordon and Dirac equations are invariant, and that the Schrödinger equation is not invariant. Additionally, to remove ambiguity, the transformation by which the invariance is evaluated should be indicated. == See also == == References ==
Wikipedia/Invariant_(physics)
In quantum field theory, a nonlinear σ model describes a field Σ that takes on values in a nonlinear manifold called the target manifold T. The non-linear σ-model was introduced by Gell-Mann & Lévy (1960, §6), who named it after a field corresponding to a spinless meson called σ in their model. This article deals primarily with the quantization of the non-linear sigma model; please refer to the base article on the sigma model for general definitions and classical (non-quantum) formulations and results. == Description == The target manifold T is equipped with a Riemannian metric g. Σ is a differentiable map from Minkowski space M (or some other space) to T. The Lagrangian density in contemporary chiral form is given by L = 1 2 g ( ∂ μ Σ , ∂ μ Σ ) − V ( Σ ) {\displaystyle {\mathcal {L}}={1 \over 2}g(\partial ^{\mu }\Sigma ,\partial _{\mu }\Sigma )-V(\Sigma )} where we have used a + − − − metric signature and the partial derivative ∂Σ is given by a section of the jet bundle of T×M and V is the potential. In the coordinate notation, with the coordinates Σa, a = 1, ..., n where n is the dimension of T, L = 1 2 g a b ( Σ ) ( ∂ μ Σ a ) ( ∂ μ Σ b ) − V ( Σ ) . {\displaystyle {\mathcal {L}}={1 \over 2}g_{ab}(\Sigma )(\partial ^{\mu }\Sigma ^{a})(\partial _{\mu }\Sigma ^{b})-V(\Sigma ).} In more than two dimensions, nonlinear σ models contain a dimensionful coupling constant and are thus not perturbatively renormalizable. Nevertheless, they exhibit a non-trivial ultraviolet fixed point of the renormalization group both in the lattice formulation and in the double expansion originally proposed by Kenneth G. Wilson. In both approaches, the non-trivial renormalization-group fixed point found for the O(n)-symmetric model is seen to simply describe, in dimensions greater than two, the critical point separating the ordered from the disordered phase. In addition, the improved lattice or quantum field theory predictions can then be compared to laboratory experiments on critical phenomena, since the O(n) model describes physical Heisenberg ferromagnets and related systems. The above results point therefore to a failure of naive perturbation theory in describing correctly the physical behavior of the O(n)-symmetric model above two dimensions, and to the need for more sophisticated non-perturbative methods such as the lattice formulation. This means they can only arise as effective field theories. New physics is needed at around the distance scale where the two point connected correlation function is of the same order as the curvature of the target manifold. This is called the UV completion of the theory. There is a special class of nonlinear σ models with the internal symmetry group G *. If G is a Lie group and H is a Lie subgroup, then the quotient space G/H is a manifold (subject to certain technical restrictions like H being a closed subset) and is also a homogeneous space of G or in other words, a nonlinear realization of G. In many cases, G/H can be equipped with a Riemannian metric which is G-invariant. This is always the case, for example, if G is compact. A nonlinear σ model with G/H as the target manifold with a G-invariant Riemannian metric and a zero potential is called a quotient space (or coset space) nonlinear σ model. When computing path integrals, the functional measure needs to be "weighted" by the square root of the determinant of g, det g D Σ . 
{\displaystyle {\sqrt {\det g}}{\mathcal {D}}\Sigma .} == Renormalization == This model proved to be relevant in string theory where the two-dimensional manifold is named worldsheet. Appreciation of its generalized renormalizability was provided by Daniel Friedan. He showed that the theory admits a renormalization group equation, at the leading order of perturbation theory, in the form λ ∂ g a b ∂ λ = β a b ( T − 1 g ) = R a b + O ( T 2 ) , {\displaystyle \lambda {\frac {\partial g_{ab}}{\partial \lambda }}=\beta _{ab}(T^{-1}g)=R_{ab}+O(T^{2})~,} Rab being the Ricci tensor of the target manifold. This represents a Ricci flow, obeying Einstein field equations for the target manifold as a fixed point. The existence of such a fixed point is relevant, as it grants, at this order of perturbation theory, that conformal invariance is not lost due to quantum corrections, so that the quantum field theory of this model is sensible (renormalizable). Further adding nonlinear interactions representing flavor-chiral anomalies results in the Wess–Zumino–Witten model, which augments the geometry of the flow to include torsion, preserving renormalizability and leading to an infrared fixed point as well, on account of teleparallelism ("geometrostasis"). == O(3) non-linear sigma model == A celebrated example, of particular interest due to its topological properties, is the O(3) nonlinear σ-model in 1 + 1 dimensions, with the Lagrangian density L = 1 2 ∂ μ n ^ ⋅ ∂ μ n ^ {\displaystyle {\mathcal {L}}={\tfrac {1}{2}}\ \partial ^{\mu }{\hat {n}}\cdot \partial _{\mu }{\hat {n}}} where n̂=(n1, n2, n3) with the constraint n̂⋅n̂=1 and μ=1,2. This model allows for topological finite action solutions, as at infinite space-time the Lagrangian density must vanish, meaning n̂ = constant at infinity. Therefore, in the class of finite-action solutions, one may identify the points at infinity as a single point, i.e. that space-time can be identified with a Riemann sphere. Since the n̂-field lives on a sphere as well, the mapping S2→ S2 is in evidence, the solutions of which are classified by the second homotopy group of a 2-sphere: These solutions are called the O(3) Instantons. This model can also be considered in 1+2 dimensions, where the topology now comes only from the spatial slices. These are modelled as R^2 with a point at infinity, and hence have the same topology as the O(3) instantons in 1+1 dimensions. They are called sigma model lumps. == See also == Sigma model Chiral model Little Higgs Skyrmion, a soliton in non-linear sigma models Polyakov action WZW model Fubini–Study metric, a metric often used with non-linear sigma models Ricci flow Scale invariance == References == == External links == Ketov, Sergei (2009). "Nonlinear Sigma model". Scholarpedia. 4 (1): 8508. Bibcode:2009SchpJ...4.8508K. doi:10.4249/scholarpedia.8508. Kulshreshtha, U.; Kulshreshtha, D. S. (2002). "Front-Form Hamiltonian, Path Integral, and BRST Formulations of the Nonlinear Sigma Model". International Journal of Theoretical Physics. 41 (10): 1941–1956. doi:10.1023/A:1021009008129. S2CID 115710780.
Wikipedia/Nonlinear_sigma_model
In classical mechanics, the Laplace–Runge–Lenz vector (LRL vector) is a vector used chiefly to describe the shape and orientation of the orbit of one astronomical body around another, such as a binary star or a planet revolving around a star. For two bodies interacting by Newtonian gravity, the LRL vector is a constant of motion, meaning that it is the same no matter where it is calculated on the orbit; equivalently, the LRL vector is said to be conserved. More generally, the LRL vector is conserved in all problems in which two bodies interact by a central force that varies as the inverse square of the distance between them; such problems are called Kepler problems. Thus the hydrogen atom is a Kepler problem, since it comprises two charged particles interacting by Coulomb's law of electrostatics, another inverse-square central force. The LRL vector was essential in the first quantum mechanical derivation of the spectrum of the hydrogen atom, before the development of the Schrödinger equation. However, this approach is rarely used today. In classical and quantum mechanics, conserved quantities generally correspond to a symmetry of the system. The conservation of the LRL vector corresponds to an unusual symmetry; the Kepler problem is mathematically equivalent to a particle moving freely on the surface of a four-dimensional (hyper-)sphere, so that the whole problem is symmetric under certain rotations of the four-dimensional space. This higher symmetry results from two properties of the Kepler problem: the velocity vector always moves in a perfect circle and, for a given total energy, all such velocity circles intersect each other in the same two points. The Laplace–Runge–Lenz vector is named after Pierre-Simon de Laplace, Carl Runge and Wilhelm Lenz. It is also known as the Laplace vector, the Runge–Lenz vector and the Lenz vector. Ironically, none of those scientists discovered it. The LRL vector has been re-discovered and re-formulated several times; for example, it is equivalent to the dimensionless eccentricity vector of celestial mechanics. Various generalizations of the LRL vector have been defined, which incorporate the effects of special relativity, electromagnetic fields and even different types of central forces. == Context == A single particle moving under any conservative central force has at least four constants of motion: the total energy E and the three Cartesian components of the angular momentum vector L with respect to the center of force. The particle's orbit is confined to the plane defined by the particle's initial momentum p (or, equivalently, its velocity v) and the vector r between the particle and the center of force (see Figure 1). This plane of motion is perpendicular to the constant angular momentum vector L = r × p; this may be expressed mathematically by the vector dot product equation r ⋅ L = 0. Given its mathematical definition below, the Laplace–Runge–Lenz vector (LRL vector) A is always perpendicular to the constant angular momentum vector L for all central forces (A ⋅ L = 0). Therefore, A always lies in the plane of motion. As shown below, A points from the center of force to the periapsis of the motion, the point of closest approach, and its length is proportional to the eccentricity of the orbit. The LRL vector A is constant in length and direction, but only for an inverse-square central force. For other central forces, the vector A is not constant, but changes in both length and direction. 
If the central force is approximately an inverse-square law, the vector A is approximately constant in length, but slowly rotates its direction. A generalized conserved LRL vector A {\displaystyle {\mathcal {A}}} can be defined for all central forces, but this generalized vector is a complicated function of position, and usually not expressible in closed form. The LRL vector differs from other conserved quantities in the following property. Whereas for typical conserved quantities, there is a corresponding cyclic coordinate in the three-dimensional Lagrangian of the system, there does not exist such a coordinate for the LRL vector. Thus, the conservation of the LRL vector must be derived directly, e.g., by the method of Poisson brackets, as described below. Conserved quantities of this kind are called "dynamic", in contrast to the usual "geometric" conservation laws, e.g., that of the angular momentum. == History of rediscovery == The LRL vector A is a constant of motion of the Kepler problem, and is useful in describing astronomical orbits, such as the motion of planets and binary stars. Nevertheless, it has never been well known among physicists, possibly because it is less intuitive than momentum and angular momentum. Consequently, it has been rediscovered independently several times over the last three centuries. Jakob Hermann was the first to show that A is conserved for a special case of the inverse-square central force, and worked out its connection to the eccentricity of the orbital ellipse. Hermann's work was generalized to its modern form by Johann Bernoulli in 1710. At the end of the century, Pierre-Simon de Laplace rediscovered the conservation of A, deriving it analytically, rather than geometrically. In the middle of the nineteenth century, William Rowan Hamilton derived the equivalent eccentricity vector defined below, using it to show that the momentum vector p moves on a circle for motion under an inverse-square central force (Figure 3). At the beginning of the twentieth century, Josiah Willard Gibbs derived the same vector by vector analysis. Gibbs' derivation was used as an example by Carl Runge in a popular German textbook on vectors, which was referenced by Wilhelm Lenz in his paper on the (old) quantum mechanical treatment of the hydrogen atom. In 1926, Wolfgang Pauli used the LRL vector to derive the energy levels of the hydrogen atom using the matrix mechanics formulation of quantum mechanics, after which it became known mainly as the Runge–Lenz vector. == Definition == An inverse-square central force acting on a single particle is described by the equation F ( r ) = − k r 2 r ^ ; {\displaystyle \mathbf {F} (r)=-{\frac {k}{r^{2}}}\mathbf {\hat {r}} ;} The corresponding potential energy is given by V ( r ) = − k / r {\displaystyle V(r)=-k/r} . The constant parameter k describes the strength of the central force; it is equal to GMm for gravitational and −Qq/4πε0 for electrostatic forces. The force is attractive if k > 0 and repulsive if k < 0. The LRL vector A is defined mathematically by the formula A = p × L − m k r ^ {\displaystyle \mathbf {A} =\mathbf {p} \times \mathbf {L} -mk\mathbf {\hat {r}} ,} where m is the mass of the point particle moving under the central force, p is its momentum vector, L = r × p is its angular momentum vector, r is the position vector of the particle (Figure 1), r ^ {\displaystyle \mathbf {\hat {r}} } is the corresponding unit vector, i.e., r ^ = r r {\displaystyle \mathbf {\hat {r}} ={\frac {\mathbf {r} }{r}}} , and r is the magnitude of r, the distance of the mass from the center of force.
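The definition just given can be evaluated directly. The following minimal sketch computes A = p × L − m k r̂ for an arbitrary illustrative state (the mass, force constant and phase-space point below are chosen purely for illustration) and confirms that A is perpendicular to L, as stated above.

```python
import numpy as np

def lrl_vector(r, p, m, k):
    """Laplace-Runge-Lenz vector A = p x L - m k r_hat for a single particle."""
    L = np.cross(r, p)                           # angular momentum L = r x p
    return np.cross(p, L) - m * k * r / np.linalg.norm(r)

m, k = 1.0, 1.0                                  # arbitrary units
r = np.array([1.0, 0.2, 0.0])                    # illustrative position
p = np.array([-0.1, 0.9, 0.0])                   # illustrative momentum

A = lrl_vector(r, p, m, k)
L = np.cross(r, p)
print("A   =", A)
print("A.L =", np.dot(A, L))                     # vanishes: A lies in the orbital plane
```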
The SI units of the LRL vector are joule-kilogram-meter (J⋅kg⋅m). This follows because the units of p and L are kg⋅m/s and J⋅s, respectively. This agrees with the units of m (kg) and of k (N⋅m2). This definition of the LRL vector A pertains to a single point particle of mass m moving under the action of a fixed force. However, the same definition may be extended to two-body problems such as the Kepler problem, by taking m as the reduced mass of the two bodies and r as the vector between the two bodies. Since the assumed force is conservative, the total energy E is a constant of motion, E = p 2 2 m − k r = 1 2 m v 2 − k r . {\displaystyle E={\frac {p^{2}}{2m}}-{\frac {k}{r}}={\frac {1}{2}}mv^{2}-{\frac {k}{r}}.} The assumed force is also a central force. Hence, the angular momentum vector L is also conserved and defines the plane in which the particle travels. The LRL vector A is perpendicular to the angular momentum vector L because both p × L and r are perpendicular to L. It follows that A lies in the plane of motion. Alternative formulations for the same constant of motion may be defined, typically by scaling the vector with constants, such as the mass m, the force parameter k or the angular momentum L. The most common variant is to divide A by mk, which yields the eccentricity vector, a dimensionless vector along the semi-major axis whose modulus equals the eccentricity of the conic: e = A m k = 1 m k ( p × L ) − r ^ . {\displaystyle \mathbf {e} ={\frac {\mathbf {A} }{mk}}={\frac {1}{mk}}(\mathbf {p} \times \mathbf {L} )-\mathbf {\hat {r}} .} An equivalent formulation multiplies this eccentricity vector by the major semiaxis a, giving the resulting vector the units of length. Yet another formulation divides A by L 2 {\displaystyle L^{2}} , yielding an equivalent conserved quantity with units of inverse length, a quantity that appears in the solution of the Kepler problem u ≡ 1 r = k m L 2 + A L 2 cos ⁡ θ {\displaystyle u\equiv {\frac {1}{r}}={\frac {km}{L^{2}}}+{\frac {A}{L^{2}}}\cos \theta } where θ {\displaystyle \theta } is the angle between A and the position vector r. Further alternative formulations are given below. == Derivation of the Kepler orbits == The shape and orientation of the orbits can be determined from the LRL vector as follows. Taking the dot product of A with the position vector r gives the equation A ⋅ r = A ⋅ r ⋅ cos ⁡ θ = r ⋅ ( p × L ) − m k r , {\displaystyle \mathbf {A} \cdot \mathbf {r} =A\cdot r\cdot \cos \theta =\mathbf {r} \cdot \left(\mathbf {p} \times \mathbf {L} \right)-mkr,} where θ is the angle between r and A (Figure 2). Permuting the scalar triple product yields r ⋅ ( p × L ) = ( r × p ) ⋅ L = L ⋅ L = L 2 {\displaystyle \mathbf {r} \cdot \left(\mathbf {p} \times \mathbf {L} \right)=\left(\mathbf {r} \times \mathbf {p} \right)\cdot \mathbf {L} =\mathbf {L} \cdot \mathbf {L} =L^{2}} Rearranging yields the solution for the Kepler equation This corresponds to the formula for a conic section of eccentricity e 1 r = C ⋅ ( 1 + e ⋅ cos ⁡ θ ) {\displaystyle {\frac {1}{r}}=C\cdot \left(1+e\cdot \cos \theta \right)} where the eccentricity e = A | m k | ≥ 0 {\displaystyle e={\frac {A}{\left|mk\right|}}\geq 0} and C is a constant. Taking the dot product of A with itself yields an equation involving the total energy E, A 2 = m 2 k 2 + 2 m E L 2 , {\displaystyle A^{2}=m^{2}k^{2}+2mEL^{2},} which may be rewritten in terms of the eccentricity, e 2 = 1 + 2 L 2 m k 2 E . 
{\displaystyle e^{2}=1+{\frac {2L^{2}}{mk^{2}}}E.} Thus, if the energy E is negative (bound orbits), the eccentricity is less than one and the orbit is an ellipse. Conversely, if the energy is positive (unbound orbits, also called "scattered orbits"), the eccentricity is greater than one and the orbit is a hyperbola. Finally, if the energy is exactly zero, the eccentricity is one and the orbit is a parabola. In all cases, the direction of A lies along the symmetry axis of the conic section and points from the center of force toward the periapsis, the point of closest approach. == Circular momentum hodographs == The conservation of the LRL vector A and angular momentum vector L is useful in showing that the momentum vector p moves on a circle under an inverse-square central force. Taking the dot product of m k r ^ = p × L − A {\displaystyle mk{\hat {\mathbf {r} }}=\mathbf {p} \times \mathbf {L} -\mathbf {A} } with itself yields ( m k ) 2 = A 2 + p 2 L 2 + 2 L ⋅ ( p × A ) . {\displaystyle (mk)^{2}=A^{2}+p^{2}L^{2}+2\mathbf {L} \cdot (\mathbf {p} \times \mathbf {A} ).} Further choosing L along the z-axis, and the major semiaxis as the x-axis, yields the locus equation for p, In other words, the momentum vector p is confined to a circle of radius mk/L = L/ℓ centered on (0, A/L). For bounded orbits, the eccentricity e corresponds to the cosine of the angle η shown in Figure 3. For unbounded orbits, we have A > m k {\displaystyle A>mk} and so the circle does not intersect the p x {\displaystyle p_{x}} -axis. In the degenerate limit of circular orbits, and thus vanishing A, the circle centers at the origin (0,0). For brevity, it is also useful to introduce the variable p 0 = 2 m | E | {\textstyle p_{0}={\sqrt {2m|E|}}} . This circular hodograph is useful in illustrating the symmetry of the Kepler problem. == Constants of motion and superintegrability == The seven scalar quantities E, A and L (being vectors, the latter two contribute three conserved quantities each) are related by two equations, A ⋅ L = 0 and A2 = m2k2 + 2 mEL2, giving five independent constants of motion. (Since the magnitude of A, hence the eccentricity e of the orbit, can be determined from the total angular momentum L and the energy E, only the direction of A is conserved independently; moreover, since A must be perpendicular to L, it contributes only one additional conserved quantity.) This is consistent with the six initial conditions (the particle's initial position and velocity vectors, each with three components) that specify the orbit of the particle, since the initial time is not determined by a constant of motion. The resulting 1-dimensional orbit in 6-dimensional phase space is thus completely specified. A mechanical system with d degrees of freedom can have at most 2d − 1 constants of motion, since there are 2d initial conditions and the initial time cannot be determined by a constant of motion. A system with more than d constants of motion is called superintegrable and a system with 2d − 1 constants is called maximally superintegrable. Since the solution of the Hamilton–Jacobi equation in one coordinate system can yield only d constants of motion, superintegrable systems must be separable in more than one coordinate system. The Kepler problem is maximally superintegrable, since it has three degrees of freedom (d = 3) and five independent constant of motion; its Hamilton–Jacobi equation is separable in both spherical coordinates and parabolic coordinates, as described below. 
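The two algebraic relations used in this counting argument, A ⋅ L = 0 and A² = m²k² + 2mEL², hold identically for any state and are easy to verify numerically. The sketch below uses arbitrary illustrative values; it also prints |A|/(mk), the orbital eccentricity.

```python
import numpy as np

m, k = 1.0, 1.0                                  # arbitrary units
r = np.array([1.0, 0.3, -0.2])                   # illustrative position
p = np.array([0.2, 0.8, 0.1])                    # illustrative momentum

L = np.cross(r, p)
A = np.cross(p, L) - m * k * r / np.linalg.norm(r)
E = np.dot(p, p) / (2 * m) - k / np.linalg.norm(r)

print(np.dot(A, L))                                           # ~0:  A is perpendicular to L
print(np.dot(A, A), m**2 * k**2 + 2 * m * E * np.dot(L, L))   # A^2 = m^2 k^2 + 2 m E L^2
print(np.linalg.norm(A) / (m * k))                            # the eccentricity e = A/(m k)
```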
Maximally superintegrable systems follow closed, one-dimensional orbits in phase space, since the orbit is the intersection of the phase-space isosurfaces of their constants of motion. Consequently, the orbits are perpendicular to all gradients of all these independent isosurfaces, five in this specific problem, and hence are determined by the generalized cross products of all of these gradients. As a result, all superintegrable systems are automatically describable by Nambu mechanics, alternatively, and equivalently, to Hamiltonian mechanics. Maximally superintegrable systems can be quantized using commutation relations, as illustrated below. Nevertheless, equivalently, they are also quantized in the Nambu framework, such as this classical Kepler problem into the quantum hydrogen atom. == Evolution under perturbed potentials == The Laplace–Runge–Lenz vector A is conserved only for a perfect inverse-square central force. In most practical problems such as planetary motion, however, the interaction potential energy between two bodies is not exactly an inverse square law, but may include an additional central force, a so-called perturbation described by a potential energy h(r). In such cases, the LRL vector rotates slowly in the plane of the orbit, corresponding to a slow apsidal precession of the orbit. By assumption, the perturbing potential h(r) is a conservative central force, which implies that the total energy E and angular momentum vector L are conserved. Thus, the motion still lies in a plane perpendicular to L and the magnitude A is conserved, from the equation A2 = m2k2 + 2mEL2. The perturbation potential h(r) may be any sort of function, but should be significantly weaker than the main inverse-square force between the two bodies. The rate at which the LRL vector rotates provides information about the perturbing potential h(r). Using canonical perturbation theory and action-angle coordinates, it is straightforward to show that A rotates at a rate of, ∂ ∂ L ⟨ h ( r ) ⟩ = ∂ ∂ L { 1 T ∫ 0 T h ( r ) d t } = ∂ ∂ L { m L 2 ∫ 0 2 π r 2 h ( r ) d θ } , {\displaystyle {\begin{aligned}{\frac {\partial }{\partial L}}\langle h(r)\rangle &={\frac {\partial }{\partial L}}\left\{{\frac {1}{T}}\int _{0}^{T}h(r)\,dt\right\}\\[1em]&={\frac {\partial }{\partial L}}\left\{{\frac {m}{L^{2}}}\int _{0}^{2\pi }r^{2}h(r)\,d\theta \right\},\end{aligned}}} where T is the orbital period, and the identity L dt = m r2 dθ was used to convert the time integral into an angular integral (Figure 5). The expression in angular brackets, ⟨h(r)⟩, represents the perturbing potential, but averaged over one full period; that is, averaged over one full passage of the body around its orbit. Mathematically, this time average corresponds to the following quantity in curly braces. This averaging helps to suppress fluctuations in the rate of rotation. This approach was used to help verify Einstein's theory of general relativity, which adds a small effective inverse-cubic perturbation to the normal Newtonian gravitational potential, h ( r ) = k L 2 m 2 c 2 ( 1 r 3 ) . 
{\displaystyle h(r)={\frac {kL^{2}}{m^{2}c^{2}}}\left({\frac {1}{r^{3}}}\right).} Inserting this function into the integral and using the equation 1 r = m k L 2 ( 1 + A m k cos ⁡ θ ) {\displaystyle {\frac {1}{r}}={\frac {mk}{L^{2}}}\left(1+{\frac {A}{mk}}\cos \theta \right)} to express r in terms of θ, the precession rate of the periapsis caused by this non-Newtonian perturbation is calculated to be 6 π k 2 T L 2 c 2 , {\displaystyle {\frac {6\pi k^{2}}{TL^{2}c^{2}}},} which closely matches the observed anomalous precession of Mercury and binary pulsars. This agreement with experiment is strong evidence for general relativity. == Poisson brackets == === Unscaled functions === The algebraic structure of the problem is, as explained in later sections, SO(4)/Z2 ~ SO(3) × SO(3). The three components Li of the angular momentum vector L have the Poisson brackets { L i , L j } = ∑ s = 1 3 ε i j s L s , {\displaystyle \{L_{i},L_{j}\}=\sum _{s=1}^{3}\varepsilon _{ijs}L_{s},} where i=1,2,3 and εijs is the fully antisymmetric tensor, i.e., the Levi-Civita symbol; the summation index s is used here to avoid confusion with the force parameter k defined above. Then since the LRL vector A transforms like a vector, we have the following Poisson bracket relations between A and L: { A i , L j } = ∑ s = 1 3 ε i j s A s . {\displaystyle \{A_{i},L_{j}\}=\sum _{s=1}^{3}\varepsilon _{ijs}A_{s}.} Finally, the Poisson bracket relations between the different components of A are as follows: { A i , A j } = − 2 m H ∑ s = 1 3 ε i j s L s , {\displaystyle \{A_{i},A_{j}\}=-2mH\sum _{s=1}^{3}\varepsilon _{ijs}L_{s},} where H {\displaystyle H} is the Hamiltonian. Note that the span of the components of A and the components of L is not closed under Poisson brackets, because of the factor of H {\displaystyle H} on the right-hand side of this last relation. Finally, since both L and A are constants of motion, we have { A i , H } = { L i , H } = 0. {\displaystyle \{A_{i},H\}=\{L_{i},H\}=0.} The Poisson brackets will be extended to quantum mechanical commutation relations in the next section and to Lie brackets in a following section. === Scaled functions === As noted below, a scaled Laplace–Runge–Lenz vector D may be defined with the same units as angular momentum by dividing A by p 0 = 2 m | H | {\textstyle p_{0}={\sqrt {2m|H|}}} . Since D still transforms like a vector, the Poisson brackets of D with the angular momentum vector L can then be written in a similar form { D i , L j } = ∑ s = 1 3 ε i j s D s . {\displaystyle \{D_{i},L_{j}\}=\sum _{s=1}^{3}\varepsilon _{ijs}D_{s}.} The Poisson brackets of D with itself depend on the sign of H, i.e., on whether the energy is negative (producing closed, elliptical orbits under an inverse-square central force) or positive (producing open, hyperbolic orbits under an inverse-square central force). For negative energies—i.e., for bound systems—the Poisson brackets are { D i , D j } = ∑ s = 1 3 ε i j s L s . {\displaystyle \{D_{i},D_{j}\}=\sum _{s=1}^{3}\varepsilon _{ijs}L_{s}.} We may now appreciate the motivation for the chosen scaling of D: With this scaling, the Hamiltonian no longer appears on the right-hand side of the preceding relation. Thus, the span of the three components of L and the three components of D forms a six-dimensional Lie algebra under the Poisson bracket. This Lie algebra is isomorphic to so(4), the Lie algebra of the 4-dimensional rotation group SO(4). 
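The bracket relations quoted above can be spot-checked symbolically. The sketch below (using SymPy, with a test point and parameter values chosen purely for illustration) evaluates the residuals of {L_i, L_j}, {A_i, L_j} and {A_i, A_j} at that point; each residual vanishes.

```python
import sympy as sp

x, y, z, px, py, pz = sp.symbols('x y z p_x p_y p_z', real=True)
m, k = sp.symbols('m k', positive=True)
q, p = (x, y, z), (px, py, pz)
r = sp.sqrt(x**2 + y**2 + z**2)

L = (y*pz - z*py, z*px - x*pz, x*py - y*px)          # L = r x p
H = (px**2 + py**2 + pz**2) / (2*m) - k/r            # Kepler Hamiltonian
A = (py*L[2] - pz*L[1] - m*k*x/r,                    # A = p x L - m k r_hat
     pz*L[0] - px*L[2] - m*k*y/r,
     px*L[1] - py*L[0] - m*k*z/r)

def pb(f, g):
    """Canonical Poisson bracket {f, g}."""
    return sum(sp.diff(f, q[i])*sp.diff(g, p[i]) - sp.diff(f, p[i])*sp.diff(g, q[i])
               for i in range(3))

# Residuals of the three quoted relations; each holds identically,
# so each evaluates to zero at the (arbitrary) test point below.
residuals = [pb(L[0], L[1]) - L[2],                  # {L_x, L_y} = L_z
             pb(A[0], L[1]) - A[2],                  # {A_x, L_y} = A_z
             pb(A[0], A[1]) + 2*m*H*L[2]]            # {A_x, A_y} = -2 m H L_z
point = {x: 1, y: 2, z: -1, px: sp.Rational(1, 3), py: -2, pz: sp.Rational(5, 7), m: 2, k: 3}
print([abs(float(res.subs(point).evalf())) < 1e-12 for res in residuals])  # [True, True, True]
```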
By contrast, for positive energy, the Poisson brackets have the opposite sign, { D i , D j } = − ∑ s = 1 3 ε i j s L s . {\displaystyle \{D_{i},D_{j}\}=-\sum _{s=1}^{3}\varepsilon _{ijs}L_{s}.} In this case, the Lie algebra is isomorphic to so(3,1). The distinction between positive and negative energies arises because the desired scaling—the one that eliminates the Hamiltonian from the right-hand side of the Poisson bracket relations between the components of the scaled LRL vector—involves the square root of the Hamiltonian. To obtain real-valued functions, we must then take the absolute value of the Hamiltonian, which distinguishes between positive values (where | H | = H {\displaystyle |H|=H} ) and negative values (where | H | = − H {\displaystyle |H|=-H} ). === Laplace-Runge-Lenz operator for the hydrogen atom in momentum space === Scaled Laplace-Runge-Lenz operator in the momentum space was found in 2022 . The formula for the operator is simpler than in position space: A ^ p = ı ( l ^ p + 1 ) p − ( p 2 + 1 ) 2 ı ∇ p , {\displaystyle {\hat {\mathbf {A} }}_{\mathbf {p} }=\imath ({\hat {l}}_{\mathbf {p} }+1)\mathbf {p} -{\frac {(p^{2}+1)}{2}}\imath \mathbf {\nabla } _{\mathbf {p} },} where the "degree operator" l ^ p = ( p ∇ p ) {\displaystyle {\hat {l}}_{\mathbf {p} }=(\mathbf {p} \mathbf {\nabla } _{\mathbf {p} })} multiplies a homogeneous polynomial by its degree. === Casimir invariants and the energy levels === The Casimir invariants for negative energies are C 1 = D ⋅ D + L ⋅ L = m k 2 2 | E | , C 2 = D ⋅ L = 0 , {\displaystyle {\begin{aligned}C_{1}&=\mathbf {D} \cdot \mathbf {D} +\mathbf {L} \cdot \mathbf {L} ={\frac {mk^{2}}{2|E|}},\\C_{2}&=\mathbf {D} \cdot \mathbf {L} =0,\end{aligned}}} and have vanishing Poisson brackets with all components of D and L, { C 1 , L i } = { C 1 , D i } = { C 2 , L i } = { C 2 , D i } = 0. {\displaystyle \{C_{1},L_{i}\}=\{C_{1},D_{i}\}=\{C_{2},L_{i}\}=\{C_{2},D_{i}\}=0.} C2 is trivially zero, since the two vectors are always perpendicular. However, the other invariant, C1, is non-trivial and depends only on m, k and E. Upon canonical quantization, this invariant allows the energy levels of hydrogen-like atoms to be derived using only quantum mechanical canonical commutation relations, instead of the conventional solution of the Schrödinger equation. This derivation is discussed in detail in the next section. == Quantum mechanics of the hydrogen atom == Poisson brackets provide a simple guide for quantizing most classical systems: the commutation relation of two quantum mechanical operators is specified by the Poisson bracket of the corresponding classical variables, multiplied by iħ. By carrying out this quantization and calculating the eigenvalues of the C1 Casimir operator for the Kepler problem, Wolfgang Pauli was able to derive the energy levels of hydrogen-like atoms (Figure 6) and, thus, their atomic emission spectrum. This elegant 1926 derivation was obtained before the development of the Schrödinger equation. A subtlety of the quantum mechanical operator for the LRL vector A is that the momentum and angular momentum operators do not commute; hence, the quantum operator cross product of p and L must be defined carefully. 
Typically, the operators for the Cartesian components As are defined using a symmetrized (Hermitian) product, A s = − m k r ^ s + 1 2 ∑ i = 1 3 ∑ j = 1 3 ε s i j ( p i ℓ j + ℓ j p i ) , {\displaystyle A_{s}=-mk{\hat {r}}_{s}+{\frac {1}{2}}\sum _{i=1}^{3}\sum _{j=1}^{3}\varepsilon _{sij}(p_{i}\ell _{j}+\ell _{j}p_{i}),} Once this is done, one can show that the quantum LRL operators satisfy commutations relations exactly analogous to the Poisson bracket relations in the previous section—just replacing the Poisson bracket with 1 / ( i ℏ ) {\displaystyle 1/(i\hbar )} times the commutator. From these operators, additional ladder operators for L can be defined, J 0 = A 3 , J ± 1 = ∓ 1 2 ( A 1 ± i A 2 ) . {\displaystyle {\begin{aligned}J_{0}&=A_{3},\\J_{\pm 1}&=\mp {\tfrac {1}{\sqrt {2}}}\left(A_{1}\pm iA_{2}\right).\end{aligned}}} These further connect different eigenstates of L2, so different spin multiplets, among themselves. A normalized first Casimir invariant operator, quantum analog of the above, can likewise be defined, C 1 = − m k 2 2 ℏ 2 H − 1 − I , {\displaystyle C_{1}=-{\frac {mk^{2}}{2\hbar ^{2}}}H^{-1}-I,} where H−1 is the inverse of the Hamiltonian energy operator, and I is the identity operator. Applying these ladder operators to the eigenstates |ℓmn〉 of the total angular momentum, azimuthal angular momentum and energy operators, the eigenvalues of the first Casimir operator, C1, are seen to be quantized, n2 − 1. Importantly, by dint of the vanishing of C2, they are independent of the ℓ and m quantum numbers, making the energy levels degenerate. Hence, the energy levels are given by E n = − m k 2 2 ℏ 2 n 2 , {\displaystyle E_{n}=-{\frac {mk^{2}}{2\hbar ^{2}n^{2}}},} which coincides with the Rydberg formula for hydrogen-like atoms (Figure 6). The additional symmetry operators A have connected the different ℓ multiplets among themselves, for a given energy (and C1), dictating n2 states at each level. In effect, they have enlarged the angular momentum group SO(3) to SO(4)/Z2 ~ SO(3) × SO(3). == Conservation and symmetry == The conservation of the LRL vector corresponds to a subtle symmetry of the system. In classical mechanics, symmetries are continuous operations that map one orbit onto another without changing the energy of the system; in quantum mechanics, symmetries are continuous operations that "mix" electronic orbitals of the same energy, i.e., degenerate energy levels. A conserved quantity is usually associated with such symmetries. For example, every central force is symmetric under the rotation group SO(3), leading to the conservation of the angular momentum L. Classically, an overall rotation of the system does not affect the energy of an orbit; quantum mechanically, rotations mix the spherical harmonics of the same quantum number ℓ without changing the energy. The symmetry for the inverse-square central force is higher and more subtle. The peculiar symmetry of the Kepler problem results in the conservation of both the angular momentum vector L and the LRL vector A (as defined above) and, quantum mechanically, ensures that the energy levels of hydrogen do not depend on the angular momentum quantum numbers ℓ and m. The symmetry is more subtle, however, because the symmetry operation must take place in a higher-dimensional space; such symmetries are often called "hidden symmetries". 
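As a numeric illustration of the last two points, the formula E_n = −mk²/(2ℏ²n²) with m taken as the electron mass and k = e²/(4πε₀) reproduces the familiar hydrogen energies, each level carrying n² degenerate states; reduced-mass and relativistic corrections are ignored in this rough check.

```python
from scipy.constants import hbar, m_e, e, epsilon_0, pi

k = e**2 / (4 * pi * epsilon_0)                        # Coulomb force constant for hydrogen
for n in (1, 2, 3):
    E_n = -m_e * k**2 / (2 * hbar**2 * n**2)           # energy in joules
    degeneracy = sum(2*l + 1 for l in range(n))        # = n**2 states at this level
    print(f"n={n}:  E_n = {E_n / e:7.3f} eV,  degeneracy = {degeneracy}")
# prints roughly -13.606, -3.401, -1.512 eV with degeneracies 1, 4, 9
```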
Classically, the higher symmetry of the Kepler problem allows for continuous alterations of the orbits that preserve energy but not angular momentum; expressed another way, orbits of the same energy but different angular momentum (eccentricity) can be transformed continuously into one another. Quantum mechanically, this corresponds to mixing orbitals that differ in the ℓ and m quantum numbers, such as the s(ℓ = 0) and p(ℓ = 1) atomic orbitals. Such mixing cannot be done with ordinary three-dimensional translations or rotations, but is equivalent to a rotation in a higher dimension. For negative energies – i.e., for bound systems – the higher symmetry group is SO(4), which preserves the length of four-dimensional vectors | e | 2 = e 1 2 + e 2 2 + e 3 2 + e 4 2 . {\displaystyle |\mathbf {e} |^{2}=e_{1}^{2}+e_{2}^{2}+e_{3}^{2}+e_{4}^{2}.} In 1935, Vladimir Fock showed that the quantum mechanical bound Kepler problem is equivalent to the problem of a free particle confined to a three-dimensional unit sphere in four-dimensional space. Specifically, Fock showed that the Schrödinger wavefunction in the momentum space for the Kepler problem was the stereographic projection of the spherical harmonics on the sphere. Rotation of the sphere and re-projection results in a continuous mapping of the elliptical orbits without changing the energy, an SO(4) symmetry sometimes known as Fock symmetry; quantum mechanically, this corresponds to a mixing of all orbitals of the same energy quantum number n. Valentine Bargmann noted subsequently that the Poisson brackets for the angular momentum vector L and the scaled LRL vector A formed the Lie algebra for SO(4). Simply put, the six quantities A and L correspond to the six conserved angular momenta in four dimensions, associated with the six possible simple rotations in that space (there are six ways of choosing two axes from four). This conclusion does not imply that our universe is a three-dimensional sphere; it merely means that this particular physics problem (the two-body problem for inverse-square central forces) is mathematically equivalent to a free particle on a three-dimensional sphere. For positive energies – i.e., for unbound, "scattered" systems – the higher symmetry group is SO(3,1), which preserves the Minkowski length of 4-vectors d s 2 = e 1 2 + e 2 2 + e 3 2 − e 4 2 . {\displaystyle ds^{2}=e_{1}^{2}+e_{2}^{2}+e_{3}^{2}-e_{4}^{2}.} Both the negative- and positive-energy cases were considered by Fock and Bargmann and have been reviewed encyclopedically by Bander and Itzykson. The orbits of central-force systems – and those of the Kepler problem in particular – are also symmetric under reflection. Therefore, the SO(3), SO(4) and SO(3,1) groups cited above are not the full symmetry groups of their orbits; the full groups are O(3), O(4), and O(3,1), respectively. Nevertheless, only the connected subgroups, SO(3), SO(4), and SO+(3,1), are needed to demonstrate the conservation of the angular momentum and LRL vectors; the reflection symmetry is irrelevant for conservation, which may be derived from the Lie algebra of the group. == Rotational symmetry in four dimensions == The connection between the Kepler problem and four-dimensional rotational symmetry SO(4) can be readily visualized. Let the four-dimensional Cartesian coordinates be denoted (w, x, y, z) where (x, y, z) represent the Cartesian coordinates of the normal position vector r. 
The three-dimensional momentum vector p is associated with a four-dimensional vector η {\displaystyle {\boldsymbol {\eta }}} on a three-dimensional unit sphere η = p 2 − p 0 2 p 2 + p 0 2 w ^ + 2 p 0 p 2 + p 0 2 p = m k − r p 0 2 m k w ^ + r p 0 m k p , {\displaystyle {\begin{aligned}{\boldsymbol {\eta }}&={\frac {p^{2}-p_{0}^{2}}{p^{2}+p_{0}^{2}}}\mathbf {\hat {w}} +{\frac {2p_{0}}{p^{2}+p_{0}^{2}}}\mathbf {p} \\[1em]&={\frac {mk-rp_{0}^{2}}{mk}}\mathbf {\hat {w}} +{\frac {rp_{0}}{mk}}\mathbf {p} ,\end{aligned}}} where w ^ {\displaystyle \mathbf {\hat {w}} } is the unit vector along the new w axis. The transformation mapping p to η can be uniquely inverted; for example, the x component of the momentum equals p x = p 0 η x 1 − η w , {\displaystyle p_{x}=p_{0}{\frac {\eta _{x}}{1-\eta _{w}}},} and similarly for py and pz. In other words, the three-dimensional vector p is a stereographic projection of the four-dimensional η {\displaystyle {\boldsymbol {\eta }}} vector, scaled by p0 (Figure 8). Without loss of generality, we may eliminate the normal rotational symmetry by choosing the Cartesian coordinates such that the z axis is aligned with the angular momentum vector L and the momentum hodographs are aligned as they are in Figure 7, with the centers of the circles on the y axis. Since the motion is planar, and p and L are perpendicular, pz = ηz = 0 and attention may be restricted to the three-dimensional vector η = ( η w , η x , η y ) {\displaystyle {\boldsymbol {\eta }}=(\eta _{w},\eta _{x},\eta _{y})} . The family of Apollonian circles of momentum hodographs (Figure 7) correspond to a family of great circles on the three-dimensional η {\displaystyle {\boldsymbol {\eta }}} sphere, all of which intersect the ηx axis at the two foci ηx = ±1, corresponding to the momentum hodograph foci at px = ±p0. These great circles are related by a simple rotation about the ηx-axis (Figure 8). This rotational symmetry transforms all the orbits of the same energy into one another; however, such a rotation is orthogonal to the usual three-dimensional rotations, since it transforms the fourth dimension ηw. This higher symmetry is characteristic of the Kepler problem and corresponds to the conservation of the LRL vector. An elegant action-angle variables solution for the Kepler problem can be obtained by eliminating the redundant four-dimensional coordinates η {\displaystyle {\boldsymbol {\eta }}} in favor of elliptic cylindrical coordinates (χ, ψ, φ) η w = cn ⁡ χ cn ⁡ ψ , η x = sn ⁡ χ dn ⁡ ψ cos ⁡ ϕ , η y = sn ⁡ χ dn ⁡ ψ sin ⁡ ϕ , η z = dn ⁡ χ sn ⁡ ψ , {\displaystyle {\begin{aligned}\eta _{w}&=\operatorname {cn} \chi \operatorname {cn} \psi ,\\[1ex]\eta _{x}&=\operatorname {sn} \chi \operatorname {dn} \psi \cos \phi ,\\[1ex]\eta _{y}&=\operatorname {sn} \chi \operatorname {dn} \psi \sin \phi ,\\[1ex]\eta _{z}&=\operatorname {dn} \chi \operatorname {sn} \psi ,\end{aligned}}} where sn, cn and dn are Jacobi's elliptic functions. == Generalizations to other potentials and relativity == The Laplace–Runge–Lenz vector can also be generalized to identify conserved quantities that apply to other situations. In the presence of a uniform electric field E, the generalized Laplace–Runge–Lenz vector A {\displaystyle {\mathcal {A}}} is A = A + m q 2 [ ( r × E ) × r ] , {\displaystyle {\mathcal {A}}=\mathbf {A} +{\frac {mq}{2}}\left[\left(\mathbf {r} \times \mathbf {E} \right)\times \mathbf {r} \right],} where q is the charge of the orbiting particle. 
Although A {\displaystyle {\mathcal {A}}} is not conserved, it gives rise to a conserved quantity, namely A ⋅ E {\displaystyle {\mathcal {A}}\cdot \mathbf {E} } . Further generalizing the Laplace–Runge–Lenz vector to other potentials and special relativity, the most general form can be written as A = ( ∂ ξ ∂ u ) ( p × L ) + [ ξ − u ( ∂ ξ ∂ u ) ] L 2 r ^ , {\displaystyle {\mathcal {A}}=\left({\frac {\partial \xi }{\partial u}}\right)\left(\mathbf {p} \times \mathbf {L} \right)+\left[\xi -u\left({\frac {\partial \xi }{\partial u}}\right)\right]L^{2}\mathbf {\hat {r}} ,} where u = 1/r and ξ = cos θ, with the angle θ defined by θ = L ∫ u d u m 2 c 2 ( γ 2 − 1 ) − L 2 u 2 , {\displaystyle \theta =L\int ^{u}{\frac {du}{\sqrt {m^{2}c^{2}(\gamma ^{2}-1)-L^{2}u^{2}}}},} and γ is the Lorentz factor. As before, we may obtain a conserved binormal vector B by taking the cross product with the conserved angular momentum vector B = L × A . {\displaystyle {\mathcal {B}}=\mathbf {L} \times {\mathcal {A}}.} These two vectors may likewise be combined into a conserved dyadic tensor W, W = α A ⊗ A + β B ⊗ B . {\displaystyle {\mathcal {W}}=\alpha {\mathcal {A}}\otimes {\mathcal {A}}+\beta \,{\mathcal {B}}\otimes {\mathcal {B}}.} In illustration, the LRL vector for a non-relativistic, isotropic harmonic oscillator can be calculated. Since the force is central, F ( r ) = − k r , {\displaystyle \mathbf {F} (r)=-k\mathbf {r} ,} the angular momentum vector is conserved and the motion lies in a plane. The conserved dyadic tensor can be written in a simple form W = 1 2 m p ⊗ p + k 2 r ⊗ r , {\displaystyle {\mathcal {W}}={\frac {1}{2m}}\mathbf {p} \otimes \mathbf {p} +{\frac {k}{2}}\,\mathbf {r} \otimes \mathbf {r} ,} although p and r are not necessarily perpendicular. The corresponding Runge–Lenz vector is more complicated, A = 1 m r 2 ω 0 A − m r 2 E + L 2 { ( p × L ) + ( m r ω 0 A − m r E ) r ^ } , {\displaystyle {\mathcal {A}}={\frac {1}{\sqrt {mr^{2}\omega _{0}A-mr^{2}E+L^{2}}}}\left\{\left(\mathbf {p} \times \mathbf {L} \right)+\left(mr\omega _{0}A-mrE\right)\mathbf {\hat {r}} \right\},} where ω 0 = k m {\displaystyle \omega _{0}={\sqrt {\frac {k}{m}}}} is the natural oscillation frequency, and A = ( E 2 − ω 2 L 2 ) 1 / 2 / ω . {\displaystyle A=(E^{2}-\omega ^{2}L^{2})^{1/2}/\omega .} == Proofs that the Laplace–Runge–Lenz vector is conserved in Kepler problems == The following are arguments showing that the LRL vector is conserved under central forces that obey an inverse-square law. === Direct proof of conservation === A central force F {\displaystyle \mathbf {F} } acting on the particle is F = d p d t = f ( r ) r r = f ( r ) r ^ {\displaystyle \mathbf {F} ={\frac {d\mathbf {p} }{dt}}=f(r){\frac {\mathbf {r} }{r}}=f(r)\mathbf {\hat {r}} } for some function f ( r ) {\displaystyle f(r)} of the radius r {\displaystyle r} . 
Since the angular momentum L = r × p {\displaystyle \mathbf {L} =\mathbf {r} \times \mathbf {p} } is conserved under central forces, d d t L = 0 {\textstyle {\frac {d}{dt}}\mathbf {L} =0} and d d t ( p × L ) = d p d t × L = f ( r ) r ^ × ( r × m d r d t ) = f ( r ) m r [ r ( r ⋅ d r d t ) − r 2 d r d t ] , {\displaystyle {\frac {d}{dt}}\left(\mathbf {p} \times \mathbf {L} \right)={\frac {d\mathbf {p} }{dt}}\times \mathbf {L} =f(r)\mathbf {\hat {r}} \times \left(\mathbf {r} \times m{\frac {d\mathbf {r} }{dt}}\right)=f(r){\frac {m}{r}}\left[\mathbf {r} \left(\mathbf {r} \cdot {\frac {d\mathbf {r} }{dt}}\right)-r^{2}{\frac {d\mathbf {r} }{dt}}\right],} where the momentum p = m d r d t {\textstyle \mathbf {p} =m{\frac {d\mathbf {r} }{dt}}} and where the triple cross product has been simplified using Lagrange's formula r × ( r × d r d t ) = r ( r ⋅ d r d t ) − r 2 d r d t . {\displaystyle \mathbf {r} \times \left(\mathbf {r} \times {\frac {d\mathbf {r} }{dt}}\right)=\mathbf {r} \left(\mathbf {r} \cdot {\frac {d\mathbf {r} }{dt}}\right)-r^{2}{\frac {d\mathbf {r} }{dt}}.} The identity d d t ( r ⋅ r ) = 2 r ⋅ d r d t = d d t ( r 2 ) = 2 r d r d t {\displaystyle {\frac {d}{dt}}\left(\mathbf {r} \cdot \mathbf {r} \right)=2\mathbf {r} \cdot {\frac {d\mathbf {r} }{dt}}={\frac {d}{dt}}(r^{2})=2r{\frac {dr}{dt}}} yields the equation d d t ( p × L ) = − m f ( r ) r 2 [ 1 r d r d t − r r 2 d r d t ] = − m f ( r ) r 2 d d t ( r r ) . {\displaystyle {\frac {d}{dt}}\left(\mathbf {p} \times \mathbf {L} \right)=-mf(r)r^{2}\left[{\frac {1}{r}}{\frac {d\mathbf {r} }{dt}}-{\frac {\mathbf {r} }{r^{2}}}{\frac {dr}{dt}}\right]=-mf(r)r^{2}{\frac {d}{dt}}\left({\frac {\mathbf {r} }{r}}\right).} For the special case of an inverse-square central force f ( r ) = − k r 2 {\textstyle f(r)={\frac {-k}{r^{2}}}} , this equals d d t ( p × L ) = m k d d t ( r r ) = d d t ( m k r ^ ) . {\displaystyle {\frac {d}{dt}}\left(\mathbf {p} \times \mathbf {L} \right)=mk{\frac {d}{dt}}\left({\frac {\mathbf {r} }{r}}\right)={\frac {d}{dt}}(mk\mathbf {\hat {r}} ).} Therefore, A is conserved for inverse-square central forces d d t A = d d t ( p × L ) − d d t ( m k r ^ ) = 0 . {\displaystyle {\frac {d}{dt}}\mathbf {A} ={\frac {d}{dt}}\left(\mathbf {p} \times \mathbf {L} \right)-{\frac {d}{dt}}\left(mk\mathbf {\hat {r}} \right)=\mathbf {0} .} A shorter proof is obtained by using the relation of angular momentum to angular velocity, L = m r 2 ω {\displaystyle \mathbf {L} =mr^{2}{\boldsymbol {\omega }}} , which holds for a particle traveling in a plane perpendicular to L {\displaystyle \mathbf {L} } . Specifying to inverse-square central forces, the time derivative of p × L {\displaystyle \mathbf {p} \times \mathbf {L} } is d d t p × L = ( − k r 2 r ^ ) × ( m r 2 ω ) = m k ω × r ^ = m k d d t r ^ , {\displaystyle {\frac {d}{dt}}\mathbf {p} \times \mathbf {L} =\left({\frac {-k}{r^{2}}}\mathbf {\hat {r}} \right)\times \left(mr^{2}{\boldsymbol {\omega }}\right)=mk\,{\boldsymbol {\omega }}\times \mathbf {\hat {r}} =mk\,{\frac {d}{dt}}\mathbf {\hat {r}} ,} where the last equality holds because a unit vector can only change by rotation, and ω × r ^ {\displaystyle {\boldsymbol {\omega }}\times \mathbf {\hat {r}} } is the orbital velocity of the rotating vector. Thus, A is seen to be a difference of two vectors with equal time derivatives. As described elsewhere in this article, this LRL vector A is a special case of a general conserved vector A {\displaystyle {\mathcal {A}}} that can be defined for all central forces. 
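The conservation just proved can also be seen numerically. The sketch below (arbitrary units, an arbitrary bound initial condition) integrates the equations of motion and monitors A along the trajectory, once for a pure inverse-square force and once with a small additional inverse-cube force; only in the first case should A stay fixed to within the integrator tolerance.

```python
import numpy as np
from scipy.integrate import solve_ivp

m, k, eps = 1.0, 1.0, 0.02                            # arbitrary units; eps = perturbation strength

def rhs(t, s, pert):
    r, p = s[:3], s[3:]
    rn = np.linalg.norm(r)
    F = -k * r / rn**3 - pert * eps * r / rn**4       # inverse-square force, optional 1/r^3 term
    return np.concatenate([p / m, F])

def lrl(s):
    r, p = s[:3], s[3:]
    L = np.cross(r, p)
    return np.cross(p, L) - m * k * r / np.linalg.norm(r)

s0 = np.array([1.0, 0.0, 0.0, 0.0, 0.8, 0.0])         # a bound, non-circular orbit
for pert, label in ((0, "pure inverse-square   "), (1, "with 1/r^3 perturbation")):
    sol = solve_ivp(rhs, (0.0, 50.0), s0, args=(pert,), rtol=1e-10, atol=1e-12, dense_output=True)
    A = np.array([lrl(sol.sol(t)) for t in np.linspace(0.0, 50.0, 200)])
    print(label, "max |A(t) - A(0)| =", np.max(np.linalg.norm(A - A[0], axis=1)))
    # expected: essentially zero drift in the first case, a visible slow rotation of A in the second
```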
However, since most central forces do not produce closed orbits (see Bertrand's theorem), the analogous vector A {\displaystyle {\mathcal {A}}} rarely has a simple definition and is generally a multivalued function of the angle θ between r and A {\displaystyle {\mathcal {A}}} . === Hamilton–Jacobi equation in parabolic coordinates === The constancy of the LRL vector can also be derived from the Hamilton–Jacobi equation in parabolic coordinates (ξ, η), which are defined by the equations ξ = r + x , η = r − x , {\displaystyle {\begin{aligned}\xi &=r+x,\\\eta &=r-x,\end{aligned}}} where r represents the radius in the plane of the orbit r = x 2 + y 2 . {\displaystyle r={\sqrt {x^{2}+y^{2}}}.} The inversion of these coordinates is x = 1 2 ( ξ − η ) , y = ξ η , {\displaystyle {\begin{aligned}x&={\tfrac {1}{2}}(\xi -\eta ),\\y&={\sqrt {\xi \eta }},\end{aligned}}} Separation of the Hamilton–Jacobi equation in these coordinates yields the two equivalent equations 2 ξ p ξ 2 − m k − m E ξ = − Γ , 2 η p η 2 − m k − m E η = Γ , {\displaystyle {\begin{aligned}2\xi p_{\xi }^{2}-mk-mE\xi &=-\Gamma ,\\2\eta p_{\eta }^{2}-mk-mE\eta &=\Gamma ,\end{aligned}}} where Γ is a constant of motion. Subtraction and re-expression in terms of the Cartesian momenta px and py shows that Γ is equivalent to the LRL vector Γ = p y ( x p y − y p x ) − m k x r = A x . {\displaystyle \Gamma =p_{y}(xp_{y}-yp_{x})-mk{\frac {x}{r}}=A_{x}.} === Noether's theorem === The connection between the rotational symmetry described above and the conservation of the LRL vector can be made quantitative by way of Noether's theorem. This theorem, which is used for finding constants of motion, states that any infinitesimal variation of the generalized coordinates of a physical system δ q i = ε g i ( q , q ˙ , t ) {\displaystyle \delta q_{i}=\varepsilon g_{i}(\mathbf {q} ,\mathbf {\dot {q}} ,t)} that causes the Lagrangian to vary to first order by a total time derivative δ L = ε d d t G ( q , t ) {\displaystyle \delta L=\varepsilon {\frac {d}{dt}}G(\mathbf {q} ,t)} corresponds to a conserved quantity Γ Γ = − G + ∑ i g i ( ∂ L ∂ q ˙ i ) . {\displaystyle \Gamma =-G+\sum _{i}g_{i}\left({\frac {\partial L}{\partial {\dot {q}}_{i}}}\right).} In particular, the conserved LRL vector component As corresponds to the variation in the coordinates δ s x i = ε 2 [ 2 p i x s − x i p s − δ i s ( r ⋅ p ) ] , {\displaystyle \delta _{s}x_{i}={\frac {\varepsilon }{2}}\left[2p_{i}x_{s}-x_{i}p_{s}-\delta _{is}\left(\mathbf {r} \cdot \mathbf {p} \right)\right],} where i equals 1, 2 and 3, with xi and pi being the i-th components of the position and momentum vectors r and p, respectively; as usual, δis represents the Kronecker delta. The resulting first-order change in the Lagrangian is δ L = 1 2 ε m k d d t ( x s r ) . {\displaystyle \delta L={\frac {1}{2}}\varepsilon mk{\frac {d}{dt}}\left({\frac {x_{s}}{r}}\right).} Substitution into the general formula for the conserved quantity Γ yields the conserved component As of the LRL vector, A s = [ p 2 x s − p s ( r ⋅ p ) ] − m k ( x s r ) = [ p × ( r × p ) ] s − m k ( x s r ) . 
{\displaystyle A_{s}=\left[p^{2}x_{s}-p_{s}\ \left(\mathbf {r} \cdot \mathbf {p} \right)\right]-mk\left({\frac {x_{s}}{r}}\right)=\left[\mathbf {p} \times \left(\mathbf {r} \times \mathbf {p} \right)\right]_{s}-mk\left({\frac {x_{s}}{r}}\right).} === Lie transformation === Noether's theorem derivation of the conservation of the LRL vector A is elegant, but has one drawback: the coordinate variation δxi involves not only the position r, but also the momentum p or, equivalently, the velocity v. This drawback may be eliminated by instead deriving the conservation of A using an approach pioneered by Sophus Lie. Specifically, one may define a Lie transformation in which the coordinates r and the time t are scaled by different powers of a parameter λ (Figure 9), t → λ 3 t , r → λ 2 r , p → 1 λ p . {\displaystyle t\rightarrow \lambda ^{3}t,\qquad \mathbf {r} \rightarrow \lambda ^{2}\mathbf {r} ,\qquad \mathbf {p} \rightarrow {\frac {1}{\lambda }}\mathbf {p} .} This transformation changes the total angular momentum L and energy E, L → λ L , E → 1 λ 2 E , {\displaystyle L\rightarrow \lambda L,\qquad E\rightarrow {\frac {1}{\lambda ^{2}}}E,} but preserves their product EL2. Therefore, the eccentricity e and the magnitude A are preserved, as may be seen from the equation for A2 A 2 = m 2 k 2 e 2 = m 2 k 2 + 2 m E L 2 . {\displaystyle A^{2}=m^{2}k^{2}e^{2}=m^{2}k^{2}+2mEL^{2}.} The direction of A is preserved as well, since the semiaxes are not altered by a global scaling. This transformation also preserves Kepler's third law, namely, that the semiaxis a and the period T form a constant T2/a3. == Alternative scalings, symbols and formulations == Unlike the momentum and angular momentum vectors p and L, there is no universally accepted definition of the Laplace–Runge–Lenz vector; several different scaling factors and symbols are used in the scientific literature. The most common definition is given above, but another common alternative is to divide by the quantity mk to obtain a dimensionless conserved eccentricity vector e = 1 m k ( p × L ) − r ^ = m k ( v × ( r × v ) ) − r ^ , {\displaystyle \mathbf {e} ={\frac {1}{mk}}\left(\mathbf {p} \times \mathbf {L} \right)-\mathbf {\hat {r}} ={\frac {m}{k}}\left(\mathbf {v} \times \left(\mathbf {r} \times \mathbf {v} \right)\right)-\mathbf {\hat {r}} ,} where v is the velocity vector. This scaled vector e has the same direction as A and its magnitude equals the eccentricity of the orbit, and thus vanishes for circular orbits. Other scaled versions are also possible, e.g., by dividing A by m alone M = v × L − k r ^ , {\displaystyle \mathbf {M} =\mathbf {v} \times \mathbf {L} -k\mathbf {\hat {r}} ,} or by p0 D = A p 0 = 1 2 m | E | ( p × L − m k r ^ ) , {\displaystyle \mathbf {D} ={\frac {\mathbf {A} }{p_{0}}}={\frac {1}{\sqrt {2m|E|}}}\left(\mathbf {p} \times \mathbf {L} -mk\mathbf {\hat {r}} \right),} which has the same units as the angular momentum vector L. In rare cases, the sign of the LRL vector may be reversed, i.e., scaled by −1. Other common symbols for the LRL vector include a, R, F, J and V. However, the choice of scaling and symbol for the LRL vector do not affect its conservation. An alternative conserved vector is the binormal vector B studied by William Rowan Hamilton, which is conserved and points along the minor semiaxis of the ellipse. (It is not defined for vanishing eccentricity.) The LRL vector A = B × L is the cross product of B and L (Figure 4). 
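Given A and L, the binormal vector can be reconstructed explicitly; the form B = (L × A)/L² used below is an inference from A = B × L together with B lying in the orbital plane, not a formula quoted here, and the numeric values are illustrative only.

```python
import numpy as np

m, k = 1.0, 1.0                                   # arbitrary units
r = np.array([1.0, 0.0, 0.0])
p = np.array([0.1, 0.9, 0.0])

L = np.cross(r, p)
A = np.cross(p, L) - m * k * r / np.linalg.norm(r)
B = np.cross(L, A) / np.dot(L, L)                 # reconstruction of the binormal vector

print(np.allclose(np.cross(B, L), A))             # True: A = B x L
print(np.linalg.norm(B), np.linalg.norm(A) / np.linalg.norm(L))  # |B| equals A/L
```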
On the momentum hodograph in the relevant section above, B is readily seen to connect the origin of momenta with the center of the circular hodograph, and to possess magnitude A/L. At perihelion, it points in the direction of the momentum. The vector B is denoted as "binormal" since it is perpendicular to both A and L. Similar to the LRL vector itself, the binormal vector can be defined with different scalings and symbols. The two conserved vectors, A and B can be combined to form a conserved dyadic tensor W, W = α A ⊗ A + β B ⊗ B , {\displaystyle \mathbf {W} =\alpha \mathbf {A} \otimes \mathbf {A} +\beta \,\mathbf {B} \otimes \mathbf {B} ,} where α and β are arbitrary scaling constants and ⊗ {\displaystyle \otimes } represents the tensor product (which is not related to the vector cross product, despite their similar symbol). Written in explicit components, this equation reads W i j = α A i A j + β B i B j . {\displaystyle W_{ij}=\alpha A_{i}A_{j}+\beta B_{i}B_{j}.} Being perpendicular to each another, the vectors A and B can be viewed as the principal axes of the conserved tensor W, i.e., its scaled eigenvectors. W is perpendicular to L , L ⋅ W = α ( L ⋅ A ) A + β ( L ⋅ B ) B = 0 , {\displaystyle \mathbf {L} \cdot \mathbf {W} =\alpha \left(\mathbf {L} \cdot \mathbf {A} \right)\mathbf {A} +\beta \left(\mathbf {L} \cdot \mathbf {B} \right)\mathbf {B} =0,} since A and B are both perpendicular to L as well, L ⋅ A = L ⋅ B = 0. More directly, this equation reads, in explicit components, ( L ⋅ W ) j = α ( ∑ i = 1 3 L i A i ) A j + β ( ∑ i = 1 3 L i B i ) B j = 0. {\displaystyle \left(\mathbf {L} \cdot \mathbf {W} \right)_{j}=\alpha \left(\sum _{i=1}^{3}L_{i}A_{i}\right)A_{j}+\beta \left(\sum _{i=1}^{3}L_{i}B_{i}\right)B_{j}=0.} == See also == Astrodynamics Orbit Eccentricity vector Orbital elements Bertrand's theorem Binet equation Two-body problem == References == == Further reading == Baez, John (2008). "The Kepler Problem Revisited: The Laplace–Runge–Lenz Vector" (PDF). Retrieved 2021-05-31. Baez, John (2003). "Mysteries of the gravitational 2-body problem". Archived from the original on 2008-10-21. Retrieved 2004-12-11. Baez, John (2018). "Mysteries of the gravitational 2-body problem". Retrieved 2021-05-31. Updated version of previous source. D'Eliseo, M. M. (2007). "The first-order orbital equation". American Journal of Physics. 75 (4): 352–355. Bibcode:2007AmJPh..75..352D. doi:10.1119/1.2432126. Hall, Brian C. (2013), Quantum Theory for Mathematicians, Graduate Texts in Mathematics, vol. 267, Springer, Bibcode:2013qtm..book.....H, ISBN 978-1461471158. Leach, P. G. L.; G. P. Flessas (2003). "Generalisations of the Laplace–Runge–Lenz vector". J. Nonlinear Math. Phys. 10 (3): 340–423. arXiv:math-ph/0403028. Bibcode:2003JNMP...10..340L. doi:10.2991/jnmp.2003.10.3.6. S2CID 73707398.
Wikipedia/Fock_symmetry_in_theory_of_hydrogen
The Chern–Simons theory is a 3-dimensional topological quantum field theory of Schwarz type. It was discovered first by mathematical physicist Albert Schwarz. It is named after mathematicians Shiing-Shen Chern and James Harris Simons, who introduced the Chern–Simons 3-form. In the Chern–Simons theory, the action is proportional to the integral of the Chern–Simons 3-form. In condensed-matter physics, Chern–Simons theory describes composite fermions and the topological order in fractional quantum Hall effect states. In mathematics, it has been used to calculate knot invariants and three-manifold invariants such as the Jones polynomial. Particularly, Chern–Simons theory is specified by a choice of simple Lie group G known as the gauge group of the theory and also a number referred to as the level of the theory, which is a constant that multiplies the action. The action is gauge dependent, however the partition function of the quantum theory is well-defined when the level is an integer and the gauge field strength vanishes on all boundaries of the 3-dimensional spacetime. It is also the central mathematical object in theoretical models for topological quantum computers (TQC). Specifically, an SU(2) Chern–Simons theory describes the simplest non-abelian anyonic model of a TQC, the Yang–Lee–Fibonacci model. The dynamics of Chern–Simons theory on the 2-dimensional boundary of a 3-manifold is closely related to fusion rules and conformal blocks in conformal field theory, and in particular WZW theory. == The classical theory == === Mathematical origin === In the 1940s S. S. Chern and A. Weil studied the global curvature properties of smooth manifolds M as de Rham cohomology (Chern–Weil theory), which is an important step in the theory of characteristic classes in differential geometry. Given a flat G-principal bundle P on M there exists a unique homomorphism, called the Chern–Weil homomorphism, from the algebra of G-adjoint invariant polynomials on g (Lie algebra of G) to the cohomology H ∗ ( M , R ) {\displaystyle H^{*}(M,\mathbb {R} )} . If the invariant polynomial is homogeneous one can write down concretely any k-form of the closed connection ω as some 2k-form of the associated curvature form Ω of ω. In 1974 S. S. Chern and J. H. Simons had concretely constructed a (2k − 1)-form df(ω) such that d T f ( ω ) = f ( Ω k ) , {\displaystyle dTf(\omega )=f(\Omega ^{k}),} where T is the Chern–Weil homomorphism. This form is called Chern–Simons form. If df(ω) is closed one can integrate the above formula T f ( ω ) = ∫ C f ( Ω k ) , {\displaystyle Tf(\omega )=\int _{C}f(\Omega ^{k}),} where C is a (2k − 1)-dimensional cycle on M. This invariant is called Chern–Simons invariant. As pointed out in the introduction of the Chern–Simons paper, the Chern–Simons invariant CS(M) is the boundary term that cannot be determined by any pure combinatorial formulation. It also can be defined as CS ⁡ ( M ) = ∫ s ( M ) 1 2 T p 1 ∈ R / Z , {\displaystyle \operatorname {CS} (M)=\int _{s(M)}{\tfrac {1}{2}}Tp_{1}\in \mathbb {R} /\mathbb {Z} ,} where p 1 {\displaystyle p_{1}} is the first Pontryagin number and s(M) is the section of the normal orthogonal bundle P. Moreover, the Chern–Simons term is described as the eta invariant defined by Atiyah, Patodi and Singer. The gauge invariance and the metric invariance can be viewed as the invariance under the adjoint Lie group action in the Chern–Weil theory. 
The action integral (path integral) of the field theory in physics is viewed as the Lagrangian integral of the Chern–Simons form and Wilson loop, holonomy of vector bundle on M. These explain why the Chern–Simons theory is closely related to topological field theory. === Configurations === Chern–Simons theories can be defined on any topological 3-manifold M, with or without boundary. As these theories are Schwarz-type topological theories, no metric needs to be introduced on M. Chern–Simons theory is a gauge theory, which means that a classical configuration in the Chern–Simons theory on M with gauge group G is described by a principal G-bundle on M. The connection of this bundle is characterized by a connection one-form A which is valued in the Lie algebra g of the Lie group G. In general the connection A is only defined on individual coordinate patches, and the values of A on different patches are related by maps known as gauge transformations. These are characterized by the assertion that the covariant derivative, which is the sum of the exterior derivative operator d and the connection A, transforms in the adjoint representation of the gauge group G. The square of the covariant derivative with itself can be interpreted as a g-valued 2-form F called the curvature form or field strength. It also transforms in the adjoint representation. === Dynamics === The action S of Chern–Simons theory is proportional to the integral of the Chern–Simons 3-form S = k 4 π ∫ M tr ( A ∧ d A + 2 3 A ∧ A ∧ A ) . {\displaystyle S={\frac {k}{4\pi }}\int _{M}{\text{tr}}\,(A\wedge dA+{\tfrac {2}{3}}A\wedge A\wedge A).} The constant k is called the level of the theory. The classical physics of Chern–Simons theory is independent of the choice of level k. Classically the system is characterized by its equations of motion which are the extrema of the action with respect to variations of the field A. In terms of the field curvature F = d A + A ∧ A {\displaystyle F=dA+A\wedge A\,} the field equation is explicitly 0 = δ S δ A = k 2 π F . {\displaystyle 0={\frac {\delta S}{\delta A}}={\frac {k}{2\pi }}F.} The classical equations of motion are therefore satisfied if and only if the curvature vanishes everywhere, in which case the connection is said to be flat. Thus the classical solutions to G Chern–Simons theory are the flat connections of principal G-bundles on M. Flat connections are determined entirely by holonomies around noncontractible cycles on the base M. More precisely, they are in one-to-one correspondence with equivalence classes of homomorphisms from the fundamental group of M to the gauge group G up to conjugation. If M has a boundary N then there is additional data which describes a choice of trivialization of the principal G-bundle on N. Such a choice characterizes a map from N to G. The dynamics of this map is described by the Wess–Zumino–Witten (WZW) model on N at level k. == Quantization == To canonically quantize Chern–Simons theory one defines a state on each 2-dimensional surface Σ in M. As in any quantum field theory, the states correspond to rays in a Hilbert space. There is no preferred notion of time in a Schwarz-type topological field theory and so one can require that Σ be a Cauchy surface, in fact, a state can be defined on any surface. Σ is of codimension one, and so one may cut M along Σ. After such a cutting M will be a manifold with boundary and in particular classically the dynamics of Σ will be described by a WZW model. 
Witten has shown that this correspondence holds even quantum mechanically. More precisely, he demonstrated that the Hilbert space of states is always finite-dimensional and can be canonically identified with the space of conformal blocks of the G WZW model at level k. For example, when Σ is a 2-sphere, this Hilbert space is one-dimensional and so there is only one state. When Σ is a 2-torus the states correspond to the integrable representations of the affine Lie algebra corresponding to g at level k. Characterizations of the conformal blocks at higher genera are not necessary for Witten's solution of Chern–Simons theory. == Observables == === Wilson loops === The observables of Chern–Simons theory are the n-point correlation functions of gauge-invariant operators. The most often studied class of gauge invariant operators are Wilson loops. A Wilson loop is the holonomy around a loop in M, traced in a given representation R of G. As we will be interested in products of Wilson loops, without loss of generality we may restrict our attention to irreducible representations R. More concretely, given an irreducible representation R and a loop K in M, one may define the Wilson loop W R ( K ) {\displaystyle W_{R}(K)} by W R ( K ) = Tr R P exp ⁡ ( i ∮ K A ) {\displaystyle W_{R}(K)=\operatorname {Tr} _{R}\,{\mathcal {P}}\exp \left(i\oint _{K}A\right)} where A is the connection 1-form and we take the Cauchy principal value of the contour integral and P exp {\displaystyle {\mathcal {P}}\exp } is the path-ordered exponential. === HOMFLY and Jones polynomials === Consider a link L in M, which is a collection of ℓ disjoint loops. A particularly interesting observable is the ℓ-point correlation function formed from the product of the Wilson loops around each disjoint loop, each traced in the fundamental representation of G. One may form a normalized correlation function by dividing this observable by the partition function Z(M), which is just the 0-point correlation function. In the special case in which M is the 3-sphere, Witten has shown that these normalized correlation functions are proportional to known knot polynomials. For example, in G = U(N) Chern–Simons theory at level k the normalized correlation function is, up to a phase, equal to sin ⁡ ( π / ( k + N ) ) sin ⁡ ( π N / ( k + N ) ) {\displaystyle {\frac {\sin(\pi /(k+N))}{\sin(\pi N/(k+N))}}} times the HOMFLY polynomial. In particular when N = 2 the HOMFLY polynomial reduces to the Jones polynomial. In the SO(N) case, one finds a similar expression with the Kauffman polynomial. The phase ambiguity reflects the fact that, as Witten has shown, the quantum correlation functions are not fully defined by the classical data. The linking number of a loop with itself enters into the calculation of the partition function, but this number is not invariant under small deformations and in particular, is not a topological invariant. This number can be rendered well defined if one chooses a framing for each loop, which is a choice of preferred nonzero normal vector at each point along which one deforms the loop to calculate its self-linking number. This procedure is an example of the point-splitting regularization procedure introduced by Paul Dirac and Rudolf Peierls to define apparently divergent quantities in quantum field theory in 1934. Sir Michael Atiyah has shown that there exists a canonical choice of 2-framing, which is generally used in the literature today and leads to a well-defined linking number. 
With the canonical framing the above phase is the exponential of 2πi/(k + N) times the linking number of L with itself. Problem (Extension of Jones polynomial to general 3-manifolds)  "The original Jones polynomial was defined for 1-links in the 3-sphere (the 3-ball, the 3-space R3). Can you define the Jones polynomial for 1-links in any 3-manifold?" See section 1.1 of this paper for the background and the history of this problem. Kauffman submitted a solution in the case of the product manifold of closed oriented surface and the closed interval, by introducing virtual 1-knots. It is open in the other cases. Witten's path integral for Jones polynomial is written for links in any compact 3-manifold formally, but the calculus is not done even in physics level in any case other than the 3-sphere (the 3-ball, the 3-space R3). This problem is also open in physics level. In the case of Alexander polynomial, this problem is solved. == Relationships with other theories == === Topological string theories === In the context of string theory, a U(N) Chern–Simons theory on an oriented Lagrangian 3-submanifold M of a 6-manifold X arises as the string field theory of open strings ending on a D-brane wrapping X in the A-model topological string theory on X. The B-model topological open string field theory on the spacefilling worldvolume of a stack of D5-branes is a 6-dimensional variant of Chern–Simons theory known as holomorphic Chern–Simons theory. === WZW and matrix models === Chern–Simons theories are related to many other field theories. For example, if one considers a Chern–Simons theory with gauge group G on a manifold with boundary then all of the 3-dimensional propagating degrees of freedom may be gauged away, leaving a two-dimensional conformal field theory known as a G Wess–Zumino–Witten model on the boundary. In addition the U(N) and SO(N) Chern–Simons theories at large N are well approximated by matrix models. === Chern–Simons gravity theory === In 1982, S. Deser, R. Jackiw and S. Templeton proposed the Chern–Simons gravity theory in three dimensions, in which the Einstein–Hilbert action in gravity theory is modified by adding the Chern–Simons term. (Deser, Jackiw & Templeton (1982)) In 2003, R. Jackiw and S. Y. Pi extended this theory to four dimensions (Jackiw & Pi (2003)) and Chern–Simons gravity theory has some considerable effects not only to fundamental physics but also condensed matter theory and astronomy. The four-dimensional case is very analogous to the three-dimensional case. In three dimensions, the gravitational Chern–Simons term is CS ⁡ ( Γ ) = 1 2 π 2 ∫ d 3 x ε i j k ( Γ i q p ∂ j Γ k p q + 2 3 Γ i q p Γ j r q Γ k p r ) . {\displaystyle \operatorname {CS} (\Gamma )={\frac {1}{2\pi ^{2}}}\int d^{3}x\varepsilon ^{ijk}{\biggl (}\Gamma _{iq}^{p}\partial _{j}\Gamma _{kp}^{q}+{\frac {2}{3}}\Gamma _{iq}^{p}\Gamma _{jr}^{q}\Gamma _{kp}^{r}{\biggr )}.} This variation gives the Cotton tensor = − 1 2 g ( ε m i j D i R j n + ε n i j D i R j m ) . {\displaystyle =-{\frac {1}{2{\sqrt {g}}}}{\bigl (}\varepsilon ^{mij}D_{i}R_{j}^{n}+\varepsilon ^{nij}D_{i}R_{j}^{m}).} Then, Chern–Simons modification of three-dimensional gravity is made by adding the above Cotton tensor to the field equation, which can be obtained as the vacuum solution by varying the Einstein–Hilbert action. === Chern–Simons matter theories === In 2013 Kenneth A. Intriligator and Nathan Seiberg solved these 3d Chern–Simons gauge theories and their phases using monopoles carrying extra degrees of freedom. 
The Witten index of the many vacua discovered was computed by compactifying the space by turning on mass parameters and then computing the index. In some vacua, supersymmetry was computed to be broken. These monopoles were related to condensed matter vortices. (Intriligator & Seiberg (2013)) The N = 6 Chern–Simons matter theory is the holographic dual of M-theory on A d S 4 × S 7 {\displaystyle AdS_{4}\times S_{7}} . === Four-dimensional Chern–Simons theory === In 2013 Kevin Costello defined a closely related theory defined on a four-dimensional manifold consisting of the product of a two-dimensional 'topological plane' and a two-dimensional (or one complex dimensional) complex curve. He later studied the theory in more detail together with Witten and Masahito Yamazaki, demonstrating how the gauge theory could be related to many notions in integrable systems theory, including exactly solvable lattice models (like the six-vertex model or the XXZ spin chain), integrable quantum field theories (such as the Gross–Neveu model, principal chiral model and symmetric space coset sigma models), the Yang–Baxter equation and quantum groups such as the Yangian which describe symmetries underpinning the integrability of the aforementioned systems. The action on the 4-manifold M = Σ × C {\displaystyle M=\Sigma \times C} where Σ {\displaystyle \Sigma } is a two-dimensional manifold and C {\displaystyle C} is a complex curve is S = ∫ M ω ∧ C S ( A ) {\displaystyle S=\int _{M}\omega \wedge CS(A)} where ω {\displaystyle \omega } is a meromorphic one-form on C {\displaystyle C} . == Chern–Simons terms in other theories == The Chern–Simons term can also be added to models which aren't topological quantum field theories. In 3D, this gives rise to a massive photon if this term is added to the action of Maxwell's theory of electrodynamics. This term can be induced by integrating over a massive charged Dirac field. It also appears for example in the quantum Hall effect. The addition of the Chern–Simons term to various theories gives rise to vortex- or soliton-type solutions Ten- and eleven-dimensional generalizations of Chern–Simons terms appear in the actions of all ten- and eleven-dimensional supergravity theories. === One-loop renormalization of the level === If one adds matter to a Chern–Simons gauge theory then, in general it is no longer topological. However, if one adds n Majorana fermions then, due to the parity anomaly, when integrated out they lead to a pure Chern–Simons theory with a one-loop renormalization of the Chern–Simons level by −n/2, in other words the level k theory with n fermions is equivalent to the level k − n/2 theory without fermions. == See also == Gauge theory (mathematics) Chern–Simons form Topological quantum field theory Alexander polynomial Jones polynomial 2+1D topological gravity Skyrmion ∞-Chern–Simons theory == References == Arthur, K.; Tchrakian, D.H.; Y.-S., Yang (1996). "Topological and nontopological selfdual Chern-Simons solitons in a gauged O(3) sigma model". Physical Review D. 54 (8): 5245–5258. Bibcode:1996PhRvD..54.5245A. doi:10.1103/PhysRevD.54.5245. PMID 10021215. Chern, S.-S. & Simons, J. (1974). "Characteristic forms and geometric invariants". Annals of Mathematics. 99 (1): 48–69. doi:10.2307/1971013. JSTOR 1971013. Deser, Stanley; Jackiw, Roman; Templeton, S. (1982). "Three-Dimensional Massive Gauge Theories" (PDF). Physical Review Letters. 48 (15): 975–978. Bibcode:1982PhRvL..48..975D. doi:10.1103/PhysRevLett.48.975. S2CID 122537043. 
Intriligator, Kenneth; Seiberg, Nathan (2013). "Aspects of 3d N = 2 Chern–Simons Matter Theories". Journal of High Energy Physics. 2013: 79. arXiv:1305.1633. Bibcode:2013JHEP...07..079I. doi:10.1007/JHEP07(2013)079. S2CID 119106931. Jackiw, Roman; Pi, S.-Y (2003). "Chern–Simons modification of general relativity". Physical Review D. 68 (10): 104012. arXiv:gr-qc/0308071. Bibcode:2003PhRvD..68j4012J. doi:10.1103/PhysRevD.68.104012. S2CID 2243511. Kulshreshtha, Usha; Kulshreshtha, D.S.; Mueller-Kirsten, H. J. W.; Vary, J. P. (2009). "Hamiltonian, path integral and BRST formulations of the Chern-Simons-Higgs theory under appropriate gauge fixing". Physica Scripta . 79 (4): 045001. Bibcode:2009PhyS...79d5001K. doi:10.1088/0031-8949/79/04/045001. S2CID 120594654. Kulshreshtha, Usha; Kulshreshtha, D.S.; Vary, J. P. (2010). "Light-front Hamiltonian, path integral and BRST formulations of the Chern-Simons-Higgs theory under appropriate gauge fixing". Physica Scripta. 82 (5): 055101. Bibcode:2010PhyS...82e5101K. doi:10.1088/0031-8949/82/05/055101. S2CID 54602971. Lopez, Ana; Fradkin, Eduardo (1991). "Fractional quantum Hall effect and Chern-Simons gauge theories". Physical Review B. 44 (10): 5246–5262. Bibcode:1991PhRvB..44.5246L. doi:10.1103/PhysRevB.44.5246. PMID 9998334. Marino, Marcos (2005). "Chern–Simons Theory and Topological Strings". Reviews of Modern Physics. 77 (2): 675–720. arXiv:hep-th/0406005. Bibcode:2005RvMP...77..675M. doi:10.1103/RevModPhys.77.675. S2CID 6207500. Marino, Marcos (2005). Chern–Simons Theory, Matrix Models, And Topological Strings. International Series of Monographs on Physics. Oxford University Press. Witten, Edward (1988). "Topological Quantum Field Theory". Communications in Mathematical Physics. 117 (3): 353–386. Bibcode:1988CMaPh.117..353W. doi:10.1007/BF01223371. S2CID 43230714. Witten, Edward (1995). "Chern–Simons Theory as a String Theory". Progress in Mathematics. 133: 637–678. arXiv:hep-th/9207094. Bibcode:1992hep.th....7094W. Specific == External links == "Chern-Simons functional". Encyclopedia of Mathematics. EMS Press. 2001 [1994].
Wikipedia/Chern–Simons_model
The Lanczos tensor or Lanczos potential is a rank 3 tensor in general relativity that generates the Weyl tensor. It was first introduced by Cornelius Lanczos in 1949. The theoretical importance of the Lanczos tensor is that it serves as the gauge field for the gravitational field in the same way that, by analogy, the electromagnetic four-potential generates the electromagnetic field. == Definition == The Lanczos tensor can be defined in a few different ways. The most common modern definition is through the Weyl–Lanczos equations, which demonstrate the generation of the Weyl tensor from the Lanczos tensor. These equations, presented below, were given by Takeno in 1964. The way that Lanczos introduced the tensor originally was as a Lagrange multiplier on constraint terms studied in the variational approach to general relativity. Under any definition, the Lanczos tensor H exhibits the following symmetries: H a b c + H b a c = 0 , {\displaystyle H_{abc}+H_{bac}=0,\,} H a b c + H b c a + H c a b = 0. {\displaystyle H_{abc}+H_{bca}+H_{cab}=0.} The Lanczos tensor always exists in four dimensions but does not generalize to higher dimensions. This highlights the specialness of four dimensions. Note further that the full Riemann tensor cannot in general be derived from derivatives of the Lanczos potential alone. The Einstein field equations must provide the Ricci tensor to complete the components of the Ricci decomposition. The Curtright field has a gauge-transformation dynamics similar to that of Lanczos tensor. But Curtright field exists in arbitrary dimensions > 4D. === Weyl–Lanczos equations === The Weyl–Lanczos equations express the Weyl tensor entirely as derivatives of the Lanczos tensor: C a b c d = H a b c ; d + H c d a ; b + H b a d ; c + H d c b ; a + ( H e ( a c ) ; e + H ( a | e | e ; c ) ) g b d + ( H e ( b d ) ; e + H ( b | e | e ; d ) ) g a c − ( H e ( a d ) ; e + H ( a | e | e ; d ) ) g b c − ( H e ( b c ) ; e + H ( b | e | e ; c ) ) g a d − 2 3 H e f f ; e ( g a c g b d − g a d g b c ) {\displaystyle {\begin{aligned}C_{abcd}&=H_{abc;d}+H_{cda;b}+H_{bad;c}+H_{dcb;a}+(H^{e}{}_{(ac);e}+H_{(a|e|}{}^{e}{}_{;c)})g_{bd}+(H^{e}{}_{(bd);e}+H_{(b|e|}{}^{e}{}_{;d)})g_{ac}\\&\quad -(H^{e}{}_{(ad);e}+H_{(a|e|}{}^{e}{}_{;d)})g_{bc}-(H^{e}{}_{(bc);e}+H_{(b|e|}{}^{e}{}_{;c)})g_{ad}-{\frac {2}{3}}H^{ef}{}_{f;e}(g_{ac}g_{bd}-g_{ad}g_{bc})\end{aligned}}} where C a b c d {\displaystyle C_{abcd}} is the Weyl tensor, the semicolon denotes the covariant derivative, and the subscripted parentheses indicate symmetrization. Although the above equations can be used to define the Lanczos tensor, they also show that it is not unique but rather has gauge freedom under an affine group. If Φ a {\displaystyle \Phi ^{a}} is an arbitrary vector field, then the Weyl–Lanczos equations are invariant under the gauge transformation H a b c ′ = H a b c + Φ [ a g b ] c {\displaystyle H'_{abc}=H_{abc}+\Phi _{[a}g_{b]c}} where the subscripted brackets indicate antisymmetrization. An often convenient choice is the Lanczos algebraic gauge, Φ a = − 2 3 H a b b , {\displaystyle \Phi _{a}=-{\frac {2}{3}}H_{ab}{}^{b},} which sets H a b ′ b = 0. {\displaystyle H'_{ab}{}^{b}=0.} The gauge can be further restricted through the Lanczos differential gauge H a b c ; c = 0 {\displaystyle H_{ab}{}^{c}{}_{;c}=0} . These gauge choices reduce the Weyl–Lanczos equations to the simpler form C a b c d = H a b c ; d + H c d a ; b + H b a d ; c + H d c b ; a + H e a c ; e g b d + H e b d ; e g a c − H e a d ; e g b c − H e b c ; e g a d . 
{\displaystyle C_{abcd}=H_{abc;d}+H_{cda;b}+H_{bad;c}+H_{dcb;a}+H^{e}{}_{ac;e}g_{bd}+H^{e}{}_{bd;e}g_{ac}-H^{e}{}_{ad;e}g_{bc}-H^{e}{}_{bc;e}g_{ad}.} == Wave equation == The Lanczos potential tensor satisfies a wave equation ◻ H a b c = J a b c − 2 R c d H a b d + R a d H b c d + R b d H a c d + ( H d b e g a c − H d a e g b c ) R d e + 1 2 R H a b c , {\displaystyle {\begin{aligned}\Box H_{abc}=&\;J_{abc}\\&{}-2{R_{c}}^{d}H_{abd}+{R_{a}}^{d}H_{bcd}+{R_{b}}^{d}H_{acd}\\&{}+\left(H_{dbe}g_{ac}-H_{dae}g_{bc}\right)R^{de}+{\frac {1}{2}}RH_{abc},\end{aligned}}} where ◻ {\displaystyle \Box } is the d'Alembert operator and J a b c = R c a ; b − R c b ; a − 1 6 ( g c a R ; b − g c b R ; a ) {\displaystyle J_{abc}=R_{ca;b}-R_{cb;a}-{\frac {1}{6}}\left(g_{ca}R_{;b}-g_{cb}R_{;a}\right)} is known as the Cotton tensor. Since the Cotton tensor depends only on covariant derivatives of the Ricci tensor, it can perhaps be interpreted as a kind of matter current. The additional self-coupling terms have no direct electromagnetic equivalent. These self-coupling terms, however, do not affect the vacuum solutions, where the Ricci tensor vanishes and the curvature is described entirely by the Weyl tensor. Thus in vacuum, the Einstein field equations are equivalent to the homogeneous wave equation ◻ H a b c = 0 , {\displaystyle \Box H_{abc}=0,} in perfect analogy to the vacuum wave equation ◻ A a = 0 {\displaystyle \Box A_{a}=0} of the electromagnetic four-potential. This shows a formal similarity between gravitational waves and electromagnetic waves, with the Lanczos tensor well-suited for studying gravitational waves. In the weak field approximation where g a b = η a b + h a b {\displaystyle g_{ab}=\eta _{ab}+h_{ab}} , a convenient form for the Lanczos tensor in the Lanczos gauge is 4 H a b c ≈ h a c , b − h b c , a − 1 6 ( η a c h d d , b − η b c h d d , a ) . {\displaystyle 4H_{abc}\approx h_{ac,b}-h_{bc,a}-{\frac {1}{6}}(\eta _{ac}{h^{d}}_{d,b}-\eta _{bc}{h^{d}}_{d,a}).} == Example == The most basic nontrivial case for expressing the Lanczos tensor is, of course, for the Schwarzschild metric. The simplest, explicit component representation in natural units for the Lanczos tensor in this case is H t r t = G M r 2 {\displaystyle H_{trt}={\frac {GM}{r^{2}}}} with all other components vanishing up to symmetries. This form, however, is not in the Lanczos gauge. The nonvanishing terms of the Lanczos tensor in the Lanczos gauge are H t r t = 2 G M 3 r 2 {\displaystyle H_{trt}={\frac {2GM}{3r^{2}}}} H r θ θ = − G M 3 ( 1 − 2 G M / r ) {\displaystyle H_{r\theta \theta }={\frac {-GM}{3(1-2GM/r)}}} H r ϕ ϕ = − G M sin 2 ⁡ θ 3 ( 1 − 2 G M / r ) {\displaystyle H_{r\phi \phi }={\frac {-GM\sin ^{2}\theta }{3(1-2GM/r)}}} It is further possible to show, even in this simple case, that the Lanczos tensor cannot in general be reduced to a linear combination of the spin coefficients of the Newman–Penrose formalism, which attests to the Lanczos tensor's fundamental nature. Similar calculations have been used to construct arbitrary Petrov type D solutions. == See also == Bach tensor Ricci calculus Schouten tensor tetradic Palatini action Self-dual Palatini action == References == == External links == Peter O'Donnell, Introduction To 2-Spinors In General Relativity. World Scientific, 2003.
Wikipedia/Lanczos_tensor
In the physics of gauge theories, gauge fixing (also called choosing a gauge) denotes a mathematical procedure for coping with redundant degrees of freedom in field variables. By definition, a gauge theory represents each physically distinct configuration of the system as an equivalence class of detailed local field configurations. Any two detailed configurations in the same equivalence class are related by a certain transformation, equivalent to a shear along unphysical axes in configuration space. Most of the quantitative physical predictions of a gauge theory can only be obtained under a coherent prescription for suppressing or ignoring these unphysical degrees of freedom. Although the unphysical axes in the space of detailed configurations are a fundamental property of the physical model, there is no special set of directions "perpendicular" to them. Hence there is an enormous amount of freedom involved in taking a "cross section" representing each physical configuration by a particular detailed configuration (or even a weighted distribution of them). Judicious gauge fixing can simplify calculations immensely, but becomes progressively harder as the physical model becomes more realistic; its application to quantum field theory is fraught with complications related to renormalization, especially when the computation is continued to higher orders. Historically, the search for logically consistent and computationally tractable gauge fixing procedures, and efforts to demonstrate their equivalence in the face of a bewildering variety of technical difficulties, has been a major driver of mathematical physics from the late nineteenth century to the present. == Gauge freedom == The archetypical gauge theory is the Heaviside–Gibbs formulation of continuum electrodynamics in terms of an electromagnetic four-potential, which is presented here in space/time asymmetric Heaviside notation. The electric field E and magnetic field B of Maxwell's equations contain only "physical" degrees of freedom, in the sense that every mathematical degree of freedom in an electromagnetic field configuration has a separately measurable effect on the motions of test charges in the vicinity. These "field strength" variables can be expressed in terms of the electric scalar potential φ {\displaystyle \varphi } and the magnetic vector potential A through the relations: E = − ∇ φ − ∂ A ∂ t , B = ∇ × A . {\displaystyle {\mathbf {E} }=-\nabla \varphi -{\frac {\partial {\mathbf {A} }}{\partial t}}\,,\quad {\mathbf {B} }=\nabla \times {\mathbf {A} }.} If the transformation is made, then B remains unchanged, since (with the identity ∇ × ∇ ψ = 0 {\displaystyle \nabla \times \nabla \psi =0} ) B = ∇ × ( A + ∇ ψ ) = ∇ × A . {\displaystyle {\mathbf {B} }=\nabla \times ({\mathbf {A} }+\nabla \psi )=\nabla \times {\mathbf {A} }.} However, this transformation changes E according to E = − ∇ φ − ∂ A ∂ t − ∇ ∂ ψ ∂ t = − ∇ ( φ + ∂ ψ ∂ t ) − ∂ A ∂ t . {\displaystyle \mathbf {E} =-\nabla \varphi -{\frac {\partial {\mathbf {A} }}{\partial t}}-\nabla {\frac {\partial {\psi }}{\partial t}}=-\nabla \left(\varphi +{\frac {\partial {\psi }}{\partial t}}\right)-{\frac {\partial {\mathbf {A} }}{\partial t}}.} If another change is made then E also remains the same. Hence, the E and B fields are unchanged if one takes any function ψ(r, t) and simultaneously transforms A and φ via the transformations (1) and (2). 
A particular choice of the scalar and vector potentials is a gauge (more precisely, gauge potential) and a scalar function ψ used to change the gauge is called a gauge function. The existence of arbitrary numbers of gauge functions ψ(r, t) corresponds to the U(1) gauge freedom of this theory. Gauge fixing can be done in many ways, some of which we exhibit below. Although classical electromagnetism is now often spoken of as a gauge theory, it was not originally conceived in these terms. The motion of a classical point charge is affected only by the electric and magnetic field strengths at that point, and the potentials can be treated as a mere mathematical device for simplifying some proofs and calculations. Not until the advent of quantum field theory could it be said that the potentials themselves are part of the physical configuration of a system. The earliest consequence to be accurately predicted and experimentally verified was the Aharonov–Bohm effect, which has no classical counterpart. Nevertheless, gauge freedom is still true in these theories. For example, the Aharonov–Bohm effect depends on a line integral of A around a closed loop, and this integral is not changed by A → A + ∇ ψ . {\displaystyle \mathbf {A} \rightarrow \mathbf {A} +\nabla \psi \,.} Gauge fixing in non-abelian gauge theories, such as Yang–Mills theory and general relativity, is a rather more complicated topic; for details see Gribov ambiguity, Faddeev–Popov ghost, and frame bundle. === An illustration === As an illustration of gauge fixing, one may look at a cylindrical rod and attempt to tell whether it is twisted. If the rod is perfectly cylindrical, then the circular symmetry of the cross section makes it impossible to tell whether or not it is twisted. However, if there were a straight line drawn along the length of the rod, then one could easily say whether or not there is a twist by looking at the state of the line. Drawing a line is gauge fixing. Drawing the line spoils the gauge symmetry, i.e., the circular symmetry U(1) of the cross section at each point of the rod. The line is the equivalent of a gauge function; it need not be straight. Almost any line is a valid gauge fixing, i.e., there is a large gauge freedom. In summary, to tell whether the rod is twisted, the gauge must be known. Physical quantities, such as the energy of the torsion, do not depend on the gauge, i.e., they are gauge invariant. == Coulomb gauge == The Coulomb gauge (also known as the transverse gauge) is used in quantum chemistry and condensed matter physics and is defined by the gauge condition (more precisely, gauge fixing condition) ∇ ⋅ A ( r , t ) = 0 . {\displaystyle \nabla \cdot {\mathbf {A} }(\mathbf {r} ,t)=0\,.} It is particularly useful for "semi-classical" calculations in quantum mechanics, in which the vector potential is quantized but the Coulomb interaction is not. The Coulomb gauge has a number of properties: == Lorenz gauge == The Lorenz gauge is given, in SI units, by: ∇ ⋅ A + 1 c 2 ∂ φ ∂ t = 0 {\displaystyle \nabla \cdot {\mathbf {A} }+{\frac {1}{c^{2}}}{\frac {\partial \varphi }{\partial t}}=0} and in Gaussian units by: ∇ ⋅ A + 1 c ∂ φ ∂ t = 0. {\displaystyle \nabla \cdot {\mathbf {A} }+{\frac {1}{c}}{\frac {\partial \varphi }{\partial t}}=0.} This may be rewritten as: ∂ μ A μ = 0. 
{\displaystyle \partial _{\mu }A^{\mu }=0.} where A μ = [ 1 c φ , A ] {\displaystyle A^{\mu }=\left[\,{\tfrac {1}{c}}\varphi ,\,\mathbf {A} \,\right]} is the electromagnetic four-potential, ∂μ the 4-gradient [using the metric signature (+, −, −, −)]. It is unique among the constraint gauges in retaining manifest Lorentz invariance. Note, however, that this gauge was originally named after the Danish physicist Ludvig Lorenz and not after Hendrik Lorentz; it is often misspelled "Lorentz gauge". (Neither was the first to use it in calculations; it was introduced in 1888 by George Francis FitzGerald.) The Lorenz gauge leads to the following inhomogeneous wave equations for the potentials: 1 c 2 ∂ 2 φ ∂ t 2 − ∇ 2 φ = ρ ε 0 {\displaystyle {\frac {1}{c^{2}}}{\frac {\partial ^{2}\varphi }{\partial t^{2}}}-\nabla ^{2}{\varphi }={\frac {\rho }{\varepsilon _{0}}}} 1 c 2 ∂ 2 A ∂ t 2 − ∇ 2 A = μ 0 J {\displaystyle {\frac {1}{c^{2}}}{\frac {\partial ^{2}\mathbf {A} }{\partial t^{2}}}-\nabla ^{2}{\mathbf {A} }=\mu _{0}\mathbf {J} } It can be seen from these equations that, in the absence of current and charge, the solutions are potentials which propagate at the speed of light. The Lorenz gauge is incomplete in some sense: there remains a subspace of gauge transformations which can also preserve the constraint. These remaining degrees of freedom correspond to gauge functions which satisfy the wave equation ∂ 2 ψ ∂ t 2 = c 2 ∇ 2 ψ {\displaystyle {\frac {\partial ^{2}\psi }{\partial t^{2}}}=c^{2}\nabla ^{2}\psi } These remaining gauge degrees of freedom propagate at the speed of light. To obtain a fully fixed gauge, one must add boundary conditions along the light cone of the experimental region. Maxwell's equations in the Lorenz gauge simplify to ∂ μ ∂ μ A ν = μ 0 j ν {\displaystyle \partial _{\mu }\partial ^{\mu }A^{\nu }=\mu _{0}j^{\nu }} where j ν = [ c ρ , j ] {\displaystyle j^{\nu }=\left[\,c\,\rho ,\,\mathbf {j} \,\right]} is the four-current. Two solutions of these equations for the same current configuration differ by a solution of the vacuum wave equation ∂ μ ∂ μ A ν = 0. {\displaystyle \partial _{\mu }\partial ^{\mu }A^{\nu }=0.} In this form it is clear that the components of the potential separately satisfy the Klein–Gordon equation, and hence that the Lorenz gauge condition allows transversely, longitudinally, and "time-like" polarized waves in the four-potential. The transverse polarizations correspond to classical radiation, i.e., transversely polarized waves in the field strength. To suppress the "unphysical" longitudinal and time-like polarization states, which are not observed in experiments at classical distance scales, one must also employ auxiliary constraints known as Ward identities. Classically, these identities are equivalent to the continuity equation ∂ μ j μ = 0. {\displaystyle \partial _{\mu }j^{\mu }=0.} Many of the differences between classical and quantum electrodynamics can be accounted for by the role that the longitudinal and time-like polarizations play in interactions between charged particles at microscopic distances. == Rξ gauges == The Rξ gauges are a generalization of the Lorenz gauge applicable to theories expressed in terms of an action principle with Lagrangian density L {\displaystyle {\mathcal {L}}} . 
Instead of fixing the gauge by constraining the gauge field a priori, via an auxiliary equation, one adds a gauge breaking term to the "physical" (gauge invariant) Lagrangian δ L = − ( ∂ μ A μ ) 2 2 ξ {\displaystyle \delta {\mathcal {L}}=-{\frac {\left(\partial _{\mu }A^{\mu }\right)^{2}}{2\xi }}} The choice of the parameter ξ determines the choice of gauge. The Rξ Landau gauge is classically equivalent to Lorenz gauge: it is obtained in the limit ξ → 0 but postpones taking that limit until after the theory has been quantized. It improves the rigor of certain existence and equivalence proofs. Most quantum field theory computations are simplest in the Feynman–'t Hooft gauge, in which ξ = 1; a few are more tractable in other Rξ gauges, such as the Yennie gauge ξ = 3 (named afer Donald R. Yennie). An equivalent formulation of Rξ gauge uses an auxiliary field, a scalar field B with no independent dynamics: δ L = B ∂ μ A μ + ξ 2 B 2 {\displaystyle \delta {\mathcal {L}}=B\,\partial _{\mu }A^{\mu }+{\frac {\xi }{2}}B^{2}} The auxiliary field, sometimes called a Nakanishi–Lautrup field, can be eliminated by "completing the square" to obtain the previous form. From a mathematical perspective the auxiliary field is a variety of Goldstone boson, and its use has advantages when identifying the asymptotic states of the theory, and especially when generalizing beyond QED. Historically, the use of Rξ gauges was a significant technical advance in extending quantum electrodynamics computations beyond one-loop order. In addition to retaining manifest Lorentz invariance, the Rξ prescription breaks the symmetry under local gauge transformations while preserving the ratio of functional measures of any two physically distinct gauge configurations. This permits a change of variables in which infinitesimal perturbations along "physical" directions in configuration space are entirely uncoupled from those along "unphysical" directions, allowing the latter to be absorbed into the physically meaningless normalization of the functional integral. When ξ is finite, each physical configuration (orbit of the group of gauge transformations) is represented not by a single solution of a constraint equation but by a Gaussian distribution centered on the extremum of the gauge breaking term. In terms of the Feynman rules of the gauge-fixed theory, this appears as a contribution to the photon propagator for internal lines from virtual photons of unphysical polarization. The photon propagator, which is the multiplicative factor corresponding to an internal photon in the Feynman diagram expansion of a QED calculation, contains a factor gμν corresponding to the Minkowski metric. An expansion of this factor as a sum over photon polarizations involves terms containing all four possible polarizations. Transversely polarized radiation can be expressed mathematically as a sum over either a linearly or circularly polarized basis. Similarly, one can combine the longitudinal and time-like gauge polarizations to obtain "forward" and "backward" polarizations; these are a form of light-cone coordinates in which the metric is off-diagonal. An expansion of the gμν factor in terms of circularly polarized (spin ±1) and light-cone coordinates is called a spin sum. Spin sums can be very helpful both in simplifying expressions and in obtaining a physical understanding of the experimental effects associated with different terms in a theoretical calculation. 
Richard Feynman used arguments along approximately these lines largely to justify calculation procedures that produced consistent, finite, high precision results for important observable parameters such as the anomalous magnetic moment of the electron. Although his arguments sometimes lacked mathematical rigor even by physicists' standards and glossed over details such as the derivation of Ward–Takahashi identities of the quantum theory, his calculations worked, and Freeman Dyson soon demonstrated that his method was substantially equivalent to those of Julian Schwinger and Sin-Itiro Tomonaga, with whom Feynman shared the 1965 Nobel Prize in Physics. Forward and backward polarized radiation can be omitted in the asymptotic states of a quantum field theory (see Ward–Takahashi identity). For this reason, and because their appearance in spin sums can be seen as a mere mathematical device in QED (much like the electromagnetic four-potential in classical electrodynamics), they are often spoken of as "unphysical". But unlike the constraint-based gauge fixing procedures above, the Rξ gauge generalizes well to non-abelian gauge groups such as the SU(3) of QCD. The couplings between physical and unphysical perturbation axes do not entirely disappear under the corresponding change of variables; to obtain correct results, one must account for the non-trivial Jacobian of the embedding of gauge freedom axes within the space of detailed configurations. This leads to the explicit appearance of forward and backward polarized gauge bosons in Feynman diagrams, along with Faddeev–Popov ghosts, which are even more "unphysical" in that they violate the spin–statistics theorem. The relationship between these entities, and the reasons why they do not appear as particles in the quantum mechanical sense, becomes more evident in the BRST formalism of quantization. == Maximal abelian gauge == In any non-abelian gauge theory, any maximal abelian gauge is an incomplete gauge which fixes the gauge freedom outside of the maximal abelian subgroup. Examples are For SU(2) gauge theory in D dimensions, the maximal abelian subgroup is a U(1) subgroup. If this is chosen to be the one generated by the Pauli matrix σ3, then the maximal abelian gauge is that which maximizes the function ∫ d D x [ ( A μ 1 ) 2 + ( A μ 2 ) 2 ] , {\displaystyle \int d^{D}x\left[\left(A_{\mu }^{1}\right)^{2}+\left(A_{\mu }^{2}\right)^{2}\right]\,,} where A μ = A μ a σ a . {\displaystyle {\mathbf {A} }_{\mu }=A_{\mu }^{a}\sigma _{a}\,.} For SU(3) gauge theory in D dimensions, the maximal abelian subgroup is a U(1)×U(1) subgroup. If this is chosen to be the one generated by the Gell-Mann matrices λ3 and λ8, then the maximal abelian gauge is that which maximizes the function ∫ d D x [ ( A μ 1 ) 2 + ( A μ 2 ) 2 + ( A μ 4 ) 2 + ( A μ 5 ) 2 + ( A μ 6 ) 2 + ( A μ 7 ) 2 ] , {\displaystyle \int d^{D}x\left[\left(A_{\mu }^{1}\right)^{2}+\left(A_{\mu }^{2}\right)^{2}+\left(A_{\mu }^{4}\right)^{2}+\left(A_{\mu }^{5}\right)^{2}+\left(A_{\mu }^{6}\right)^{2}+\left(A_{\mu }^{7}\right)^{2}\right]\,,} where A μ = A μ a λ a {\displaystyle {\mathbf {A} }_{\mu }=A_{\mu }^{a}\lambda _{a}} This applies regularly in higher algebras (of groups in the algebras), for example the Clifford Algebra and as it is regularly. == Less commonly used gauges == Various other gauges, which can be beneficial in specific situations have appeared in the literature. 
=== Weyl gauge === The Weyl gauge (also known as the Hamiltonian or temporal gauge) is an incomplete gauge obtained by the choice φ = 0 {\displaystyle \varphi =0} It is named after Hermann Weyl. It eliminates the negative-norm ghost, lacks manifest Lorentz invariance, and requires longitudinal photons and a constraint on states. === Multipolar gauge === The gauge condition of the multipolar gauge (also known as the line gauge, point gauge or Poincaré gauge (named after Henri Poincaré)) is: r ⋅ A = 0. {\displaystyle \mathbf {r} \cdot \mathbf {A} =0.} This is another gauge in which the potentials can be expressed in a simple way in terms of the instantaneous fields A ( r , t ) = − r × ∫ 0 1 B ( u r , t ) u d u {\displaystyle \mathbf {A} (\mathbf {r} ,t)=-\mathbf {r} \times \int _{0}^{1}\mathbf {B} (u\mathbf {r} ,t)u\,du} φ ( r , t ) = − r ⋅ ∫ 0 1 E ( u r , t ) d u . {\displaystyle \varphi (\mathbf {r} ,t)=-\mathbf {r} \cdot \int _{0}^{1}\mathbf {E} (u\mathbf {r} ,t)du.} === Fock–Schwinger gauge === The gauge condition of the Fock–Schwinger gauge (named after Vladimir Fock and Julian Schwinger; sometimes also called the relativistic Poincaré gauge) is: x μ A μ = 0 {\displaystyle x^{\mu }A_{\mu }=0} where xμ is the position four-vector. === Dirac gauge === The nonlinear Dirac gauge condition (named after Paul Dirac) is: A μ A μ = k 2 {\displaystyle A_{\mu }A^{\mu }=k^{2}} == References == == Further reading == Landau, Lev; Lifshitz, Evgeny (2007). The classical theory of fields. Amsterdam: Elsevier Butterworth Heinemann. ISBN 978-0-7506-2768-9. Jackson, J. D. (1999). Classical Electrodynamics (3rd ed.). New York: Wiley. ISBN 0-471-30932-X.
Wikipedia/Gauge_transformation
Classical electromagnetism or classical electrodynamics is a branch of physics focused on the study of interactions between electric charges and currents using an extension of the classical Newtonian model. It is, therefore, a classical field theory. The theory provides a description of electromagnetic phenomena whenever the relevant length scales and field strengths are large enough that quantum mechanical effects are negligible. For small distances and low field strengths, such interactions are better described by quantum electrodynamics which is a quantum field theory. == History == The physical phenomena that electromagnetism describes have been studied as separate fields since antiquity. For example, there were many advances in the field of optics centuries before light was understood to be an electromagnetic wave. However, the theory of electromagnetism, as it is currently understood, grew out of Michael Faraday's experiments suggesting the existence of an electromagnetic field and James Clerk Maxwell's use of differential equations to describe it in his A Treatise on Electricity and Magnetism (1873). The development of electromagnetism in Europe included the development of methods to measure voltage, current, capacitance, and resistance. Detailed historical accounts are given by Wolfgang Pauli, E. T. Whittaker, Abraham Pais, and Bruce J. Hunt. == Lorentz force == The electromagnetic field exerts the following force (often called the Lorentz force) on charged particles: F = q ( E + v × B ) {\displaystyle \mathbf {F} =q(\mathbf {E} +\mathbf {v} \times \mathbf {B} )} where all boldfaced quantities are vectors: F is the force that a particle with charge q experiences, E is the electric field at the location of the particle, v is the velocity of the particle, B is the magnetic field at the location of the particle. The above equation illustrates that the Lorentz force is the sum of two vectors. One is the cross product of the velocity and magnetic field vectors. Based on the properties of the cross product, this produces a vector that is perpendicular to both the velocity and magnetic field vectors. The other vector is in the same direction as the electric field. The sum of these two vectors is the Lorentz force. Although the equation appears to suggest that the electric and magnetic fields are independent, the equation can be rewritten in term of four-current (instead of charge) and a single electromagnetic tensor that represents the combined field ( F μ ν {\displaystyle F^{\mu \nu }} ): f α = F α β J β . {\displaystyle f_{\alpha }=F_{\alpha \beta }J^{\beta }.\!} == Electric field == The electric field E is defined such that, on a stationary charge: F = q 0 E {\displaystyle \mathbf {F} =q_{0}\mathbf {E} } where q0 is what is known as a test charge and F is the force on that charge. The size of the charge does not really matter, as long as it is small enough not to influence the electric field by its mere presence. What is plain from this definition, though, is that the unit of E is N/C (newtons per coulomb). This unit is equal to V/m (volts per meter); see below. In electrostatics, where charges are not moving, around a distribution of point charges, the forces determined from Coulomb's law may be summed. 
The result after dividing by q0 is: E ( r ) = 1 4 π ε 0 ∑ i = 1 n q i ( r − r i ) | r − r i | 3 {\displaystyle \mathbf {E(r)} ={\frac {1}{4\pi \varepsilon _{0}}}\sum _{i=1}^{n}{\frac {q_{i}\left(\mathbf {r} -\mathbf {r} _{i}\right)}{\left|\mathbf {r} -\mathbf {r} _{i}\right|^{3}}}} where n is the number of charges, qi is the amount of charge associated with the ith charge, ri is the position of the ith charge, r is the position where the electric field is being determined, and ε0 is the electric constant. If the field is instead produced by a continuous distribution of charge, the summation becomes an integral: E ( r ) = 1 4 π ε 0 ∫ ρ ( r ′ ) ( r − r ′ ) | r − r ′ | 3 d 3 r ′ {\displaystyle \mathbf {E(r)} ={\frac {1}{4\pi \varepsilon _{0}}}\int {\frac {\rho (\mathbf {r'} )\left(\mathbf {r} -\mathbf {r'} \right)}{\left|\mathbf {r} -\mathbf {r'} \right|^{3}}}\mathrm {d^{3}} \mathbf {r'} } where ρ ( r ′ ) {\displaystyle \rho (\mathbf {r'} )} is the charge density and r − r ′ {\displaystyle \mathbf {r} -\mathbf {r'} } is the vector that points from the volume element d 3 r ′ {\displaystyle \mathrm {d^{3}} \mathbf {r'} } to the point in space where E is being determined. Both of the above equations are cumbersome, especially if one wants to determine E as a function of position. A scalar function called the electric potential can help. Electric potential, also called voltage (the units for which are the volt), is defined by the line integral φ ( r ) = − ∫ C E ⋅ d l {\displaystyle \varphi \mathbf {(r)} =-\int _{C}\mathbf {E} \cdot \mathrm {d} \mathbf {l} } where φ ( r ) {\displaystyle \varphi ({\textbf {r}})} is the electric potential, and C is the path over which the integral is being taken. Unfortunately, this definition has a caveat. From Maxwell's equations, it is clear that ∇ × E is not always zero, and hence the scalar potential alone is insufficient to define the electric field exactly. As a result, one must add a correction factor, which is generally done by subtracting the time derivative of the A vector potential described below. Whenever the charges are quasistatic, however, this condition will be essentially met. From the definition of charge, one can easily show that the electric potential of a point charge as a function of position is: φ ( r ) = 1 4 π ε 0 ∑ i = 1 n q i | r − r i | {\displaystyle \varphi \mathbf {(r)} ={\frac {1}{4\pi \varepsilon _{0}}}\sum _{i=1}^{n}{\frac {q_{i}}{\left|\mathbf {r} -\mathbf {r} _{i}\right|}}} where q is the point charge's charge, r is the position at which the potential is being determined, and ri is the position of each point charge. The potential for a continuous distribution of charge is: φ ( r ) = 1 4 π ε 0 ∫ ρ ( r ′ ) | r − r ′ | d 3 r ′ {\displaystyle \varphi \mathbf {(r)} ={\frac {1}{4\pi \varepsilon _{0}}}\int {\frac {\rho (\mathbf {r'} )}{|\mathbf {r} -\mathbf {r'} |}}\,\mathrm {d^{3}} \mathbf {r'} } where ρ ( r ′ ) {\displaystyle \rho (\mathbf {r'} )} is the charge density, and r − r ′ {\displaystyle \mathbf {r} -\mathbf {r'} } is the distance from the volume element d 3 r ′ {\displaystyle \mathrm {d^{3}} \mathbf {r'} } to point in space where φ is being determined. The scalar φ will add to other potentials as a scalar. This makes it relatively easy to break complex problems down into simple parts and add their potentials. Taking the definition of φ backwards, we see that the electric field is just the negative gradient (the del operator) of the potential. Or: E ( r ) = − ∇ φ ( r ) . 
{\displaystyle \mathbf {E(r)} =-\nabla \varphi \mathbf {(r)} .} From this formula it is clear that E can be expressed in V/m (volts per meter). == Electromagnetic waves == A changing electromagnetic field propagates away from its origin in the form of a wave. These waves travel in vacuum at the speed of light and exist in a wide spectrum of wavelengths. Examples of the dynamic fields of electromagnetic radiation (in order of increasing frequency): radio waves, microwaves, light (infrared, visible light and ultraviolet), x-rays and gamma rays. In the field of particle physics this electromagnetic radiation is the manifestation of the electromagnetic interaction between charged particles. == General field equations == As simple and satisfying as Coulomb's equation may be, it is not entirely correct in the context of classical electromagnetism. Problems arise because changes in charge distributions require a non-zero amount of time to be "felt" elsewhere (required by special relativity). For the fields of general charge distributions, the retarded potentials can be computed and differentiated accordingly to yield Jefimenko's equations. Retarded potentials can also be derived for point charges, and the equations are known as the Liénard–Wiechert potentials. The scalar potential is: φ = 1 4 π ε 0 q | r − r q ( t r e t ) | − v q ( t r e t ) c ⋅ ( r − r q ( t r e t ) ) {\displaystyle \varphi ={\frac {1}{4\pi \varepsilon _{0}}}{\frac {q}{\left|\mathbf {r} -\mathbf {r} _{q}(t_{\rm {ret}})\right|-{\frac {\mathbf {v} _{q}(t_{\rm {ret}})}{c}}\cdot (\mathbf {r} -\mathbf {r} _{q}(t_{\rm {ret}}))}}} where q {\displaystyle q} is the point charge's charge and r {\displaystyle {\textbf {r}}} is the position. r q {\displaystyle {\textbf {r}}_{q}} and v q {\displaystyle {\textbf {v}}_{q}} are the position and velocity of the charge, respectively, as a function of retarded time. The vector potential is similar: A = μ 0 4 π q v q ( t r e t ) | r − r q ( t r e t ) | − v q ( t r e t ) c ⋅ ( r − r q ( t r e t ) ) . {\displaystyle \mathbf {A} ={\frac {\mu _{0}}{4\pi }}{\frac {q\mathbf {v} _{q}(t_{\rm {ret}})}{\left|\mathbf {r} -\mathbf {r} _{q}(t_{\rm {ret}})\right|-{\frac {\mathbf {v} _{q}(t_{\rm {ret}})}{c}}\cdot (\mathbf {r} -\mathbf {r} _{q}(t_{\rm {ret}}))}}.} These can then be differentiated accordingly to obtain the complete field equations for a moving point particle. == Models == Branches of classical electromagnetism such as optics, electrical and electronic engineering consist of a collection of relevant mathematical models of different degrees of simplification and idealization to enhance the understanding of specific electrodynamics phenomena. An electrodynamics phenomenon is determined by the particular fields, specific densities of electric charges and currents, and the particular transmission medium. Since there are infinitely many of them, in modeling there is a need for some typical, representative (a) electrical charges and currents, e.g. moving pointlike charges and electric and magnetic dipoles, electric currents in a conductor etc.; (b) electromagnetic fields, e.g. voltages, the Liénard–Wiechert potentials, the monochromatic plane waves, optical rays, radio waves, microwaves, infrared radiation, visible light, ultraviolet radiation, X-rays, gamma rays etc.; (c) transmission media, e.g. 
electronic components, antennas, electromagnetic waveguides, flat mirrors, mirrors with curved surfaces convex lenses, concave lenses; resistors, inductors, capacitors, switches; wires, electric and optical cables, transmission lines, integrated circuits etc.; all of which have only few variable characteristics. == See also == Mathematical descriptions of the electromagnetic field Weber electrodynamics Wheeler–Feynman absorber theory == Further reading == Fundamental physical aspects of classical electrodynamics are presented in many textbooks. For the undergraduate level, textbooks like The Feynman Lectures on Physics, Electricity and Magnetism, and Introduction to Electrodynamics are considered as classic references and for the graduate level, textbooks like Classical Electricity and Magnetism, Classical Electrodynamics, and Course of Theoretical Physics are considered as classic references. == References ==
Wikipedia/Classical_electrodynamics
Particle physics or high-energy physics is the study of fundamental particles and forces that constitute matter and radiation. The field also studies combinations of elementary particles up to the scale of protons and neutrons, while the study of combinations of protons and neutrons is called nuclear physics. The fundamental particles in the universe are classified in the Standard Model as fermions (matter particles) and bosons (force-carrying particles). There are three generations of fermions, although ordinary matter is made only from the first fermion generation. The first generation consists of up and down quarks which form protons and neutrons, and electrons and electron neutrinos. The three fundamental interactions known to be mediated by bosons are electromagnetism, the weak interaction, and the strong interaction. Quarks cannot exist on their own but form hadrons. Hadrons that contain an odd number of quarks are called baryons and those that contain an even number are called mesons. Two baryons, the proton and the neutron, make up most of the mass of ordinary matter. Mesons are unstable and the longest-lived last for only a few hundredths of a microsecond. They occur after collisions between particles made of quarks, such as fast-moving protons and neutrons in cosmic rays. Mesons are also produced in cyclotrons or other particle accelerators. Particles have corresponding antiparticles with the same mass but with opposite electric charges. For example, the antiparticle of the electron is the positron. The electron has a negative electric charge, the positron has a positive charge. These antiparticles can theoretically form a corresponding form of matter called antimatter. Some particles, such as the photon, are their own antiparticle. These elementary particles are excitations of the quantum fields that also govern their interactions. The dominant theory explaining these fundamental particles and fields, along with their dynamics, is called the Standard Model. The reconciliation of gravity to the current particle physics theory is not solved; many theories have addressed this problem, such as loop quantum gravity, string theory and supersymmetry theory. Experimental particle physics is the study of these particles in radioactive processes and in particle accelerators such as the Large Hadron Collider. Theoretical particle physics is the study of these particles in the context of cosmology and quantum theory. The two are closely interrelated: the Higgs boson was postulated theoretically before being confirmed by experiments. == History == The idea that all matter is fundamentally composed of elementary particles dates from at least the 6th century BC. In the 19th century, John Dalton, through his work on stoichiometry, concluded that each element of nature was composed of a single, unique type of particle. The word atom, after the Greek word atomos meaning "indivisible", has since then denoted the smallest particle of a chemical element, but physicists later discovered that atoms are not, in fact, the fundamental particles of nature, but are conglomerates of even smaller particles, such as the electron. The early 20th century explorations of nuclear physics and quantum physics led to proofs of nuclear fission in 1939 by Lise Meitner (based on experiments by Otto Hahn), and nuclear fusion by Hans Bethe in that same year; both discoveries also led to the development of nuclear weapons. 
Bethe's 1947 calculation of the Lamb shift is credited with having "opened the way to the modern era of particle physics". Throughout the 1950s and 1960s, a bewildering variety of particles was found in collisions of particles from beams of increasingly high energy. It was referred to informally as the "particle zoo". Important discoveries such as the CP violation by James Cronin and Val Fitch brought new questions to matter-antimatter imbalance. After the formulation of the Standard Model during the 1970s, physicists clarified the origin of the particle zoo. The large number of particles was explained as combinations of a (relatively) small number of more fundamental particles and framed in the context of quantum field theories. This reclassification marked the beginning of modern particle physics. == Standard Model == The current state of the classification of all elementary particles is explained by the Standard Model, which gained widespread acceptance in the mid-1970s after experimental confirmation of the existence of quarks. It describes the strong, weak, and electromagnetic fundamental interactions, using mediating gauge bosons. The species of gauge bosons are eight gluons, W−, W+ and Z bosons, and the photon. The Standard Model also contains 24 fundamental fermions (12 particles and their associated anti-particles), which are the constituents of all matter. Finally, the Standard Model also predicted the existence of a type of boson known as the Higgs boson. On 4 July 2012, physicists with the Large Hadron Collider at CERN announced they had found a new particle that behaves similarly to what is expected from the Higgs boson. The Standard Model, as currently formulated, has 61 elementary particles. Those elementary particles can combine to form composite particles, accounting for the hundreds of other species of particles that have been discovered since the 1960s. The Standard Model has been found to agree with almost all the experimental tests conducted to date. However, most particle physicists believe that it is an incomplete description of nature and that a more fundamental theory awaits discovery (See Theory of Everything). In recent years, measurements of neutrino mass have provided the first experimental deviations from the Standard Model, since neutrinos do not have mass in the Standard Model. == Subatomic particles == Modern particle physics research is focused on subatomic particles, including atomic constituents, such as electrons, protons, and neutrons (protons and neutrons are composite particles called baryons, made of quarks), that are produced by radioactive and scattering processes; such particles are photons, neutrinos, and muons, as well as a wide range of exotic particles. All particles and their interactions observed to date can be described almost entirely by the Standard Model. Dynamics of particles are also governed by quantum mechanics; they exhibit wave–particle duality, displaying particle-like behaviour under certain experimental conditions and wave-like behaviour in others. In more technical terms, they are described by quantum state vectors in a Hilbert space, which is also treated in quantum field theory. Following the convention of particle physicists, the term elementary particles is applied to those particles that are, according to current understanding, presumed to be indivisible and not composed of other particles. === Quarks and leptons === Ordinary matter is made from first-generation quarks (up, down) and leptons (electron, electron neutrino). 
Collectively, quarks and leptons are called fermions, because they have a quantum spin of half-integers (−1/2, 1/2, 3/2, etc.). This causes the fermions to obey the Pauli exclusion principle, where no two particles may occupy the same quantum state. Quarks have fractional elementary electric charge (−1/3 or 2/3) and leptons have whole-numbered electric charge (0 or -1). Quarks also have color charge, which is labeled arbitrarily with no correlation to actual light color as red, green and blue. Because the interactions between the quarks store energy which can convert to other particles when the quarks are far apart enough, quarks cannot be observed independently. This is called color confinement. There are three known generations of quarks (up and down, strange and charm, top and bottom) and leptons (electron and its neutrino, muon and its neutrino, tau and its neutrino), with strong indirect evidence that a fourth generation of fermions does not exist. === Bosons === Bosons are the mediators or carriers of fundamental interactions, such as electromagnetism, the weak interaction, and the strong interaction. Electromagnetism is mediated by the photon, the quanta of light.: 29–30  The weak interaction is mediated by the W and Z bosons. The strong interaction is mediated by the gluon, which can link quarks together to form composite particles. Due to the aforementioned color confinement, gluons are never observed independently. The Higgs boson gives mass to the W and Z bosons via the Higgs mechanism – the gluon and photon are expected to be massless. All bosons have an integer quantum spin (0 and 1) and can have the same quantum state. === Antiparticles and color charge === Most aforementioned particles have corresponding antiparticles, which compose antimatter. Normal particles have positive lepton or baryon number, and antiparticles have these numbers negative. Most properties of corresponding antiparticles and particles are the same, with a few gets reversed; the electron's antiparticle, positron, has an opposite charge. To differentiate between antiparticles and particles, a plus or negative sign is added in superscript. For example, the electron and the positron are denoted e− and e+. However, in the case that the particle has a charge of 0 (equal to that of the antiparticle), the antiparticle is denoted with a line above the symbol. As such, an electron neutrino is νe, whereas its antineutrino is νe. When a particle and an antiparticle interact with each other, they are annihilated and convert to other particles. Some particles, such as the photon or gluon, have no antiparticles. Quarks and gluons additionally have color charges, which influences the strong interaction. Quark's color charges are called red, green and blue (though the particle itself have no physical color), and in antiquarks are called antired, antigreen and antiblue. The gluon can have eight color charges, which are the result of quarks' interactions to form composite particles (gauge symmetry SU(3)). === Composite === The neutrons and protons in the atomic nuclei are baryons – the neutron is composed of two down quarks and one up quark, and the proton is composed of two up quarks and one down quark. A baryon is composed of three quarks, and a meson is composed of two quarks (one normal, one anti). Baryons and mesons are collectively called hadrons. Quarks inside hadrons are governed by the strong interaction, thus are subjected to quantum chromodynamics (color charges). 
The bound quarks must have a total color charge that is neutral, or "white", by analogy with mixing the primary colors. More exotic hadrons can have other arrangements or numbers of quarks (tetraquarks, pentaquarks). An atom is made from protons, neutrons and electrons. By modifying the particles inside a normal atom, exotic atoms can be formed. A simple example is hydrogen-4.1, a helium-4 atom in which one of the electrons is replaced by a muon. === Hypothetical === The graviton is a hypothetical particle that can mediate the gravitational interaction, but it has not been detected or completely reconciled with current theories. Many other hypothetical particles have been proposed to address the limitations of the Standard Model. Notably, supersymmetric particles aim to solve the hierarchy problem, axions address the strong CP problem, and various other particles are proposed to explain the origins of dark matter and dark energy. == Experimental laboratories == The world's major particle physics laboratories are: Brookhaven National Laboratory (Long Island, New York, United States). Its main facility is the Relativistic Heavy Ion Collider (RHIC), which collides heavy ions such as gold ions and polarized protons. It is the world's first heavy ion collider, and the world's only polarized proton collider. Budker Institute of Nuclear Physics (Novosibirsk, Russia). Its main projects are now the electron-positron colliders VEPP-2000, operated since 2006, and VEPP-4, which started experiments in 1994. Earlier facilities include the first electron–electron beam–beam collider VEP-1, which conducted experiments from 1964 to 1968; the electron-positron collider VEPP-2, operated from 1965 to 1974; and its successor VEPP-2M, which performed experiments from 1974 to 2000. CERN (European Organization for Nuclear Research) (Franco-Swiss border, near Geneva, Switzerland). Its main project is now the Large Hadron Collider (LHC), which had its first beam circulation on 10 September 2008, and is now the world's most energetic collider of protons. It also became the most energetic collider of heavy ions after it began colliding lead ions. Earlier facilities include the Large Electron–Positron Collider (LEP), which was stopped on 2 November 2000 and then dismantled to give way for the LHC; and the Super Proton Synchrotron, which is being reused as a pre-accelerator for the LHC and for fixed-target experiments. DESY (Deutsches Elektronen-Synchrotron) (Hamburg, Germany). Its main facility was the Hadron Elektron Ring Anlage (HERA), which collided electrons and positrons with protons. The accelerator complex is now focused on the production of synchrotron radiation with PETRA III, FLASH and the European XFEL. Fermi National Accelerator Laboratory (Fermilab) (Batavia, Illinois, United States). Its main facility until 2011 was the Tevatron, which collided protons and antiprotons and was the highest-energy particle collider on Earth until the Large Hadron Collider surpassed it on 29 November 2009. Institute of High Energy Physics (IHEP) (Beijing, China). 
IHEP manages a number of China's major particle physics facilities, including the Beijing Electron–Positron Collider II(BEPC II), the Beijing Spectrometer (BES), the Beijing Synchrotron Radiation Facility (BSRF), the International Cosmic-Ray Observatory at Yangbajing in Tibet, the Daya Bay Reactor Neutrino Experiment, the China Spallation Neutron Source, the Hard X-ray Modulation Telescope (HXMT), and the Accelerator-driven Sub-critical System (ADS) as well as the Jiangmen Underground Neutrino Observatory (JUNO). KEK (Tsukuba, Japan). It is the home of a number of experiments such as the K2K experiment and its successor T2K experiment, a neutrino oscillation experiment and Belle II, an experiment measuring the CP violation of B mesons. SLAC National Accelerator Laboratory (Menlo Park, California, United States). Its 2-mile-long linear particle accelerator began operating in 1962 and was the basis for numerous electron and positron collision experiments until 2008. Since then the linear accelerator is being used for the Linac Coherent Light Source X-ray laser as well as advanced accelerator design research. SLAC staff continue to participate in developing and building many particle detectors around the world. == Theory == Theoretical particle physics attempts to develop the models, theoretical framework, and mathematical tools to understand current experiments and make predictions for future experiments (see also theoretical physics). There are several major interrelated efforts being made in theoretical particle physics today. One important branch attempts to better understand the Standard Model and its tests. Theorists make quantitative predictions of observables at collider and astronomical experiments, which along with experimental measurements is used to extract the parameters of the Standard Model with less uncertainty. This work probes the limits of the Standard Model and therefore expands scientific understanding of nature's building blocks. Those efforts are made challenging by the difficulty of calculating high precision quantities in quantum chromodynamics. Some theorists working in this area use the tools of perturbative quantum field theory and effective field theory, referring to themselves as phenomenologists. Others make use of lattice field theory and call themselves lattice theorists. Another major effort is in model building where model builders develop ideas for what physics may lie beyond the Standard Model (at higher energies or smaller distances). This work is often motivated by the hierarchy problem and is constrained by existing experimental data. It may involve work on supersymmetry, alternatives to the Higgs mechanism, extra spatial dimensions (such as the Randall–Sundrum models), Preon theory, combinations of these, or other ideas. Vanishing-dimensions theory is a particle physics theory suggesting that systems with higher energy have a smaller number of dimensions. A third major effort in theoretical particle physics is string theory. String theorists attempt to construct a unified description of quantum mechanics and general relativity by building a theory based on small strings, and branes rather than particles. If the theory is successful, it may be considered a "Theory of Everything", or "TOE". There are also other areas of work in theoretical particle physics ranging from particle cosmology to loop quantum gravity. 
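The parameter-extraction workflow sketched in the Theory section above (comparing theoretical predictions of observables with measurements in order to constrain model parameters) can be illustrated with a deliberately simplified chi-square scan. The observable, its assumed linear dependence on a single coupling g, and the pseudo-measurements below are invented purely for illustration and do not correspond to any real Standard Model analysis.

```python
import numpy as np

# Toy "theory": one observable predicted to depend linearly on a coupling g.
a, b = 10.0, 2.5                      # hypothetical fixed theory constants
def prediction(g):
    return a + b * g

# Hypothetical measurements of the observable with Gaussian uncertainties.
measurements = np.array([14.8, 15.3, 15.1])
sigmas = np.array([0.4, 0.5, 0.3])

# Scan the chi-square; the minimum gives the best-fit coupling, and the
# interval where chi-square rises by 1 gives a rough 1-sigma uncertainty.
g_grid = np.linspace(0.0, 4.0, 2001)
chi2 = np.array([np.sum(((measurements - prediction(g)) / sigmas) ** 2) for g in g_grid])

best = np.argmin(chi2)
in_band = g_grid[chi2 <= chi2[best] + 1.0]
print(f"best-fit g = {g_grid[best]:.3f}")
print(f"approximate 1-sigma interval: [{in_band.min():.3f}, {in_band.max():.3f}]")
```

Real analyses involve many observables, correlated uncertainties, and far more elaborate theoretical predictions, but the logic of confronting predictions with measurements to narrow down parameters is the same.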
== Practical applications == In principle, all physics (and practical applications developed therefrom) can be derived from the study of fundamental particles. In practice, even if "particle physics" is taken to mean only "high-energy atom smashers", many technologies have been developed during these pioneering investigations that later find wide uses in society. Particle accelerators are used to produce medical isotopes for research and treatment (for example, isotopes used in PET imaging), or used directly in external beam radiotherapy. The development of superconductors has been pushed forward by their use in particle physics. The World Wide Web and touchscreen technology were initially developed at CERN. Additional applications are found in medicine, national security, industry, computing, science, and workforce development, illustrating a long and growing list of beneficial practical applications with contributions from particle physics. == Future == Major efforts to look for physics beyond the Standard Model include the Future Circular Collider proposed for CERN and the Particle Physics Project Prioritization Panel (P5) in the US that will update the 2014 P5 study that recommended the Deep Underground Neutrino Experiment, among other experiments. == See also == == References == == External links ==
Wikipedia/Elementary_particle_physics
Quantum chaos is a branch of physics focused on how chaotic classical dynamical systems can be described in terms of quantum theory. The primary question that quantum chaos seeks to answer is: "What is the relationship between quantum mechanics and classical chaos?" The correspondence principle states that classical mechanics is the classical limit of quantum mechanics, specifically in the limit as the ratio of the Planck constant to the action of the system tends to zero. If this is true, then there must be quantum mechanisms underlying classical chaos (although this may not be a fruitful way of examining classical chaos). If quantum mechanics does not demonstrate an exponential sensitivity to initial conditions, how can exponential sensitivity to initial conditions arise in classical chaos, which must be the correspondence principle limit of quantum mechanics? In seeking to address the basic question of quantum chaos, several approaches have been employed: Development of methods for solving quantum problems where the perturbation cannot be considered small in perturbation theory and where quantum numbers are large. Correlating statistical descriptions of eigenvalues (energy levels) with the classical behavior of the same Hamiltonian (system). Study of probability distribution of individual eigenstates (see scars and quantum ergodicity). Semiclassical methods such as periodic-orbit theory connecting the classical trajectories of the dynamical system with quantum features. Direct application of the correspondence principle. == History == During the first half of the twentieth century, chaotic behavior in mechanics was recognized (as in the three-body problem in celestial mechanics), but not well understood. The foundations of modern quantum mechanics were laid in that period, essentially leaving aside the issue of the quantum-classical correspondence in systems whose classical limit exhibit chaos. == Approaches == Questions related to the correspondence principle arise in many different branches of physics, ranging from nuclear to atomic, molecular and solid-state physics, and even to acoustics, microwaves and optics. However, classical-quantum correspondence in chaos theory is not always possible. Thus, some versions of the classical butterfly effect do not have counterparts in quantum mechanics. Important observations often associated with classically chaotic quantum systems are spectral level repulsion, dynamical localization in time evolution (e.g. ionization rates of atoms), and enhanced stationary wave intensities in regions of space where classical dynamics exhibits only unstable trajectories (as in scattering). In the semiclassical approach of quantum chaos, phenomena are identified in spectroscopy by analyzing the statistical distribution of spectral lines and by connecting spectral periodicities with classical orbits. Other phenomena show up in the time evolution of a quantum system, or in its response to various types of external forces. In some contexts, such as acoustics or microwaves, wave patterns are directly observable and exhibit irregular amplitude distributions. Quantum chaos typically deals with systems whose properties need to be calculated using either numerical techniques or approximation schemes (see e.g. Dyson series). Simple and exact solutions are precluded by the fact that the system's constituents either influence each other in a complex way, or depend on temporally varying external forces. 
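As a minimal numerical illustration of the exponential sensitivity to initial conditions that defines classical chaos in the question posed above, the sketch below iterates the Chirikov standard map (an area-preserving map mentioned later in this article, together with the kicked rotator, as a prototype problem) for two nearby initial conditions and prints how fast they separate. The kicking strength and the initial conditions are arbitrary illustrative choices.

```python
import numpy as np

def standard_map(theta, p, K):
    """One step of the Chirikov standard map, with angle and momentum taken mod 2*pi."""
    p_new = (p + K * np.sin(theta)) % (2 * np.pi)
    theta_new = (theta + p_new) % (2 * np.pi)
    return theta_new, p_new

def separation(a, b):
    """Distance on the torus (each coordinate compared modulo 2*pi)."""
    d = np.abs(np.array(a) - np.array(b))
    d = np.minimum(d, 2 * np.pi - d)
    return float(np.hypot(*d))

K = 2.5                           # kicking strength, large enough for widespread chaos
x1 = (1.0, 0.5)                   # initial (theta, p)
x2 = (1.0 + 1e-8, 0.5)            # nearby initial condition

for step in range(1, 31):
    x1 = standard_map(*x1, K)
    x2 = standard_map(*x2, K)
    if step % 5 == 0:
        print(f"step {step:2d}: separation ~ {separation(x1, x2):.2e}")
```

The separation grows roughly exponentially with the number of steps until it saturates at the size of the phase-space cell, which is the classical behaviour whose quantum counterpart the field tries to pin down.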
== Quantum mechanics in non-perturbative regimes == For conservative systems, the goal of quantum mechanics in non-perturbative regimes is to find the eigenvalues and eigenvectors of a Hamiltonian of the form $H = H_{s} + \varepsilon H_{ns}$, where $H_{s}$ is separable in some coordinate system, $H_{ns}$ is non-separable in the coordinate system in which $H_{s}$ is separated, and $\varepsilon$ is a parameter which cannot be considered small. Physicists have historically approached problems of this nature by trying to find the coordinate system in which the non-separable Hamiltonian is smallest and then treating the non-separable Hamiltonian as a perturbation. Finding constants of motion so that this separation can be performed can be a difficult (sometimes impossible) analytical task. Solving the classical problem can give valuable insight into solving the quantum problem. If there are regular classical solutions of the same Hamiltonian, then there are (at least) approximate constants of motion, and by solving the classical problem, we gain clues how to find them. Other approaches have been developed in recent years. One is to express the Hamiltonian in different coordinate systems in different regions of space, minimizing the non-separable part of the Hamiltonian in each region. Wavefunctions are obtained in these regions, and eigenvalues are obtained by matching boundary conditions. Another approach is numerical matrix diagonalization. If the Hamiltonian matrix is computed in any complete basis, eigenvalues and eigenvectors are obtained by diagonalizing the matrix. However, all complete basis sets are infinite, and we need to truncate the basis and still obtain accurate results. These techniques boil down to choosing a truncated basis from which accurate wavefunctions can be constructed. The computational time required to diagonalize a matrix scales as $N^{3}$, where $N$ is the dimension of the matrix, so it is important to choose the smallest basis possible from which the relevant wavefunctions can be constructed. It is also convenient to choose a basis in which the matrix is sparse and/or the matrix elements are given by simple algebraic expressions because computing matrix elements can also be a computational burden. A given Hamiltonian shares the same constants of motion for both classical and quantum dynamics. Quantum systems can also have additional quantum numbers corresponding to discrete symmetries (such as parity conservation from reflection symmetry). However, if we merely find quantum solutions of a Hamiltonian which is not approachable by perturbation theory, we may learn a great deal about quantum solutions, but we have learned little about quantum chaos. Nevertheless, learning how to solve such quantum problems is an important part of answering the question of quantum chaos. 
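As a minimal sketch of the truncated-basis diagonalization strategy just described (and not a production calculation), the example below takes a one-dimensional harmonic oscillator as the separable part and a quartic term as the perturbation that cannot be treated as small, builds the Hamiltonian matrix in the oscillator basis, and checks that the lowest eigenvalues converge as the basis is enlarged. The coupling strength and the basis sizes are arbitrary illustrative choices.

```python
import numpy as np

def hamiltonian(n_basis, eps):
    """H = p^2/2 + x^2/2 + eps * x^4 in a truncated harmonic-oscillator basis
    (units with hbar = m = omega = 1)."""
    n = np.arange(n_basis)
    h0 = np.diag(n + 0.5)                     # separable part: oscillator energies
    # Position operator x = (a + a^dagger)/sqrt(2) in the same basis.
    x = np.zeros((n_basis, n_basis))
    off = np.sqrt(np.arange(1, n_basis) / 2.0)
    x[np.arange(n_basis - 1), np.arange(1, n_basis)] = off
    x[np.arange(1, n_basis), np.arange(n_basis - 1)] = off
    return h0 + eps * np.linalg.matrix_power(x, 4)

eps = 0.1
for n_basis in (10, 20, 40, 80):
    evals = np.linalg.eigvalsh(hamiltonian(n_basis, eps))
    print(n_basis, np.round(evals[:4], 6))    # lowest four eigenvalues
```

Watching the lowest eigenvalues stabilize as the basis grows is the practical convergence test mentioned above; a sparse, algebraically simple basis like this one keeps both the matrix construction and the $N^{3}$ diagonalization cheap.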
== Correlating statistical descriptions of quantum mechanics with classical behaviour == Statistical measures of quantum chaos were born out of a desire to quantify spectral features of complex systems. Random matrix theory was developed in an attempt to characterize spectra of complex nuclei. The remarkable result is that the statistical properties of many systems with unknown Hamiltonians can be predicted using random matrices of the proper symmetry class. Furthermore, random matrix theory also correctly predicts statistical properties of the eigenvalues of many chaotic systems with known Hamiltonians. This makes it useful as a tool for characterizing spectra which require large numerical efforts to compute. A number of statistical measures are available for quantifying spectral features in a simple way. It is of great interest whether or not there are universal statistical behaviors of classically chaotic systems. The statistical tests mentioned here are universal, at least to systems with few degrees of freedom (Berry and Tabor have put forward strong arguments for a Poisson distribution in the case of regular motion and Heusler et al. present a semiclassical explanation of the so-called Bohigas–Giannoni–Schmit conjecture which asserts universality of spectral fluctuations in chaotic dynamics). The nearest-neighbor distribution (NND) of energy levels is relatively simple to interpret and it has been widely used to describe quantum chaos. Qualitative observations of level repulsions can be quantified and related to the classical dynamics using the NND, which is believed to be an important signature of classical dynamics in quantum systems. It is thought that regular classical dynamics is manifested by a Poisson distribution of energy levels: $P(s) = e^{-s}$. In addition, systems which display chaotic classical motion are expected to be characterized by the statistics of random matrix eigenvalue ensembles. For systems invariant under time reversal, the energy-level statistics of a number of chaotic systems have been shown to be in good agreement with the predictions of the Gaussian orthogonal ensemble (GOE) of random matrices, and it has been suggested that this phenomenon is generic for all chaotic systems with this symmetry. If the normalized spacing between two energy levels is $s$, the normalized distribution of spacings is well approximated by $P(s) = \frac{\pi}{2}\, s\, e^{-\pi s^{2}/4}$. Many Hamiltonian systems which are classically integrable (non-chaotic) have been found to have quantum solutions that yield nearest neighbor distributions which follow the Poisson distributions. Similarly, many systems which exhibit classical chaos have been found with quantum solutions yielding a Wigner-Dyson distribution, thus supporting the ideas above. One notable exception is diamagnetic lithium which, though exhibiting classical chaos, demonstrates Wigner (chaotic) statistics for the even-parity energy levels and nearly Poisson (regular) statistics for the odd-parity energy level distribution. 
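A minimal numerical check of the level-spacing statistics described above: sample one large matrix from the Gaussian orthogonal ensemble, crudely unfold the central part of its spectrum by normalizing the spacings to unit mean, and compare the spacing histogram with the Poisson form and the Wigner surmise. The matrix size, the portion of the spectrum kept, and the binning are arbitrary choices; a careful analysis would unfold the spectrum with the smoothed level density.

```python
import numpy as np

rng = np.random.default_rng(0)

# One GOE sample: symmetrize a matrix of independent Gaussians.
N = 2000
A = rng.normal(size=(N, N))
H = (A + A.T) / 2.0

levels = np.sort(np.linalg.eigvalsh(H))
# Keep the central part of the spectrum, where the mean level density is
# roughly constant, and rescale the spacings to unit mean (crude unfolding).
central = levels[N // 4 : 3 * N // 4]
s = np.diff(central)
s /= s.mean()

bins = np.linspace(0.0, 3.0, 16)
hist, _ = np.histogram(s, bins=bins, density=True)
mid = 0.5 * (bins[:-1] + bins[1:])
poisson = np.exp(-mid)                                        # expected for regular dynamics
wigner = (np.pi / 2.0) * mid * np.exp(-np.pi * mid**2 / 4.0)  # Wigner surmise (GOE)

for m, h, p, w in zip(mid, hist, poisson, wigner):
    print(f"s = {m:4.2f}   empirical = {h:5.3f}   Poisson = {p:5.3f}   Wigner = {w:5.3f}")
```

The empirical column should track the Wigner surmise, showing the level repulsion at small $s$ that distinguishes it from the Poisson case.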
== Semiclassical methods == === Periodic orbit theory === Periodic-orbit theory gives a recipe for computing spectra from the periodic orbits of a system. In contrast to the Einstein–Brillouin–Keller method of action quantization, which applies only to integrable or near-integrable systems and computes individual eigenvalues from each trajectory, periodic-orbit theory is applicable to both integrable and non-integrable systems and asserts that each periodic orbit produces a sinusoidal fluctuation in the density of states. The principal result of this development is an expression for the density of states which is the trace of the semiclassical Green's function and is given by the Gutzwiller trace formula: $g_{c}(E) = \sum_{k} T_{k} \sum_{n=1}^{\infty} \frac{1}{2\sinh(\chi_{nk}/2)}\, e^{i(nS_{k} - \alpha_{nk}\pi/2)}$. Recently there was a generalization of this formula for arbitrary matrix Hamiltonians that involves a Berry phase-like term stemming from spin or other internal degrees of freedom. The index $k$ distinguishes the primitive periodic orbits: the shortest period orbits of a given set of initial conditions. $T_{k}$ is the period of the primitive periodic orbit and $S_{k}$ is its classical action. Each primitive orbit retraces itself, leading to a new orbit with action $nS_{k}$ and a period which is an integral multiple $n$ of the primitive period. Hence, every repetition of a periodic orbit is another periodic orbit. These repetitions are separately classified by the intermediate sum over the indices $n$. $\alpha_{nk}$ is the orbit's Maslov index. The amplitude factor, $1/\sinh(\chi_{nk}/2)$, represents the square root of the density of neighboring orbits. Neighboring trajectories of an unstable periodic orbit diverge exponentially in time from the periodic orbit. The quantity $\chi_{nk}$ characterizes the instability of the orbit. A stable orbit moves on a torus in phase space, and neighboring trajectories wind around it. For stable orbits, $\sinh(\chi_{nk}/2)$ becomes $\sin(\chi_{nk}/2)$, where $\chi_{nk}$ is the winding number of the periodic orbit. $\chi_{nk} = 2\pi m$, where $m$ is the number of times that neighboring orbits intersect the periodic orbit in one period. This presents a difficulty because $\sin(\chi_{nk}/2) = 0$ at a classical bifurcation, which causes that orbit's contribution to the energy density to diverge. This also occurs in the context of the photo-absorption spectrum. Using the trace formula to compute a spectrum requires summing over all of the periodic orbits of a system. This presents several difficulties for chaotic systems: 1) The number of periodic orbits proliferates exponentially as a function of action. 2) There are an infinite number of periodic orbits, and the convergence properties of periodic-orbit theory are unknown. This difficulty is also present when applying periodic-orbit theory to regular systems. 3) Long-period orbits are difficult to compute because most trajectories are unstable and sensitive to roundoff errors and details of the numerical integration. Gutzwiller applied the trace formula to approach the anisotropic Kepler problem (a single particle in a $1/r$ potential with an anisotropic mass tensor) semiclassically. He found agreement with quantum computations for low-lying (up to $n = 6$) states for small anisotropies by using only a small set of easily computed periodic orbits, but the agreement was poor for large anisotropies. An inverted approach can also be used to test periodic-orbit theory. The trace formula asserts that each periodic orbit contributes a sinusoidal term to the spectrum. 
Rather than dealing with the computational difficulties surrounding long-period orbits to try to find the density of states (energy levels), one can use standard quantum mechanical perturbation theory to compute eigenvalues (energy levels) and use the Fourier transform to look for the periodic modulations of the spectrum which are the signature of periodic orbits. Interpreting the spectrum then amounts to finding the orbits which correspond to peaks in the Fourier transform. ==== Rough sketch on how to arrive at the Gutzwiller trace formula ==== Start with the semiclassical approximation of the time-dependent Green's function (the Van Vleck propagator). Realize that for caustics the description diverges, and use the insight of Maslov: approximately Fourier transforming to momentum space (a stationary phase approximation with h as the small parameter) to avoid such points and afterwards transforming back to position space can cure such a divergence, but introduces a phase factor. Transform the Green's function to energy space to get the energy-dependent Green's function (again an approximate Fourier transform using the stationary phase approximation). New divergences might pop up, and they need to be cured in the same way. Finally, use $d(E) = -\frac{1}{\pi}\,\Im\left(\operatorname{Tr} G(x, x', E)\right)$ (tracing over positions) and evaluate the trace once more in the stationary phase approximation to get an approximation for the density of states $d(E)$. Note: taking the trace tells you that only closed orbits contribute, and the stationary phase approximation gives you restrictive conditions each time you apply it. In the final, trace step it restricts you to orbits whose initial and final momenta are the same, i.e. periodic orbits. Often it is convenient to choose a coordinate system parallel to the direction of movement, as is done in many books. === Closed orbit theory === Closed-orbit theory was developed by J. B. Delos, M. L. Du, J. Gao, and J. Shaw. It is similar to periodic-orbit theory, except that closed-orbit theory is applicable only to atomic and molecular spectra and yields the oscillator strength density (the observable photo-absorption spectrum) from a specified initial state, whereas periodic-orbit theory yields the density of states. Only orbits that begin and end at the nucleus are important in closed-orbit theory. Physically, these are associated with the outgoing waves that are generated when a tightly bound electron is excited to a high-lying state. For Rydberg atoms and molecules, every orbit which is closed at the nucleus is also a periodic orbit whose period is equal to either the closure time or twice the closure time. According to closed-orbit theory, the average oscillator strength density at constant $\epsilon$ is given by a smooth background plus an oscillatory sum of the form $f(w) = \sum_{k} \sum_{n=1}^{\infty} D_{nk}^{i} \sin(2\pi n w \tilde{S}_{k} - \phi_{nk})$. Here $\phi_{nk}$ is a phase that depends on the Maslov index and other details of the orbits, and $D_{nk}^{i}$ is the recurrence amplitude of a closed orbit for a given initial state (labeled $i$). It contains information about the stability of the orbit, its initial and final directions, and the matrix element of the dipole operator between the initial state and a zero-energy Coulomb wave. 
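The "inverted" use of the trace formula described at the top of this section (Fourier transforming a computed spectrum and reading off peaks at the periods or actions of classical orbits) can be sketched for the simplest possible case. For a particle in a one-dimensional box of length L, the eigen-wavenumbers are known in closed form, and the Fourier transform of the spectral density with respect to the wavenumber shows peaks at multiples of 2L, the length of the classical orbit that bounces back and forth between the walls. The units (hbar = 2m = 1), the box length, and the grid are arbitrary illustrative choices.

```python
import numpy as np

L = 1.0                                    # box length (arbitrary)
n = np.arange(1, 2001)
k_n = n * np.pi / L                        # eigen-wavenumbers; E_n = k_n**2 in these units

# Fourier transform of the spectral density with respect to k, evaluated on a
# grid of candidate orbit lengths; peaks are expected at ell = 2L, 4L, ...
ell = np.linspace(0.1, 5.0, 2000)
amplitude = np.abs(np.sum(np.exp(1j * np.outer(ell, k_n)), axis=1)) / len(n)

peaks = [ell[i] for i in range(1, len(ell) - 1)
         if amplitude[i] > 0.5
         and amplitude[i] > amplitude[i - 1]
         and amplitude[i] > amplitude[i + 1]]
print("peak orbit lengths:", np.round(peaks, 2))   # expect ~2.0 and ~4.0 for L = 1
```

For a chaotic system the eigenvalues would instead come from a numerical diagonalization, but the interpretation step is the same: peaks in the Fourier transform of the spectrum are matched against the actions or periods of classical orbits.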
For scaling systems such as Rydberg atoms in strong fields, the Fourier transform of an oscillator strength spectrum computed at fixed $\epsilon$ as a function of $w$ is called a recurrence spectrum, because it gives peaks which correspond to the scaled action of closed orbits and whose heights correspond to $D_{nk}^{i}$. Closed-orbit theory has found broad agreement with a number of chaotic systems, including diamagnetic hydrogen, hydrogen in parallel electric and magnetic fields, diamagnetic lithium, lithium in an electric field, the $H^{-}$ ion in crossed and parallel electric and magnetic fields, barium in an electric field, and helium in an electric field. === One-dimensional systems and potential === For the case of a one-dimensional system with the boundary condition $y(0) = 0$, the density of states obtained from the Gutzwiller formula is related to the inverse of the potential of the classical system by $\frac{d^{1/2}}{dx^{1/2}} V^{-1}(x) = 2\sqrt{\pi}\, \frac{dN(x)}{dx}$. Here $\frac{dN(x)}{dx}$ is the density of states and $V(x)$ is the classical potential of the particle; the half derivative of the inverse of the potential is related to the density of states as in the Wu–Sprung potential. == Recent directions == One open question remains understanding quantum chaos in systems that have finite-dimensional local Hilbert spaces for which standard semiclassical limits do not apply. Recent works allowed for studying analytically such quantum many-body systems. The traditional topics in quantum chaos concern spectral statistics (universal and non-universal features) and the study of eigenfunctions of various chaotic Hamiltonians. For example, before the existence of scars was reported, eigenstates of a classically chaotic system were conjectured to fill the available phase space evenly, up to random fluctuations and energy conservation (quantum ergodicity). However, a quantum eigenstate of a classically chaotic system can be scarred: the probability density of the eigenstate is enhanced in the neighborhood of a periodic orbit, above the classical, statistically expected density along the orbit (scars). In particular, scars are both a striking visual example of classical-quantum correspondence away from the usual classical limit, and a useful example of a quantum suppression of chaos. For example, this is evident in perturbation-induced quantum scarring: in quantum dots perturbed by local potential bumps (impurities), some of the eigenstates are strongly scarred along periodic orbits of the unperturbed classical counterpart. Further studies concern the parametric ($R$) dependence of the Hamiltonian, as reflected in e.g. the statistics of avoided crossings, and the associated mixing as reflected in the (parametric) local density of states (LDOS). There is vast literature on wavepacket dynamics, including the study of fluctuations, recurrences, quantum irreversibility issues, etc. A special place is reserved for the study of the dynamics of quantized maps: the standard map and the kicked rotator are considered to be prototype problems. Work is also focused on the study of driven chaotic systems, where the Hamiltonian $H(x, p; R(t))$ is time dependent, in particular in the adiabatic and in the linear response regimes. 
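The kicked rotator just mentioned as a prototype problem can be simulated directly, because its one-period (Floquet) evolution factorizes into a kick that is diagonal in the angle representation and free rotation that is diagonal in the momentum representation. The sketch below is a minimal split-step implementation using fast Fourier transforms; the kicking strength, the effective Planck constant, and the number of momentum states are arbitrary illustrative choices.

```python
import numpy as np

# Quantum kicked rotator: H = p^2/2 + K cos(theta) * sum_n delta(t - n).
# One Floquet step: phase exp(-i K cos(theta)/hbar) in angle space,
# then phase exp(-i p^2/(2 hbar)) in momentum space.
N = 1024                     # number of momentum states (illustrative)
hbar = 1.0                   # effective Planck constant (illustrative)
K = 5.0                      # kicking strength (illustrative)

theta = 2 * np.pi * np.arange(N) / N
p = hbar * (np.arange(N) - N // 2)

kick = np.exp(-1j * K * np.cos(theta) / hbar)
free = np.exp(-1j * p**2 / (2 * hbar))

psi_p = np.zeros(N, dtype=complex)
psi_p[N // 2] = 1.0          # start in the zero-momentum state

for step in range(1, 201):
    psi_theta = np.fft.ifft(np.fft.ifftshift(psi_p))   # momentum -> angle
    psi_theta *= kick
    psi_p = np.fft.fftshift(np.fft.fft(psi_theta))     # angle -> momentum
    psi_p *= free
    if step % 50 == 0:
        prob = np.abs(psi_p) ** 2
        energy = np.sum(prob * p**2) / (2 * prob.sum())
        print(f"kick {step:3d}: <p^2>/2 = {energy:8.2f}")
```

With these parameter values the kinetic energy is expected to grow at first and then level off after a few tens of kicks, a finite-size glimpse of the dynamical localization in time evolution mentioned earlier in the article.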
There is also significant effort focused on formulating ideas of quantum chaos for strongly-interacting many-body quantum systems far from semi-classical regimes as well as a large effort in quantum chaotic scattering. == Berry–Tabor conjecture == In 1977, Berry and Tabor made a still open "generic" mathematical conjecture which, stated roughly, is: In the "generic" case for the quantum dynamics of a geodesic flow on a compact Riemann surface, the quantum energy eigenvalues behave like a sequence of independent random variables provided that the underlying classical dynamics is completely integrable. == See also == Scar (physics) Statistical mechanics == References == == Further resources == Martin C. Gutzwiller (1971). "Periodic Orbits and Classical Quantization Conditions". Journal of Mathematical Physics. 12 (3): 343–358. Bibcode:1971JMP....12..343G. doi:10.1063/1.1665596. Gutzwiller, M. C. (1990). Chaos in classical and quantum mechanics. Interdisciplinary applied mathematics. New York: Springer-Verlag. ISBN 978-0-387-97173-5. Stöckmann, Hans-Jürgen (1999). Quantum chaos: An introduction. Cambridge: Cambridge University Press. ISBN 978-0-521-59284-0. Eugene Paul Wigner; Dirac, P. A. M. (1951). "On the statistical distribution of the widths and spacings of nuclear resonance levels". Mathematical Proceedings of the Cambridge Philosophical Society. 47 (4): 790. Bibcode:1951PCPS...47..790W. doi:10.1017/S0305004100027237. S2CID 120852535. Haake, Fritz (2001). Quantum signatures of chaos. Springer series in synergetics (2nd rev. and enlarged ed.). Berlin Heidelberg Paris [etc.]: Springer. ISBN 978-3-540-67723-9. Berggren, Karl-Fredrik; Åberg, Sven, eds. (2001). Quantum chaos Y2K: proceedings of Nobel Symposium 116, Bäckaskog Castle, Sweden, June 13 - 17, 2000. Stockholm, Sweden: Physica Scripta, the Royal Swedish Academy of Sciences. ISBN 978-981-02-4711-9. Reichl, Linda E. (2004). The transition to chaos: conservative classical systems and quantum manifestations. Institute for nonlinear science (2. [new] ed.). New York Heidelberg: Springer. ISBN 978-0-387-98788-0. == External links == Quantum Chaos by Martin Gutzwiller (1992 and 2008, Scientific American) Quantum Chaos Martin Gutzwiller Scholarpedia 2(12):3146. doi:10.4249/scholarpedia.3146 Category:Quantum Chaos Scholarpedia What is... Quantum Chaos by Ze'ev Rudnick (January 2008, Notices of the American Mathematical Society) Brian Hayes, "The Spectrum of Riemannium"; American Scientist Volume 91, Number 4, July–August, 2003 pp. 296–300. Discusses relation to the Riemann zeta function. Eigenfunctions in chaotic quantum systems by Arnd Bäcker. ChaosBook.org
Wikipedia/Chaos_(physics)
The BF model or BF theory is a topological field theory which, when quantized, becomes a topological quantum field theory. BF stands for background field. B and F, as can be seen below, are also the variables appearing in the Lagrangian of the theory, which is helpful as a mnemonic device. We have a 4-dimensional differentiable manifold M, a gauge group G, which has as "dynamical" fields a 2-form B taking values in the adjoint representation of G, and a connection form A for G. The action is given by $S = \int_{M} K[\mathbf{B} \wedge \mathbf{F}]$ where $K$ is an invariant nondegenerate bilinear form over $\mathfrak{g}$ (if G is semisimple, the Killing form will do) and $\mathbf{F}$ is the curvature form $\mathbf{F} \equiv d\mathbf{A} + \mathbf{A} \wedge \mathbf{A}$. This action is diffeomorphically invariant and gauge invariant. Its Euler–Lagrange equations are $\mathbf{F} = 0$ (no curvature) and $d_{\mathbf{A}}\mathbf{B} = 0$ (the covariant exterior derivative of B is zero). In fact, it is always possible to gauge away any local degrees of freedom, which is why it is called a topological field theory. However, if M is topologically nontrivial, A and B can have nontrivial solutions globally. In fact, BF theory can be used to formulate discrete gauge theory. One can add additional twist terms allowed by group cohomology theory, such as Dijkgraaf–Witten topological gauge theory. There are many kinds of modified BF theories as topological field theories, which give rise to link invariants in 3 dimensions, 4 dimensions, and other general dimensions. == See also == Background field method Barrett–Crane model Dual graviton Plebanski action Spin foam Tetradic Palatini action == References == == External links == http://math.ucr.edu/home/baez/qg-fall2000/qg2.2.html
Wikipedia/BF_theory
A gauge theory is a type of theory in physics. The word gauge means a measurement, a thickness, an in-between distance (as in railroad tracks), or a resulting number of units per certain parameter (a number of loops in an inch of fabric or a number of lead balls in a pound of ammunition). Modern theories describe physical forces in terms of fields, e.g., the electromagnetic field, the gravitational field, and fields that describe forces between the elementary particles. A general feature of these field theories is that the fundamental fields cannot be directly measured; however, some associated quantities can be measured, such as charges, energies, and velocities. For example, say you cannot measure the diameter of a lead ball, but you can determine how many lead balls, which are equal in every way, are required to make a pound. Using the number of balls, the density of lead, and the formula for calculating the volume of a sphere from its diameter, one could indirectly determine the diameter of a single lead ball. In field theories, different configurations of the unobservable fields can result in identical observable quantities. A transformation from one such field configuration to another is called a gauge transformation; the lack of change in the measurable quantities, despite the field being transformed, is a property called gauge invariance. For example, if you could measure the color of lead balls and discover that when you change the color, you still fit the same number of balls in a pound, the property of "color" would show gauge invariance. Since any kind of invariance under a field transformation is considered a symmetry, gauge invariance is sometimes called gauge symmetry. Generally, any theory that has the property of gauge invariance is considered a gauge theory. For example, in electromagnetism the electric field E and the magnetic field B are observable, while the potentials V ("voltage") and A (the vector potential) are not. Under a gauge transformation in which a constant is added to V, no observable change occurs in E or B. With the advent of quantum mechanics in the 1920s, and with successive advances in quantum field theory, the importance of gauge transformations has steadily grown. Gauge theories constrain the laws of physics, because all the changes induced by a gauge transformation have to cancel each other out when written in terms of observable quantities. Over the course of the 20th century, physicists gradually realized that all forces (fundamental interactions) arise from the constraints imposed by local gauge symmetries, in which case the transformations vary from point to point in space and time. Perturbative quantum field theory (usually employed for scattering theory) describes forces in terms of force-mediating particles called gauge bosons. The nature of these particles is determined by the nature of the gauge transformations. The culmination of these efforts is the Standard Model, a quantum field theory that accurately predicts all of the fundamental interactions except gravity. == History and importance == The earliest field theory having a gauge symmetry was James Clerk Maxwell's formulation, in 1864–65, of electrodynamics in "A Dynamical Theory of the Electromagnetic Field". The importance of this symmetry remained unnoticed in the earliest formulations. Similarly unnoticed, David Hilbert had derived Einstein's equations of general relativity by postulating a symmetry under any change of coordinates, just as Einstein was completing his work. 
Later Hermann Weyl, inspired by success in Einstein's general relativity, conjectured (incorrectly, as it turned out) in 1919 that invariance under the change of scale or "gauge" (a term inspired by the various track gauges of railroads) might also be a local symmetry of electromagnetism.: 5, 12  Although Weyl's choice of the gauge was incorrect, the name "gauge" stuck to the approach. After the development of quantum mechanics, Weyl, Vladimir Fock and Fritz London modified their gauge choice by replacing the scale factor with a change of wave phase, and applying it successfully to electromagnetism. Gauge symmetry was generalized mathematically in 1954 by Chen Ning Yang and Robert Mills in an attempt to describe the strong nuclear forces. This idea, dubbed Yang–Mills theory, later found application in the quantum field theory of the weak force, and its unification with electromagnetism in the electroweak theory. The importance of gauge theories for physics stems from their tremendous success in providing a unified framework to describe the quantum-mechanical behavior of electromagnetism, the weak force and the strong force. This gauge theory, known as the Standard Model, accurately describes experimental predictions regarding three of the four fundamental forces of nature. == In classical physics == === Electromagnetism === Historically, the first example of gauge symmetry to be discovered was classical electromagnetism. A static electric field can be described in terms of an electric potential (voltage, V) that is defined at every point in space, and in practical work it is conventional to take the Earth as a physical reference that defines the zero level of the potential, or ground. But only differences in potential are physically measurable, which is the reason that a voltmeter must have two probes, and can only report the voltage difference between them. Thus one could choose to define all voltage differences relative to some other standard, rather than the Earth, resulting in the addition of a constant offset. If the potential V is a solution to Maxwell's equations then, after this gauge transformation, the new potential V → V + C is also a solution to Maxwell's equations and no experiment can distinguish between these two solutions. In other words, the laws of physics governing electricity and magnetism (that is, Maxwell equations) are invariant under gauge transformation. Maxwell's equations have a gauge symmetry. Generalizing from static electricity to electromagnetism, we have a second potential, the magnetic vector potential A, which can also undergo gauge transformations. These transformations may be local. That is, rather than adding a constant onto V, one can add a function that takes on different values at different points in space and time. If A is also changed in certain corresponding ways, then the same E (electric) and B (magnetic) fields result. The detailed mathematical relationship between the fields E and B and the potentials V and A is given in the article Gauge fixing, along with the precise statement of the nature of the gauge transformation. The relevant point here is that the fields remain the same under the gauge transformation, and therefore Maxwell's equations are still satisfied. Gauge symmetry is closely related to charge conservation. Suppose that there existed some process by which one could briefly violate conservation of charge by creating a charge q at a certain point in space, 1, moving it to some other point 2, and then destroying it. 
We might imagine that this process was consistent with conservation of energy. We could posit a rule stating that creating the charge required an input of energy E1 = qV1 and destroying it released E2 = qV2, which would seem natural since qV measures the extra energy stored in the electric field because of the existence of a charge at a certain point. Outside of the interval during which the particle exists, conservation of energy would be satisfied, because the net energy released by creation and destruction of the particle, qV2 − qV1, would be equal to the work done in moving the particle from 1 to 2, qV2 − qV1. But although this scenario salvages conservation of energy, it violates gauge symmetry. Gauge symmetry requires that the laws of physics be invariant under the transformation V → V + C, which implies that no experiment should be able to measure the absolute potential, without reference to some external standard such as an electrical ground. But the proposed rules E1 = qV1 and E2 = qV2 for the energies of creation and destruction would allow an experimenter to determine the absolute potential, simply by comparing the energy input required to create the charge q at a particular point in space in the case where the potential is V and V + C respectively. The conclusion is that if gauge symmetry holds, and energy is conserved, then charge must be conserved. === General relativity === As discussed above, the gauge transformations for classical (i.e., non-quantum mechanical) general relativity are arbitrary coordinate transformations. Technically, the transformations must be invertible, and both the transformation and its inverse must be smooth, in the sense of being differentiable an arbitrary number of times. ==== An example of a symmetry in a physical theory: translation invariance ==== Some global symmetries under changes of coordinate predate both general relativity and the concept of a gauge. For example, Galileo and Newton introduced the notion of translation invariance, an advancement from the Aristotelian concept that different places in space, such as the earth versus the heavens, obeyed different physical rules. Suppose, for example, that one observer examines the properties of a hydrogen atom on Earth, the other—on the Moon (or any other place in the universe), the observer will find that their hydrogen atoms exhibit completely identical properties. Again, if one observer had examined a hydrogen atom today and the other—100 years ago (or any other time in the past or in the future), the two experiments would again produce completely identical results. The invariance of the properties of a hydrogen atom with respect to the time and place where these properties were investigated is called translation invariance. Recalling our two observers from different ages: the time in their experiments is shifted by 100 years. If the time when the older observer did the experiment was t, the time of the modern experiment is t + 100 years. Both observers discover the same laws of physics. Because light from hydrogen atoms in distant galaxies may reach the earth after having traveled across space for billions of years, in effect one can do such observations covering periods of time almost all the way back to the Big Bang, and they show that the laws of physics have always been the same. In other words, if in the theory we change the time t to t + 100 years (or indeed any other time shift) the theoretical predictions do not change. 
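Returning to the electromagnetic example above: the claim that the observable fields are unchanged when the potentials are gauge-transformed can be checked symbolically. The sketch below uses the standard relations E = −∇V − ∂A/∂t and B = ∇×A (the detailed relations referred to above as given in the Gauge fixing article) together with an arbitrary gauge function χ(x, y, z, t); the specific symbol names and the use of SymPy are illustrative choices, not part of the original discussion.

```python
import sympy as sp

x, y, z, t = sp.symbols('x y z t')
V = sp.Function('V')(x, y, z, t)
Ax, Ay, Az = (sp.Function(name)(x, y, z, t) for name in ('Ax', 'Ay', 'Az'))
chi = sp.Function('chi')(x, y, z, t)        # arbitrary gauge function

def fields(V, A):
    """E = -grad(V) - dA/dt and B = curl(A), written out by components."""
    E = [-sp.diff(V, q) - sp.diff(a, t) for q, a in zip((x, y, z), A)]
    B = [sp.diff(A[2], y) - sp.diff(A[1], z),
         sp.diff(A[0], z) - sp.diff(A[2], x),
         sp.diff(A[1], x) - sp.diff(A[0], y)]
    return E, B

# Gauge transformation: V -> V - d(chi)/dt and A -> A + grad(chi).
V2 = V - sp.diff(chi, t)
A2 = [Ax + sp.diff(chi, x), Ay + sp.diff(chi, y), Az + sp.diff(chi, z)]

E1, B1 = fields(V, (Ax, Ay, Az))
E2, B2 = fields(V2, A2)

print([sp.simplify(e1 - e2) for e1, e2 in zip(E1, E2)])   # -> [0, 0, 0]
print([sp.simplify(b1 - b2) for b1, b2 in zip(B1, B2)])   # -> [0, 0, 0]
```

Adding a constant C to V is the special case χ = −C·t of this transformation, which is why no measurement of E or B can reveal the absolute value of the potential.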
==== Another example of a symmetry: invariance of Einstein's field equation under arbitrary coordinate transformations ==== In Einstein's general relativity, coordinates like x, y, z, and t are not only "relative" in the global sense of translations like t → t + C, rotations, etc., but become completely arbitrary, so that, for example, one can define an entirely new time-like coordinate according to some arbitrary rule such as t → t + t3/t02, where t0 has dimensions of time, and yet the Einstein equations will have the same form. Invariance of the form of an equation under an arbitrary coordinate transformation is customarily referred to as general covariance, and equations with this property are referred to as written in the covariant form. General covariance is a special case of gauge invariance. Maxwell's equations can also be expressed in a generally covariant form, which is as invariant under general coordinate transformation as the Einstein field equation. == In quantum mechanics == === Quantum electrodynamics === Until the advent of quantum mechanics, the only well known example of gauge symmetry was in electromagnetism, and the general significance of the concept was not fully understood. For example, it was not clear whether it was the fields E and B or the potentials V and A that were the fundamental quantities; if the former, then the gauge transformations could be considered as nothing more than a mathematical trick. === Aharonov–Bohm experiment === In quantum mechanics, a particle such as an electron is also described as a wave. For example, if the double-slit experiment is performed with electrons, then a wave-like interference pattern is observed. The electron has the highest probability of being detected at locations where the parts of the wave passing through the two slits are in phase with one another, resulting in constructive interference. The frequency, f, of the electron wave is related to the kinetic energy of an individual electron particle via the quantum-mechanical relation E = hf. If there are no electric or magnetic fields present in this experiment, then the electron's energy is constant, and, for example, there will be a high probability of detecting the electron along the central axis of the experiment, where by symmetry the two parts of the wave are in phase. But now suppose that the electrons in the experiment are subject to electric or magnetic fields. For example, if an electric field were imposed on one side of the axis but not on the other, the results of the experiment would be affected. The part of the electron wave passing through that side oscillates at a different rate, since its energy has had −eV added to it, where −e is the charge of the electron and V the electrical potential. The results of the experiment will be different, because phase relationships between the two parts of the electron wave have changed, and therefore the locations of constructive and destructive interference will be shifted to one side or the other. It is the electric potential that occurs here, not the electric field, and this is a manifestation of the fact that it is the potentials and not the fields that are of fundamental significance in quantum mechanics. ==== Explanation with potentials ==== It is even possible to have cases in which an experiment's results differ when the potentials are changed, even if no charged particle is ever exposed to a different field. One such example is the Aharonov–Bohm effect, shown in the figure. 
In this example, turning on the solenoid only causes a magnetic field B to exist within the solenoid. But the solenoid has been positioned so that the electron cannot possibly pass through its interior. If one believed that the fields were the fundamental quantities, then one would expect that the results of the experiment would be unchanged. In reality, the results are different, because turning on the solenoid changed the vector potential A in the region that the electrons do pass through. Now that it has been established that it is the potentials V and A that are fundamental, and not the fields E and B, we can see that the gauge transformations, which change V and A, have real physical significance, rather than being merely mathematical artifacts. ==== Gauge invariance: independence of results of gauge for the potentials ==== Note that in these experiments, the only quantity that affects the result is the difference in phase between the two parts of the electron wave. Suppose we imagine the two parts of the electron wave as tiny clocks, each with a single hand that sweeps around in a circle, keeping track of its own phase. Although this cartoon ignores some technical details, it retains the physical phenomena that are important here. If both clocks are sped up by the same amount, the phase relationship between them is unchanged, and the results of experiments are the same. Not only that, but it is not even necessary to change the speed of each clock by a fixed amount. We could change the angle of the hand on each clock by a varying amount θ, where θ could depend on both the position in space and on time. This would have no effect on the result of the experiment, since the final observation of the location of the electron occurs at a single place and time, so that the phase shift in each electron's "clock" would be the same, and the two effects would cancel out. This is another example of a gauge transformation: it is local, and it does not change the results of experiments. === Summary === In summary, gauge symmetry attains its full importance in the context of quantum mechanics. In the application of quantum mechanics to electromagnetism, i.e., quantum electrodynamics, gauge symmetry applies to both electromagnetic waves and electron waves. These two gauge symmetries are in fact intimately related. If a gauge transformation θ is applied to the electron waves, for example, then one must also apply a corresponding transformation to the potentials that describe the electromagnetic waves. Gauge symmetry is required in order to make quantum electrodynamics a renormalizable theory, i.e., one in which the calculated predictions of all physically measurable quantities are finite. === Types of gauge symmetries === The description of the electrons in the subsection above as little clocks is in effect a statement of the mathematical rules according to which the phases of electrons are to be added and subtracted: they are to be treated as ordinary numbers, except that in the case where the result of the calculation falls outside the range of 0≤θ<360°, we force it to "wrap around" into the allowed range, which covers a circle. Another way of putting this is that a phase angle of, say, 5° is considered to be completely equivalent to an angle of 365°. Experiments have verified this testable statement about the interference patterns formed by electron waves. 
Except for the "wrap-around" property, the algebraic properties of this mathematical structure are exactly the same as those of the ordinary real numbers. In mathematical terminology, electron phases form an Abelian group under addition, called the circle group or U(1). "Abelian" means that addition is commutative, so that θ + φ = φ + θ. "Group" means that addition is associative, has an identity element, namely "0", and for every phase there exists an inverse such that the sum of a phase and its inverse is 0. Other examples of abelian groups are the integers under addition, 0, and negation, and the nonzero fractions under product, 1, and reciprocal. As a way of visualizing the choice of a gauge, consider whether it is possible to tell if a cylinder has been twisted. If the cylinder has no bumps, marks, or scratches on it, we cannot tell. We could, however, draw an arbitrary curve along the cylinder, defined by some function θ(x), where x measures distance along the axis of the cylinder. Once this arbitrary choice (the choice of gauge) has been made, it becomes possible to detect it if someone later twists the cylinder. In 1954, Chen Ning Yang and Robert Mills proposed to generalize these ideas to noncommutative groups. A noncommutative gauge group can describe a field that, unlike the electromagnetic field, interacts with itself. For example, general relativity states that gravitational fields have energy, and special relativity concludes that energy is equivalent to mass. Hence a gravitational field induces a further gravitational field. The nuclear forces also have this self-interacting property. === Gauge bosons === Surprisingly, gauge symmetry can give a deeper explanation for the existence of interactions, such as the electric and nuclear interactions. This arises from a type of gauge symmetry relating to the fact that all particles of a given type are experimentally indistinguishable from one another. Imagine that Alice and Betty are identical twins, labeled at birth by bracelets reading A and B. Because the girls are identical, nobody would be able to tell if they had been switched at birth; the labels A and B are arbitrary, and can be interchanged. Such a permanent interchanging of their identities is like a global gauge symmetry. There is also a corresponding local gauge symmetry, which describes the fact that from one moment to the next, Alice and Betty could swap roles while nobody was looking, and nobody would be able to tell. If we observe that Mom's favorite vase is broken, we can only infer that the blame belongs to one twin or the other, but we cannot tell whether the blame is 100% Alice's and 0% Betty's, or vice versa. If Alice and Betty are in fact quantum-mechanical particles rather than people, then they also have wave properties, including the property of superposition, which allows waves to be added, subtracted, and mixed arbitrarily. It follows that we are not even restricted to complete swaps of identity. For example, if we observe that a certain amount of energy exists in a certain location in space, there is no experiment that can tell us whether that energy is 100% A's and 0% B's, 0% A's and 100% B's, or 20% A's and 80% B's, or some other mixture. The fact that the symmetry is local means that we cannot even count on these proportions to remain fixed as the particles propagate through space. 
The details of how this is represented mathematically depend on technical issues relating to the spins of the particles, but for our present purposes we consider a spinless particle, for which it turns out that the mixing can be specified by some arbitrary choice of gauge θ(x), where an angle θ = 0° represents 100% A and 0% B, θ = 90° means 0% A and 100% B, and intermediate angles represent mixtures. According to the principles of quantum mechanics, particles do not actually have trajectories through space. Motion can only be described in terms of waves, and the momentum p of an individual particle is related to its wavelength λ by p = h/λ. In terms of empirical measurements, the wavelength can only be determined by observing a change in the wave between one point in space and another nearby point (mathematically, by differentiation). A wave with a shorter wavelength oscillates more rapidly, and therefore changes more rapidly between nearby points. Now suppose that we arbitrarily fix a gauge at one point in space, by saying that the energy at that location is 20% A's and 80% B's. We then measure the two waves at some other, nearby point, in order to determine their wavelengths. But there are two entirely different reasons that the waves could have changed. They could have changed because they were oscillating with a certain wavelength, or they could have changed because the gauge function changed from a 20–80 mixture to, say, 21–79. If we ignore the second possibility, the resulting theory does not work; strange discrepancies in momentum will show up, violating the principle of conservation of momentum. Something in the theory must be changed. Again there are technical issues relating to spin, but in several important cases, including electrically charged particles and particles interacting via nuclear forces, the solution to the problem is to impute physical reality to the gauge function θ(x). We say that if the function θ oscillates, it represents a new type of quantum-mechanical wave, and this new wave has its own momentum p = h/λ, which turns out to patch up the discrepancies that otherwise would have broken conservation of momentum. In the context of electromagnetism, the particles A and B would be charged particles such as electrons, and the quantum mechanical wave represented by θ would be the electromagnetic field. (Here we ignore the technical issues raised by the fact that electrons actually have spin 1/2, not spin zero. This oversimplification is the reason that the gauge field θ comes out to be a scalar, whereas the electromagnetic field is actually represented by a vector consisting of V and A.) The result is that we have an explanation for the presence of electromagnetic interactions: if we try to construct a gauge-symmetric theory of identical, non-interacting particles, the result is not self-consistent, and can only be repaired by adding electric and magnetic fields that cause the particles to interact. Although the function θ(x) describes a wave, the laws of quantum mechanics require that it also have particle properties. In the case of electromagnetism, the particle corresponding to electromagnetic waves is the photon. In general, such particles are called gauge bosons, where the term "boson" refers to a particle with integer spin. In the simplest versions of the theory gauge bosons are massless, but it is also possible to construct versions in which they have mass. 
This is the case for the gauge bosons that carry the weak interaction: the force responsible for nuclear decay. == References == == Further reading == These books are intended for general readers and employ the barest minimum of mathematics. 't Hooft, Gerard: "Gauge Theories of the Force between Elementary Particles", Scientific American, 242(6):104–138 (June 1980). "Press Release: The 1999 Nobel Prize in Physics". Nobelprize.org. Nobel Media AB 2013. 20 Aug 2013. Schumm, Bruce (2004) Deep Down Things. Johns Hopkins University Press. A serious attempt by a physicist to explain gauge theory and the Standard Model. Feynman, Richard (2006) QED: The Strange Theory of Light and Matter. Princeton University Press. A nontechnical description of quantum field theory (not specifically about gauge theory).
Wikipedia/Introduction_to_gauge_theory
The cross-correlation matrix of two random vectors is a matrix containing as elements the cross-correlations of all pairs of elements of the random vectors. The cross-correlation matrix is used in various digital signal processing algorithms. == Definition == For two random vectors X = ( X 1 , … , X m ) T {\displaystyle \mathbf {X} =(X_{1},\ldots ,X_{m})^{\rm {T}}} and Y = ( Y 1 , … , Y n ) T {\displaystyle \mathbf {Y} =(Y_{1},\ldots ,Y_{n})^{\rm {T}}} , each containing random elements whose expected value and variance exist, the cross-correlation matrix of X {\displaystyle \mathbf {X} } and Y {\displaystyle \mathbf {Y} } is defined by R X Y ≜ E ⁡ [ X Y T ] {\displaystyle \operatorname {R} _{\mathbf {X} \mathbf {Y} }\triangleq \ \operatorname {E} [\mathbf {X} \mathbf {Y} ^{\rm {T}}]} : p.337  and has dimensions m × n {\displaystyle m\times n} . Written component-wise: R X Y = [ E ⁡ [ X 1 Y 1 ] E ⁡ [ X 1 Y 2 ] ⋯ E ⁡ [ X 1 Y n ] E ⁡ [ X 2 Y 1 ] E ⁡ [ X 2 Y 2 ] ⋯ E ⁡ [ X 2 Y n ] ⋮ ⋮ ⋱ ⋮ E ⁡ [ X m Y 1 ] E ⁡ [ X m Y 2 ] ⋯ E ⁡ [ X m Y n ] ] {\displaystyle \operatorname {R} _{\mathbf {X} \mathbf {Y} }={\begin{bmatrix}\operatorname {E} [X_{1}Y_{1}]&\operatorname {E} [X_{1}Y_{2}]&\cdots &\operatorname {E} [X_{1}Y_{n}]\\\\\operatorname {E} [X_{2}Y_{1}]&\operatorname {E} [X_{2}Y_{2}]&\cdots &\operatorname {E} [X_{2}Y_{n}]\\\\\vdots &\vdots &\ddots &\vdots \\\\\operatorname {E} [X_{m}Y_{1}]&\operatorname {E} [X_{m}Y_{2}]&\cdots &\operatorname {E} [X_{m}Y_{n}]\\\\\end{bmatrix}}} The random vectors X {\displaystyle \mathbf {X} } and Y {\displaystyle \mathbf {Y} } need not have the same dimension, and either might be a scalar value. == Example == For example, if X = ( X 1 , X 2 , X 3 ) T {\displaystyle \mathbf {X} =\left(X_{1},X_{2},X_{3}\right)^{\rm {T}}} and Y = ( Y 1 , Y 2 ) T {\displaystyle \mathbf {Y} =\left(Y_{1},Y_{2}\right)^{\rm {T}}} are random vectors, then R X Y {\displaystyle \operatorname {R} _{\mathbf {X} \mathbf {Y} }} is a 3 × 2 {\displaystyle 3\times 2} matrix whose ( i , j ) {\displaystyle (i,j)} -th entry is E ⁡ [ X i Y j ] {\displaystyle \operatorname {E} [X_{i}Y_{j}]} . == Complex random vectors == If Z = ( Z 1 , … , Z m ) T {\displaystyle \mathbf {Z} =(Z_{1},\ldots ,Z_{m})^{\rm {T}}} and W = ( W 1 , … , W n ) T {\displaystyle \mathbf {W} =(W_{1},\ldots ,W_{n})^{\rm {T}}} are complex random vectors, each containing random variables whose expected value and variance exist, the cross-correlation matrix of Z {\displaystyle \mathbf {Z} } and W {\displaystyle \mathbf {W} } is defined by R Z W ≜ E ⁡ [ Z W H ] {\displaystyle \operatorname {R} _{\mathbf {Z} \mathbf {W} }\triangleq \ \operatorname {E} [\mathbf {Z} \mathbf {W} ^{\rm {H}}]} where H {\displaystyle {}^{\rm {H}}} denotes Hermitian transposition. == Uncorrelatedness == Two random vectors X = ( X 1 , … , X m ) T {\displaystyle \mathbf {X} =(X_{1},\ldots ,X_{m})^{\rm {T}}} and Y = ( Y 1 , … , Y n ) T {\displaystyle \mathbf {Y} =(Y_{1},\ldots ,Y_{n})^{\rm {T}}} are called uncorrelated if E ⁡ [ X Y T ] = E ⁡ [ X ] E ⁡ [ Y ] T . {\displaystyle \operatorname {E} [\mathbf {X} \mathbf {Y} ^{\rm {T}}]=\operatorname {E} [\mathbf {X} ]\operatorname {E} [\mathbf {Y} ]^{\rm {T}}.} They are uncorrelated if and only if their cross-covariance matrix K X Y {\displaystyle \operatorname {K} _{\mathbf {X} \mathbf {Y} }} is zero. In the case of two complex random vectors Z {\displaystyle \mathbf {Z} } and W {\displaystyle \mathbf {W} } they are called uncorrelated if E ⁡ [ Z W H ] = E ⁡ [ Z ] E ⁡ [ W ] H {\displaystyle \operatorname {E} [\mathbf {Z} \mathbf {W} ^{\rm {H}}]=\operatorname {E} [\mathbf {Z} ]\operatorname {E} [\mathbf {W} ]^{\rm {H}}} and E ⁡ [ Z W T ] = E ⁡ [ Z ] E ⁡ [ W ] T . 
{\displaystyle \operatorname {E} [\mathbf {Z} \mathbf {W} ^{\rm {T}}]=\operatorname {E} [\mathbf {Z} ]\operatorname {E} [\mathbf {W} ]^{\rm {T}}.} == Properties == === Relation to the cross-covariance matrix === The cross-correlation is related to the cross-covariance matrix as follows: K X Y = E ⁡ [ ( X − E ⁡ [ X ] ) ( Y − E ⁡ [ Y ] ) T ] = R X Y − E ⁡ [ X ] E ⁡ [ Y ] T {\displaystyle \operatorname {K} _{\mathbf {X} \mathbf {Y} }=\operatorname {E} [(\mathbf {X} -\operatorname {E} [\mathbf {X} ])(\mathbf {Y} -\operatorname {E} [\mathbf {Y} ])^{\rm {T}}]=\operatorname {R} _{\mathbf {X} \mathbf {Y} }-\operatorname {E} [\mathbf {X} ]\operatorname {E} [\mathbf {Y} ]^{\rm {T}}} Respectively for complex random vectors: K Z W = E ⁡ [ ( Z − E ⁡ [ Z ] ) ( W − E ⁡ [ W ] ) H ] = R Z W − E ⁡ [ Z ] E ⁡ [ W ] H {\displaystyle \operatorname {K} _{\mathbf {Z} \mathbf {W} }=\operatorname {E} [(\mathbf {Z} -\operatorname {E} [\mathbf {Z} ])(\mathbf {W} -\operatorname {E} [\mathbf {W} ])^{\rm {H}}]=\operatorname {R} _{\mathbf {Z} \mathbf {W} }-\operatorname {E} [\mathbf {Z} ]\operatorname {E} [\mathbf {W} ]^{\rm {H}}} == See also == Autocorrelation Correlation does not imply causation Covariance function Pearson product-moment correlation coefficient Correlation function (astronomy) Correlation function (statistical mechanics) Correlation function (quantum field theory) Mutual information Rate distortion theory Radial distribution function == References == == Further reading == Hayes, Monson H., Statistical Digital Signal Processing and Modeling, John Wiley & Sons, Inc., 1996. ISBN 0-471-59431-8. Solomon W. Golomb, and Guang Gong. Signal design for good correlation: for wireless communication, cryptography, and radar. Cambridge University Press, 2005. M. Soltanalian. Signal Design for Active Sensing and Communications. Uppsala Dissertations from the Faculty of Science and Technology (printed by Elanders Sverige AB), 2014.
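As a concrete illustration of the definitions above, the following Python/NumPy sketch estimates the cross-correlation matrix R_XY = E[X Y^T] of two random vectors from samples and checks the stated relation to the cross-covariance matrix, R_XY = K_XY + E[X]E[Y]^T. The toy distribution, sample size and variable names are illustrative assumptions, not part of the article.

```python
import numpy as np

rng = np.random.default_rng(0)

# Draw N joint samples of X (3-dimensional) and Y (2-dimensional).
N = 100_000
X = rng.normal(size=(N, 3))
Y = 0.5 * X[:, :2] + rng.normal(size=(N, 2))   # correlate Y with part of X

# Sample estimate of the cross-correlation matrix R_XY = E[X Y^T] (3 x 2).
R_XY = X.T @ Y / N

# Sample estimate of the cross-covariance matrix K_XY = E[(X - EX)(Y - EY)^T].
K_XY = (X - X.mean(0)).T @ (Y - Y.mean(0)) / N

# Relation quoted in the article: R_XY = K_XY + E[X] E[Y]^T.
assert np.allclose(R_XY, K_XY + np.outer(X.mean(0), Y.mean(0)))
print(R_XY)
```

For complex random vectors the ordinary transpose would simply be replaced by the conjugate (Hermitian) transpose, mirroring the definition R_ZW = E[Z W^H] above.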
Wikipedia/Correlation_functions
In high-energy physics, nonlinear electrodynamics (NED or NLED) refers to a family of generalizations of Maxwell electrodynamics which describe electromagnetic fields that exhibit nonlinear dynamics. For a theory to describe the electromagnetic field (a U(1) gauge field), its action must be gauge invariant; in the case of U ( 1 ) {\displaystyle U(1)} , for the theory to not have Faddeev-Popov ghosts, this constraint dictates that the Lagrangian of a nonlinear electrodynamics must be a function of only s ≡ − 1 4 F α β F α β {\displaystyle s\equiv -{\frac {1}{4}}F_{\alpha \beta }F^{\alpha \beta }} (the Maxwell Lagrangian) and p ≡ − 1 8 ϵ α β γ δ F α β F γ δ {\displaystyle p\equiv -{\frac {1}{8}}\epsilon ^{\alpha \beta \gamma \delta }F_{\alpha \beta }F_{\gamma \delta }} (where ϵ {\displaystyle \epsilon } is the Levi-Civita tensor). Notable NED models include the Born-Infeld model, the Euler-Heisenberg Lagrangian, and the CP-violating U ( 1 ) {\displaystyle U(1)} Chern-Simons theory L = s + θ p {\displaystyle {\mathcal {L}}=s+\theta p} . == References ==
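As a rough numerical sketch of how such Lagrangians are evaluated, the snippet below computes the two invariants from given E and B fields and feeds them into the Born-Infeld form L = b²(1 − √(1 − 2s/b² − p²/b⁴)). Natural units with c = ε0 = 1 are assumed, in which s = (E² − B²)/2 and p = E·B (the overall sign of p depends on the Levi-Civita convention); the field values and the scale parameter b are arbitrary illustrative numbers, not taken from the article.

```python
import numpy as np

def invariants(E, B):
    """Field invariants in natural units (c = 1): s = (E^2 - B^2)/2 and p = E.B."""
    E, B = np.asarray(E, float), np.asarray(B, float)
    return 0.5 * (E @ E - B @ B), E @ B

def born_infeld_lagrangian(s, p, b=1.0):
    """Born-Infeld Lagrangian density L = b^2 (1 - sqrt(1 - 2s/b^2 - p^2/b^4))."""
    return b**2 * (1.0 - np.sqrt(1.0 - 2.0 * s / b**2 - p**2 / b**4))

# Weak fields: the Born-Infeld Lagrangian reduces to the Maxwell term s.
s, p = invariants(E=[1e-3, 0.0, 0.0], B=[0.0, 5e-4, 0.0])
print(s, p, born_infeld_lagrangian(s, p))   # L is approximately s for |fields| << b
```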
Wikipedia/Nonlinear_electrodynamics
In quantum field theory, the energy that a particle has as a result of changes that it causes in its environment defines its self-energy Σ {\displaystyle \Sigma } . The self-energy represents the contribution to the particle's energy, or effective mass, due to interactions between the particle and its environment. In electrostatics, the energy required to assemble the charge distribution takes the form of self-energy by bringing in the constituent charges from infinity, where the electric force goes to zero. In a condensed matter context, self-energy is used to describe interaction induced renormalization of quasiparticle mass (dispersions) and lifetime. Self-energy is especially used to describe electron-electron interactions in Fermi liquids. Another example of self-energy is found in the context of phonon softening due to electron-phonon coupling. == Characteristics == Mathematically, this energy is equal to the so-called on mass shell value of the proper self-energy operator (or proper mass operator) in the momentum-energy representation (more precisely, to ℏ {\displaystyle \hbar } times this value). In this, or other representations (such as the space-time representation), the self-energy is pictorially (and economically) represented by means of Feynman diagrams, such as the one shown below. In this particular diagram, the three arrowed straight lines represent particles, or particle propagators, and the wavy line a particle-particle interaction; removing (or amputating) the left-most and the right-most straight lines in the diagram shown below (these so-called external lines correspond to prescribed values for, for instance, momentum and energy, or four-momentum), one retains a contribution to the self-energy operator (in, for instance, the momentum-energy representation). Using a small number of simple rules, each Feynman diagram can be readily expressed in its corresponding algebraic form. In general, the on-the-mass-shell value of the self-energy operator in the momentum-energy representation is complex. In such cases, it is the real part of this self-energy that is identified with the physical self-energy (referred to above as particle's "self-energy"); the inverse of the imaginary part is a measure for the lifetime of the particle under investigation. For clarity, elementary excitations, or dressed particles (see quasi-particle), in interacting systems are distinct from stable particles in vacuum; their state functions consist of complicated superpositions of the eigenstates of the underlying many-particle system, which only momentarily, if at all, behave like those specific to isolated particles; the above-mentioned lifetime is the time over which a dressed particle behaves as if it were a single particle with well-defined momentum and energy. The self-energy operator (often denoted by Σ {\displaystyle \Sigma _{}^{}} , and less frequently by M {\displaystyle M_{}^{}} ) is related to the bare and dressed propagators (often denoted by G 0 {\displaystyle G_{0}^{}} and G {\displaystyle G_{}^{}} respectively) via the Dyson equation (named after Freeman Dyson): G = G 0 + G 0 Σ G . {\displaystyle G=G_{0}^{}+G_{0}\Sigma G.} Multiplying on the left by the inverse G 0 − 1 {\displaystyle G_{0}^{-1}} of the operator G 0 {\displaystyle G_{0}} and on the right by G − 1 {\displaystyle G^{-1}} yields Σ = G 0 − 1 − G − 1 . {\displaystyle \Sigma =G_{0}^{-1}-G^{-1}.} The photon and gluon do not get a mass through renormalization because gauge symmetry protects them from getting a mass. 
This is a consequence of the Ward identity. The W-boson and the Z-boson get their masses through the Higgs mechanism; they do undergo mass renormalization through the renormalization of the electroweak theory. Neutral particles with internal quantum numbers can mix with each other through virtual pair production. The primary example of this phenomenon is the mixing of neutral kaons. Under appropriate simplifying assumptions this can be described without quantum field theory. == Other uses == In chemistry, the self-energy or Born energy of an ion is the energy associated with the field of the ion itself. In solid state and condensed-matter physics self-energies and a myriad of related quasiparticle properties are calculated by Green's function methods and Green's function (many-body theory) of interacting low-energy excitations on the basis of electronic band structure calculations. Self-energies also find extensive application in the calculation of particle transport through open quantum systems and the embedding of sub-regions into larger systems (for example the surface of a semi-infinite crystal). == See also == Quantum field theory QED vacuum Renormalization Self-force GW approximation Wheeler–Feynman absorber theory == References == A. L. Fetter, and J. D. Walecka, Quantum Theory of Many-Particle Systems (McGraw-Hill, New York, 1971); (Dover, New York, 2003) J. W. Negele, and H. Orland, Quantum Many-Particle Systems (Westview Press, Boulder, 1998) A. A. Abrikosov, L. P. Gorkov and I. E. Dzyaloshinski (1963): Methods of Quantum Field Theory in Statistical Physics Englewood Cliffs: Prentice-Hall. Alexei M. Tsvelik (2007). Quantum Field Theory in Condensed Matter Physics (2nd ed.). Cambridge University Press. ISBN 978-0-521-52980-8. A. N. Vasil'ev The Field Theoretic Renormalization Group in Critical Behavior Theory and Stochastic Dynamics (Routledge Chapman & Hall 2004); ISBN 0-415-31002-4; ISBN 978-0-415-31002-4 John E. Inglesfield (2015). The Embedding Method for Electronic Structure. IOP Publishing. ISBN 978-0-7503-1042-0.
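As a numerical illustration of the Dyson equation quoted earlier in this article, the sketch below dresses the propagator of a single level ε0 with a simple model self-energy Σ = Δ − iΓ/2 and verifies Σ = G0⁻¹ − G⁻¹; the real part shifts the level while the imaginary part sets the lifetime, as described above. The level, Δ and Γ are arbitrary illustrative numbers, not tied to any particular system.

```python
import numpy as np

# Frequency grid (arbitrary energy units) with a small positive imaginary part.
omega = np.linspace(-5, 5, 2001) + 1e-9j
eps0, delta, gamma = 1.0, -0.3, 0.2          # bare level, level shift, inverse lifetime

G0 = 1.0 / (omega - eps0)                    # bare propagator
sigma = delta - 0.5j * gamma                 # toy (frequency-independent) self-energy
G = 1.0 / (omega - eps0 - sigma)             # dressed propagator, G = G0 + G0 * Sigma * G

# Dyson equation in the form Sigma = G0^{-1} - G^{-1}, checked on the whole grid.
assert np.allclose(1.0 / G0 - 1.0 / G, sigma)

# Spectral function A(w) = -(1/pi) Im G: a Lorentzian of width gamma at eps0 + delta.
A = -G.imag / np.pi
print(omega.real[np.argmax(A)])              # close to eps0 + delta = 0.7
```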
Wikipedia/Electron_self-energy
In electromagnetism, the electromagnetic tensor or electromagnetic field tensor (sometimes called the field strength tensor, Faraday tensor or Maxwell bivector) is a mathematical object that describes the electromagnetic field in spacetime. The field tensor was developed by Arnold Sommerfeld after the four-dimensional tensor formulation of special relativity was introduced by Hermann Minkowski.: 22  The tensor allows related physical laws to be written concisely, and allows for the quantization of the electromagnetic field by the Lagrangian formulation described below. == Definition == The electromagnetic tensor, conventionally labelled F, is defined as the exterior derivative of the electromagnetic four-potential, A, a differential 1-form: F = d e f d A . {\displaystyle F\ {\stackrel {\mathrm {def} }{=}}\ \mathrm {d} A.} Therefore, F is a differential 2-form— an antisymmetric rank-2 tensor field—on Minkowski space. In component form, F μ ν = ∂ μ A ν − ∂ ν A μ . {\displaystyle F_{\mu \nu }=\partial _{\mu }A_{\nu }-\partial _{\nu }A_{\mu }.} where ∂ {\displaystyle \partial } is the four-gradient and A {\displaystyle A} is the four-potential. SI units for Maxwell's equations and the particle physicist's sign convention for the signature of Minkowski space (+ − − −), will be used throughout this article. === Relationship with the classical fields === The Faraday differential 2-form is given by F = ( E x / c ) d x ∧ d t + ( E y / c ) d y ∧ d t + ( E z / c ) d z ∧ d t + B x d y ∧ d z + B y d z ∧ d x + B z d x ∧ d y , {\displaystyle F=(E_{x}/c)\ dx\wedge dt+(E_{y}/c)\ dy\wedge dt+(E_{z}/c)\ dz\wedge dt+B_{x}\ dy\wedge dz+B_{y}\ dz\wedge dx+B_{z}\ dx\wedge dy,} where d t {\displaystyle dt} is the time element times the speed of light c {\displaystyle c} . This is the exterior derivative of its 1-form antiderivative A = A x d x + A y d y + A z d z − ( ϕ / c ) d t {\displaystyle A=A_{x}\ dx+A_{y}\ dy+A_{z}\ dz-(\phi /c)\ dt} , where ϕ ( x → , t ) {\displaystyle \phi ({\vec {x}},t)} has − ∇ → ϕ = E → {\displaystyle -{\vec {\nabla }}\phi ={\vec {E}}} ( ϕ {\displaystyle \phi } is a scalar potential for the irrotational/conservative vector field E → {\displaystyle {\vec {E}}} ) and A → ( x → , t ) {\displaystyle {\vec {A}}({\vec {x}},t)} has ∇ → × A → = B → {\displaystyle {\vec {\nabla }}\times {\vec {A}}={\vec {B}}} ( A → {\displaystyle {\vec {A}}} is a vector potential for the solenoidal vector field B → {\displaystyle {\vec {B}}} ). Note that { d F = 0 ⋆ d ⋆ F = J {\displaystyle {\begin{cases}dF=0\\{\star }d{\star }F=J\end{cases}}} where d {\displaystyle d} is the exterior derivative, ⋆ {\displaystyle {\star }} is the Hodge star, J = − J x d x − J y d y − J z d z + ρ d t {\displaystyle J=-J_{x}\ dx-J_{y}\ dy-J_{z}\ dz+\rho \ dt} (where J → {\displaystyle {\vec {J}}} is the electric current density, and ρ {\displaystyle \rho } is the electric charge density) is the 4-current density 1-form, is the differential forms version of Maxwell's equations. The electric and magnetic fields can be obtained from the components of the electromagnetic tensor. The relationship is simplest in Cartesian coordinates: E i = c F 0 i , {\displaystyle E_{i}=cF_{0i},} where c is the speed of light, and B i = − 1 / 2 ϵ i j k F j k , {\displaystyle B_{i}=-1/2\epsilon _{ijk}F^{jk},} where ϵ i j k {\displaystyle \epsilon _{ijk}} is the Levi-Civita tensor. 
This gives the fields in a particular reference frame; if the reference frame is changed, the components of the electromagnetic tensor will transform covariantly, and the fields in the new frame will be given by the new components. In contravariant matrix form with metric signature (+,-,-,-), F μ ν = [ 0 − E x / c − E y / c − E z / c E x / c 0 − B z B y E y / c B z 0 − B x E z / c − B y B x 0 ] . {\displaystyle F^{\mu \nu }={\begin{bmatrix}0&-E_{x}/c&-E_{y}/c&-E_{z}/c\\E_{x}/c&0&-B_{z}&B_{y}\\E_{y}/c&B_{z}&0&-B_{x}\\E_{z}/c&-B_{y}&B_{x}&0\end{bmatrix}}.} The covariant form is given by index lowering, F μ ν = η α ν F β α η μ β = [ 0 E x / c E y / c E z / c − E x / c 0 − B z B y − E y / c B z 0 − B x − E z / c − B y B x 0 ] . {\displaystyle F_{\mu \nu }=\eta _{\alpha \nu }F^{\beta \alpha }\eta _{\mu \beta }={\begin{bmatrix}0&E_{x}/c&E_{y}/c&E_{z}/c\\-E_{x}/c&0&-B_{z}&B_{y}\\-E_{y}/c&B_{z}&0&-B_{x}\\-E_{z}/c&-B_{y}&B_{x}&0\end{bmatrix}}.} The Faraday tensor's Hodge dual is G α β = 1 2 ϵ α β γ δ F γ δ = [ 0 − B x − B y − B z B x 0 E z / c − E y / c B y − E z / c 0 E x / c B z E y / c − E x / c 0 ] {\displaystyle {G^{\alpha \beta }={\frac {1}{2}}\epsilon ^{\alpha \beta \gamma \delta }F_{\gamma \delta }={\begin{bmatrix}0&-B_{x}&-B_{y}&-B_{z}\\B_{x}&0&E_{z}/c&-E_{y}/c\\B_{y}&-E_{z}/c&0&E_{x}/c\\B_{z}&E_{y}/c&-E_{x}/c&0\end{bmatrix}}}} From now on in this article, when the electric or magnetic fields are mentioned, a Cartesian coordinate system is assumed, and the electric and magnetic fields are with respect to the coordinate system's reference frame, as in the equations above. === Properties === The matrix form of the field tensor yields the following properties: Antisymmetry: F μ ν = − F ν μ {\displaystyle F^{\mu \nu }=-F^{\nu \mu }} Six independent components: In Cartesian coordinates, these are simply the three spatial components of the electric field (Ex, Ey, Ez) and magnetic field (Bx, By, Bz). Inner product: If one forms an inner product of the field strength tensor a Lorentz invariant is formed F μ ν F μ ν = 2 ( B 2 − E 2 c 2 ) {\displaystyle F_{\mu \nu }F^{\mu \nu }=2\left(B^{2}-{\frac {E^{2}}{c^{2}}}\right)} meaning this number does not change from one frame of reference to another. Pseudoscalar invariant: The product of the tensor F μ ν {\displaystyle F^{\mu \nu }} with its Hodge dual G μ ν {\displaystyle G^{\mu \nu }} gives a Lorentz invariant: G γ δ F γ δ = 1 2 ϵ α β γ δ F α β F γ δ = − 4 c B ⋅ E {\displaystyle G_{\gamma \delta }F^{\gamma \delta }={\frac {1}{2}}\epsilon _{\alpha \beta \gamma \delta }F^{\alpha \beta }F^{\gamma \delta }=-{\frac {4}{c}}\mathbf {B} \cdot \mathbf {E} \,} where ϵ α β γ δ {\displaystyle \epsilon _{\alpha \beta \gamma \delta }} is the rank-4 Levi-Civita symbol. The sign for the above depends on the convention used for the Levi-Civita symbol. The convention used here is ϵ 0123 = − 1 {\displaystyle \epsilon _{0123}=-1} . Determinant: det ( F ) = 1 c 2 ( B ⋅ E ) 2 {\displaystyle \det \left(F\right)={\frac {1}{c^{2}}}\left(\mathbf {B} \cdot \mathbf {E} \right)^{2}} which is proportional to the square of the above invariant. Trace: F = F μ μ = 0 {\displaystyle F={{F}^{\mu }}_{\mu }=0} which is equal to zero. === Significance === This tensor simplifies and reduces Maxwell's equations as four vector calculus equations into two tensor field equations. 
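The matrix form and the invariants listed above are easy to verify numerically. The following sketch builds F^{μν} from arbitrary E and B values in SI units in the (+ − − −) convention used above, lowers the indices with η = diag(+1, −1, −1, −1), and checks the antisymmetry, the vanishing trace and F_{μν}F^{μν} = 2(B² − E²/c²); the sample field values are arbitrary illustrative numbers.

```python
import numpy as np

c = 299_792_458.0
E = np.array([1.0, 2.0, 3.0])        # electric field components (V/m), arbitrary
B = np.array([4.0, 5.0, 6.0])        # magnetic field components (T), arbitrary

Ex, Ey, Ez = E / c
Bx, By, Bz = B

# Contravariant field tensor F^{mu nu} in the (+,-,-,-) convention used above.
F_up = np.array([
    [0.0, -Ex, -Ey, -Ez],
    [ Ex, 0.0, -Bz,  By],
    [ Ey,  Bz, 0.0, -Bx],
    [ Ez, -By,  Bx, 0.0],
])

eta = np.diag([1.0, -1.0, -1.0, -1.0])
F_down = eta @ F_up @ eta            # lower both indices to get F_{mu nu}

assert np.allclose(F_up, -F_up.T)                           # antisymmetry
assert np.isclose(np.trace(F_up), 0.0)                      # vanishing trace
invariant = np.sum(F_down * F_up)                           # F_{mu nu} F^{mu nu}
assert np.isclose(invariant, 2.0 * (B @ B - E @ E / c**2))  # = 2(B^2 - E^2/c^2)
print(invariant)
```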
In electrostatics and electrodynamics, Gauss's law and Ampère's circuital law are respectively: ∇ ⋅ E = ρ ϵ 0 , ∇ × B − 1 c 2 ∂ E ∂ t = μ 0 J {\displaystyle \nabla \cdot \mathbf {E} ={\frac {\rho }{\epsilon _{0}}},\quad \nabla \times \mathbf {B} -{\frac {1}{c^{2}}}{\frac {\partial \mathbf {E} }{\partial t}}=\mu _{0}\mathbf {J} } and reduce to the inhomogeneous Maxwell equation: ∂ α F β α = − μ 0 J β {\displaystyle \partial _{\alpha }F^{\beta \alpha }=-\mu _{0}J^{\beta }} , where J α = ( c ρ , J ) {\displaystyle J^{\alpha }=(c\rho ,\mathbf {J} )} is the four-current. In magnetostatics and magnetodynamics, Gauss's law for magnetism and Maxwell–Faraday equation are respectively: ∇ ⋅ B = 0 , ∂ B ∂ t + ∇ × E = 0 {\displaystyle \nabla \cdot \mathbf {B} =0,\quad {\frac {\partial \mathbf {B} }{\partial t}}+\nabla \times \mathbf {E} =\mathbf {0} } which reduce to the Bianchi identity: ∂ γ F α β + ∂ α F β γ + ∂ β F γ α = 0 {\displaystyle \partial _{\gamma }F_{\alpha \beta }+\partial _{\alpha }F_{\beta \gamma }+\partial _{\beta }F_{\gamma \alpha }=0} or using the index notation with square brackets[note 1] for the antisymmetric part of the tensor: ∂ [ α F β γ ] = 0 {\displaystyle \partial _{[\alpha }F_{\beta \gamma ]}=0} Using the expression relating the Faraday tensor to the four-potential, one can prove that the above antisymmetric quantity turns to zero identically ( ≡ 0 {\displaystyle \equiv 0} ). This tensor equation reproduces the homogeneous Maxwell's equations. == Relativity == The field tensor derives its name from the fact that the electromagnetic field is found to obey the tensor transformation law, this general property of physical laws being recognised after the advent of special relativity. This theory stipulated that all the laws of physics should take the same form in all coordinate systems – this led to the introduction of tensors. The tensor formalism also leads to a mathematically simpler presentation of physical laws. The inhomogeneous Maxwell equation leads to the continuity equation: ∂ α J α = J α , α = 0 {\displaystyle \partial _{\alpha }J^{\alpha }=J^{\alpha }{}_{,\alpha }=0} implying conservation of charge. Maxwell's laws above can be generalised to curved spacetime by simply replacing partial derivatives with covariant derivatives: F [ α β ; γ ] = 0 {\displaystyle F_{[\alpha \beta ;\gamma ]}=0} and F α β ; α = μ 0 J β {\displaystyle F^{\alpha \beta }{}_{;\alpha }=\mu _{0}J^{\beta }} where the semicolon notation represents a covariant derivative, as opposed to a partial derivative. These equations are sometimes referred to as the curved space Maxwell equations. Again, the second equation implies charge conservation (in curved spacetime): J α ; α = 0 {\displaystyle J^{\alpha }{}_{;\alpha }\,=0} The stress-energy tensor of electromagnetism T μ ν = 1 μ 0 [ F μ α F ν α − 1 4 η μ ν F α β F α β ] , {\displaystyle T^{\mu \nu }={\frac {1}{\mu _{0}}}\left[F^{\mu \alpha }F^{\nu }{}_{\alpha }-{\frac {1}{4}}\eta ^{\mu \nu }F_{\alpha \beta }F^{\alpha \beta }\right]\,,} satisfies T α β , β + F α β J β = 0 . 
{\displaystyle {T^{\alpha \beta }}_{,\beta }+F^{\alpha \beta }J_{\beta }=0\,.} == Lagrangian formulation of classical electromagnetism == Classical electromagnetism and Maxwell's equations can be derived from the action: S = ∫ ( − 1 4 μ 0 F μ ν F μ ν − J μ A μ ) d 4 x {\displaystyle {\mathcal {S}}=\int \left(-{\begin{matrix}{\frac {1}{4\mu _{0}}}\end{matrix}}F_{\mu \nu }F^{\mu \nu }-J^{\mu }A_{\mu }\right)\mathrm {d} ^{4}x\,} where d 4 x {\displaystyle \mathrm {d} ^{4}x} is over space and time. This means the Lagrangian density is L = − 1 4 μ 0 F μ ν F μ ν − J μ A μ = − 1 4 μ 0 ( ∂ μ A ν − ∂ ν A μ ) ( ∂ μ A ν − ∂ ν A μ ) − J μ A μ = − 1 4 μ 0 ( ∂ μ A ν ∂ μ A ν − ∂ ν A μ ∂ μ A ν − ∂ μ A ν ∂ ν A μ + ∂ ν A μ ∂ ν A μ ) − J μ A μ {\displaystyle {\begin{aligned}{\mathcal {L}}&=-{\frac {1}{4\mu _{0}}}F_{\mu \nu }F^{\mu \nu }-J^{\mu }A_{\mu }\\&=-{\frac {1}{4\mu _{0}}}\left(\partial _{\mu }A_{\nu }-\partial _{\nu }A_{\mu }\right)\left(\partial ^{\mu }A^{\nu }-\partial ^{\nu }A^{\mu }\right)-J^{\mu }A_{\mu }\\&=-{\frac {1}{4\mu _{0}}}\left(\partial _{\mu }A_{\nu }\partial ^{\mu }A^{\nu }-\partial _{\nu }A_{\mu }\partial ^{\mu }A^{\nu }-\partial _{\mu }A_{\nu }\partial ^{\nu }A^{\mu }+\partial _{\nu }A_{\mu }\partial ^{\nu }A^{\mu }\right)-J^{\mu }A_{\mu }\\\end{aligned}}} The two middle terms in the parentheses are the same, as are the two outer terms, so the Lagrangian density is L = − 1 2 μ 0 ( ∂ μ A ν ∂ μ A ν − ∂ ν A μ ∂ μ A ν ) − J μ A μ . {\displaystyle {\mathcal {L}}=-{\frac {1}{2\mu _{0}}}\left(\partial _{\mu }A_{\nu }\partial ^{\mu }A^{\nu }-\partial _{\nu }A_{\mu }\partial ^{\mu }A^{\nu }\right)-J^{\mu }A_{\mu }.} Substituting this into the Euler–Lagrange equation of motion for a field: ∂ μ ( ∂ L ∂ ( ∂ μ A ν ) ) − ∂ L ∂ A ν = 0 {\displaystyle \partial _{\mu }\left({\frac {\partial {\mathcal {L}}}{\partial (\partial _{\mu }A_{\nu })}}\right)-{\frac {\partial {\mathcal {L}}}{\partial A_{\nu }}}=0} So the Euler–Lagrange equation becomes: − ∂ μ 1 μ 0 ( ∂ μ A ν − ∂ ν A μ ) + J ν = 0. {\displaystyle -\partial _{\mu }{\frac {1}{\mu _{0}}}\left(\partial ^{\mu }A^{\nu }-\partial ^{\nu }A^{\mu }\right)+J^{\nu }=0.\,} The quantity in parentheses above is just the field tensor, so this finally simplifies to ∂ μ F μ ν = μ 0 J ν {\displaystyle \partial _{\mu }F^{\mu \nu }=\mu _{0}J^{\nu }} That equation is another way of writing the two inhomogeneous Maxwell's equations (namely, Gauss's law and Ampère's circuital law) using the substitutions: 1 c E i = − F 0 i ϵ i j k B k = − F i j {\displaystyle {\begin{aligned}{\frac {1}{c}}E^{i}&=-F^{0i}\\\epsilon ^{ijk}B_{k}&=-F^{ij}\end{aligned}}} where i, j, k take the values 1, 2, and 3. === Hamiltonian form === The Hamiltonian density can be obtained with the usual relation, H ( ϕ i , π i ) = π i ϕ ˙ i ( ϕ i , π i ) − L . {\displaystyle {\mathcal {H}}(\phi ^{i},\pi _{i})=\pi _{i}{\dot {\phi }}^{i}(\phi ^{i},\pi _{i})-{\mathcal {L}}\,.} Here ϕ i = A i {\displaystyle \phi ^{i}=A^{i}} are the fields and the momentum density of the EM field is π i = T 0 i = 1 μ 0 F 0 α F i α = 1 μ 0 c E × B . {\displaystyle \pi _{i}=T_{0i}={\frac {1}{\mu _{0}}}F_{0}{}^{\alpha }F_{i\alpha }={\frac {1}{\mu _{0}c}}\mathbf {E} \times \mathbf {B} \,.} such that the conserved quantity associated with translation from Noether's theorem is the total momentum P = ∑ α m α x ˙ α + 1 μ 0 c ∫ V d 3 x E × B . 
{\displaystyle \mathbf {P} =\sum _{\alpha }m_{\alpha }{\dot {\mathbf {x} }}_{\alpha }+{\frac {1}{\mu _{0}c}}\int _{\mathcal {V}}\mathrm {d} ^{3}x\,\mathbf {E} \times \mathbf {B} \,.} The Hamiltonian density for the electromagnetic field is related to the electromagnetic stress-energy tensor T μ ν = 1 μ 0 [ F μ α F ν α − 1 4 η μ ν F α β F α β ] . {\displaystyle T^{\mu \nu }={\frac {1}{\mu _{0}}}\left[F^{\mu \alpha }F^{\nu }{}_{\alpha }-{\frac {1}{4}}\eta ^{\mu \nu }F_{\alpha \beta }F^{\alpha \beta }\right]\,.} as H = T 00 = 1 2 ( ϵ 0 E 2 + 1 μ 0 B 2 ) = 1 8 π ( E 2 + B 2 ) . {\displaystyle {\mathcal {H}}=T_{00}={\frac {1}{2}}\left(\epsilon _{0}\mathbf {E} ^{2}+{\frac {1}{\mu _{0}}}\mathbf {B} ^{2}\right)={\frac {1}{8\pi }}\left(\mathbf {E} ^{2}+\mathbf {B} ^{2}\right)\,.} where we have neglected the energy density of matter, assuming only the EM field, and the last equality assumes the CGS system. The momentum of nonrelativistic charges interarcting with the EM field in the Coulomb gauge ( ∇ ⋅ A = ∇ i A i = 0 {\displaystyle \nabla \cdot \mathbf {A} =\nabla _{i}A^{i}=0} ) is p α = m α x ˙ α + q α c A ( x α ) . {\displaystyle \mathbf {p} _{\alpha }=m_{\alpha }{\dot {\mathbf {x} }}_{\alpha }+{\frac {q_{\alpha }}{c}}\mathbf {A} (\mathbf {x} _{\alpha })\,.} The total Hamiltonian of the matter + EM field system is H = ∫ V d 3 x T 00 = H m a t + H e m . {\displaystyle H=\int _{\mathcal {V}}d^{3}x\,T_{00}=H_{\rm {mat}}+H_{\rm {em}}\,.} where for nonrelativistic point particles in the Coulomb gauge H m a t = ∑ α m α | x ˙ α | 2 + ∑ α < β q α q β | x α − x β | = ∑ α 1 2 m α [ p α − q α c A ( x α ) ] 2 + ∑ α < β q α q β | x α − x β | . {\displaystyle H_{\rm {mat}}=\sum _{\alpha }m_{\alpha }|{\dot {\mathbf {x} }}_{\alpha }|^{2}+\sum _{\alpha <\beta }{\frac {q_{\alpha }q_{\beta }}{|\mathbf {x} _{\alpha }-\mathbf {x} _{\beta }|}}=\sum _{\alpha }{\frac {1}{2m_{\alpha }}}\left[\mathbf {p} _{\alpha }-{\frac {q_{\alpha }}{c}}\mathbf {A} (\mathbf {x} _{\alpha })\right]^{2}+\sum _{\alpha <\beta }{\frac {q_{\alpha }q_{\beta }}{|\mathbf {x} _{\alpha }-\mathbf {x} _{\beta }|}}\,.} where the last term is identically 1 8 π ∫ V d 3 x E ∥ 2 {\displaystyle {\frac {1}{8\pi }}\int _{\mathcal {V}}d^{3}x\mathbf {E} _{\parallel }^{2}} where E ∥ i = ∇ i A 0 {\displaystyle {E}_{\parallel i}={\nabla _{i}}A_{0}} and H e m = 1 8 π ∫ V d 3 x ( E ⊥ 2 + B 2 ) . {\displaystyle H_{\rm {em}}={\frac {1}{8\pi }}\int _{\mathcal {V}}d^{3}x\left(\mathbf {E} _{\perp }^{2}+\mathbf {B} ^{2}\right)\,.} where and E ⊥ i = − 1 c ∂ 0 A i {\displaystyle {E}_{\perp i}=-{\frac {1}{c}}\partial _{0}A_{i}} . === Quantum electrodynamics and field theory === The Lagrangian of quantum electrodynamics extends beyond the classical Lagrangian established in relativity to incorporate the creation and annihilation of photons (and electrons): L = ψ ¯ ( i ℏ c γ α D α − m c 2 ) ψ − 1 4 μ 0 F α β F α β , {\displaystyle {\mathcal {L}}={\bar {\psi }}\left(i\hbar c\,\gamma ^{\alpha }D_{\alpha }-mc^{2}\right)\psi -{\frac {1}{4\mu _{0}}}F_{\alpha \beta }F^{\alpha \beta },} where the first part in the right hand side, containing the Dirac spinor ψ {\displaystyle \psi } , represents the Dirac field. In quantum field theory it is used as the template for the gauge field strength tensor. By being employed in addition to the local interaction Lagrangian it reprises its usual role in QED. 
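For readers who prefer SI units throughout, the field quantities above can be written as an energy density u = (ε0E² + B²/μ0)/2 and a field momentum density g = ε0 E×B = S/c², with S the Poynting vector; the 1/8π (E² + B²) form quoted last is the same energy density in Gaussian units. The short sketch below is only a unit-keeping illustration with arbitrary field values, not part of the article's derivation.

```python
import numpy as np

eps0 = 8.8541878128e-12       # vacuum permittivity (F/m)
mu0  = 1.25663706212e-6       # vacuum permeability (H/m)
c    = 1.0 / np.sqrt(eps0 * mu0)

E = np.array([10.0, 0.0, 0.0])     # arbitrary electric field (V/m)
B = np.array([0.0, 1e-7, 0.0])     # arbitrary magnetic field (T)

# Energy density u = T^00 = (eps0 E^2 + B^2/mu0)/2 in SI units.
u = 0.5 * (eps0 * E @ E + (B @ B) / mu0)

# Field momentum density g = eps0 E x B = S / c^2, with S the Poynting vector.
S = np.cross(E, B) / mu0
g = eps0 * np.cross(E, B)
assert np.allclose(g, S / c**2)
print(u, g)
```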
== See also == Classification of electromagnetic fields Covariant formulation of classical electromagnetism Electromagnetic stress–energy tensor Gluon field strength tensor Ricci calculus Riemann–Silberstein vector == Notes == == References == Brau, Charles A. (2004). Modern Problems in Classical Electrodynamics. Oxford University Press. ISBN 0-19-514665-4. Jackson, John D. (1999). Classical Electrodynamics. John Wiley & Sons, Inc. ISBN 0-471-30932-X. Peskin, Michael E.; Schroeder, Daniel V. (1995). An Introduction to Quantum Field Theory. Perseus Publishing. ISBN 0-201-50397-2.
Wikipedia/Electromagnetic_field_tensor
In mathematical physics, the Dirac equation in curved spacetime is a generalization of the Dirac equation from flat spacetime (Minkowski space) to curved spacetime, a general Lorentzian manifold. == Mathematical formulation == === Spacetime === In full generality the equation can be defined on M {\displaystyle M} or ( M , g ) {\displaystyle (M,\mathbf {g} )} a pseudo-Riemannian manifold, but for concreteness we restrict to pseudo-Riemannian manifold with signature ( − + + + ) {\displaystyle (-+++)} . The metric is referred to as g {\displaystyle \mathbf {g} } , or g a b {\displaystyle g_{ab}} in abstract index notation. === Frame fields === We use a set of vierbein or frame fields { e μ } = { e 0 , e 1 , e 2 , e 3 } {\displaystyle \{e_{\mu }\}=\{e_{0},e_{1},e_{2},e_{3}\}} , which are a set of vector fields (which are not necessarily defined globally on M {\displaystyle M} ). Their defining equation is g a b e μ a e ν b = η μ ν . {\displaystyle g_{ab}e_{\mu }^{a}e_{\nu }^{b}=\eta _{\mu \nu }.} The vierbein defines a local rest frame, allowing the constant Gamma matrices to act at each spacetime point. In differential-geometric language, the vierbein is equivalent to a section of the frame bundle, and so defines a local trivialization of the frame bundle. === Spin connection === To write down the equation we also need the spin connection, also known as the connection (1-)form. The dual frame fields { e μ } {\displaystyle \{e^{\mu }\}} have defining relation e a μ e ν a = δ μ ν . {\displaystyle e_{a}^{\mu }e_{\nu }^{a}=\delta ^{\mu }{}_{\nu }.} The connection 1-form is then ω μ ν a := e b μ ∇ a e ν b {\displaystyle \omega ^{\mu }{}_{\nu a}:=e_{b}^{\mu }\nabla _{a}e_{\nu }^{b}} where ∇ a {\displaystyle \nabla _{a}} is a covariant derivative, or equivalently a choice of connection on the frame bundle, most often taken to be the Levi-Civita connection. One should be careful not to treat the abstract Latin indices and Greek indices as the same, and further to note that neither of these are coordinate indices: it can be verified that ω μ ν a {\displaystyle \omega ^{\mu }{}_{\nu a}} doesn't transform as a tensor under a change of coordinates. Mathematically, the frame fields { e μ } {\displaystyle \{e_{\mu }\}} define an isomorphism at each point p {\displaystyle p} where they are defined from the tangent space T p M {\displaystyle T_{p}M} to R 1 , 3 {\displaystyle \mathbb {R} ^{1,3}} . Then abstract indices label the tangent space, while greek indices label R 1 , 3 {\displaystyle \mathbb {R} ^{1,3}} . If the frame fields are position dependent then greek indices do not necessarily transform tensorially under a change of coordinates. Raising and lowering indices is done with g a b {\displaystyle g_{ab}} for latin indices and η μ ν {\displaystyle \eta _{\mu \nu }} for greek indices. The connection form can be viewed as a more abstract connection on a principal bundle, specifically on the frame bundle, which is defined on any smooth manifold, but which restricts to an orthonormal frame bundle on pseudo-Riemannian manifolds. The connection form with respect to frame fields { e μ } {\displaystyle \{e_{\mu }\}} defined locally is, in differential-geometric language, the connection with respect to a local trivialization. 
=== Clifford algebra === Just as with the Dirac equation on flat spacetime, we make use of the Clifford algebra, a set of four gamma matrices { γ μ } {\displaystyle \{\gamma ^{\mu }\}} satisfying { γ μ , γ ν } = 2 η μ ν {\displaystyle \{\gamma ^{\mu },\gamma ^{\nu }\}=2\eta ^{\mu \nu }} where { ⋅ , ⋅ } {\displaystyle \{\cdot ,\cdot \}} is the anticommutator. They can be used to construct a representation of the Lorentz algebra: defining σ μ ν = − i 4 [ γ μ , γ ν ] = − i 2 γ μ γ ν + i 2 η μ ν {\displaystyle \sigma ^{\mu \nu }=-{\frac {i}{4}}[\gamma ^{\mu },\gamma ^{\nu }]=-{\frac {i}{2}}\gamma ^{\mu }\gamma ^{\nu }+{\frac {i}{2}}\eta ^{\mu \nu }} , where [ ⋅ , ⋅ ] {\displaystyle [\cdot ,\cdot ]} is the commutator. It can be shown they satisfy the commutation relations of the Lorentz algebra: [ σ μ ν , σ ρ σ ] = ( − i ) ( σ μ σ η ν ρ − σ ν σ η μ ρ + σ ν ρ η μ σ − σ μ ρ η ν σ ) {\displaystyle [\sigma ^{\mu \nu },\sigma ^{\rho \sigma }]=(-i)(\sigma ^{\mu \sigma }\eta ^{\nu \rho }-\sigma ^{\nu \sigma }\eta ^{\mu \rho }+\sigma ^{\nu \rho }\eta ^{\mu \sigma }-\sigma ^{\mu \rho }\eta ^{\nu \sigma })} They therefore are the generators of a representation of the Lorentz algebra s o ( 1 , 3 ) {\displaystyle {\mathfrak {so}}(1,3)} . But they do not generate a representation of the Lorentz group SO ( 1 , 3 ) {\displaystyle {\text{SO}}(1,3)} , just as the Pauli matrices generate a representation of the rotation algebra s o ( 3 ) {\displaystyle {\mathfrak {so}}(3)} but not SO ( 3 ) {\displaystyle {\text{SO}}(3)} . They in fact form a representation of Spin ( 1 , 3 ) . {\displaystyle {\text{Spin}}(1,3).} However, it is a standard abuse of terminology to any representations of the Lorentz algebra as representations of the Lorentz group, even if they do not arise as representations of the Lorentz group. The representation space is isomorphic to C 4 {\displaystyle \mathbb {C} ^{4}} as a vector space. In the classification of Lorentz group representations, the representation is labelled ( 1 2 , 0 ) ⊕ ( 0 , 1 2 ) {\displaystyle \left({\frac {1}{2}},0\right)\oplus \left(0,{\frac {1}{2}}\right)} . The abuse of terminology extends to forming this representation at the group level. We can write a finite Lorentz transformation on R 1 , 3 {\displaystyle \mathbb {R} ^{1,3}} as Λ σ ρ = exp ⁡ ( i 2 α μ ν M μ ν ) σ ρ {\displaystyle \Lambda _{\sigma }^{\rho }=\exp \left({\frac {i}{2}}\alpha _{\mu \nu }M^{\mu \nu }\right){}_{\sigma }^{\rho }} where M μ ν {\displaystyle M^{\mu \nu }} is the standard basis for the Lorentz algebra. These generators have components ( M μ ν ) σ ρ = η μ ρ δ σ ν − η ν ρ δ σ μ {\displaystyle (M^{\mu \nu })_{\sigma }^{\rho }=\eta ^{\mu \rho }\delta _{\sigma }^{\nu }-\eta ^{\nu \rho }\delta _{\sigma }^{\mu }} or, with both indices up or both indices down, simply matrices which have + 1 {\displaystyle +1} in the μ , ν {\displaystyle \mu ,\nu } index and − 1 {\displaystyle -1} in the ν , μ {\displaystyle \nu ,\mu } index, and 0 everywhere else. If another representation ρ {\displaystyle \rho } has generators T μ ν = ρ ( M μ ν ) , {\displaystyle T^{\mu \nu }=\rho (M^{\mu \nu }),} then we write ρ ( Λ ) j i = exp ⁡ ( i 2 α μ ν T μ ν ) j i {\displaystyle \rho (\Lambda )_{j}^{i}=\exp \left({\frac {i}{2}}\alpha _{\mu \nu }T^{\mu \nu }\right){}_{j}^{i}} where i , j {\displaystyle i,j} are indices for the representation space. 
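The algebraic statements above can be verified with an explicit matrix representation. The sketch below uses the Dirac representation of the gamma matrices with the flat metric η = diag(+1, −1, −1, −1) (an illustrative convention choice, independent of the (−+++) signature used for the curved metric earlier) and checks the Clifford relation {γ^μ, γ^ν} = 2η^{μν} together with the antisymmetry of σ^{μν} = −(i/4)[γ^μ, γ^ν].

```python
import numpy as np

I2 = np.eye(2, dtype=complex)
Z2 = np.zeros((2, 2), dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

# Dirac representation: gamma^0 = diag(I, -I), gamma^i = [[0, sigma_i], [-sigma_i, 0]].
gamma = [np.block([[I2, Z2], [Z2, -I2]])] + \
        [np.block([[Z2, s], [-s, Z2]]) for s in (sx, sy, sz)]
eta = np.diag([1.0, -1.0, -1.0, -1.0])

# Clifford algebra: {gamma^mu, gamma^nu} = 2 eta^{mu nu} * identity.
for mu in range(4):
    for nu in range(4):
        anticom = gamma[mu] @ gamma[nu] + gamma[nu] @ gamma[mu]
        assert np.allclose(anticom, 2.0 * eta[mu, nu] * np.eye(4))

# Generators sigma^{mu nu} = -(i/4)[gamma^mu, gamma^nu] are antisymmetric in (mu, nu).
sigma = [[-0.25j * (gamma[m] @ gamma[n] - gamma[n] @ gamma[m]) for n in range(4)]
         for m in range(4)]
assert all(np.allclose(sigma[m][n], -sigma[n][m]) for m in range(4) for n in range(4))
print("Clifford algebra and antisymmetry of sigma verified")
```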
In the case T μ ν = σ μ ν {\displaystyle T^{\mu \nu }=\sigma ^{\mu \nu }} , without being given generator components α μ ν {\displaystyle \alpha _{\mu \nu }} for Λ σ ρ {\displaystyle \Lambda _{\sigma }^{\rho }} , this ρ ( Λ ) {\displaystyle \rho (\Lambda )} is not well defined: there are sets of generator components α μ ν , β μ ν {\displaystyle \alpha _{\mu \nu },\beta _{\mu \nu }} which give the same Λ σ ρ {\displaystyle \Lambda _{\sigma }^{\rho }} but different ρ ( Λ ) j i . {\displaystyle \rho (\Lambda )_{j}^{i}.} === Covariant derivative for fields in a representation of the Lorentz group === Given a coordinate frame ∂ α {\displaystyle {\partial _{\alpha }}} arising from say coordinates { x α } {\displaystyle \{x^{\alpha }\}} , the partial derivative with respect to a general orthonormal frame { e μ } {\displaystyle \{e_{\mu }\}} is defined ∂ μ ψ = e μ α ∂ α ψ , {\displaystyle \partial _{\mu }\psi =e_{\mu }^{\alpha }\partial _{\alpha }\psi ,} and connection components with respect to a general orthonormal frame are ω μ ν ρ = e ρ α ω μ ν α . {\displaystyle \omega ^{\mu }{}_{\nu \rho }=e_{\rho }^{\alpha }\omega ^{\mu }{}_{\nu \alpha }.} These components do not transform tensorially under a change of frame, but do when combined. Also, these are definitions rather than saying that these objects can arise as partial derivatives in some coordinate chart. In general there are non-coordinate orthonormal frames, for which the commutator of vector fields is non-vanishing. It can be checked that under the transformation ψ ↦ ρ ( Λ ) ψ , {\displaystyle \psi \mapsto \rho (\Lambda )\psi ,} if we define the covariant derivative D μ ψ = ∂ μ ψ + 1 2 ( ω ν ρ ) μ σ ν ρ ψ {\displaystyle D_{\mu }\psi =\partial _{\mu }\psi +{\frac {1}{2}}(\omega _{\nu \rho })_{\mu }\sigma ^{\nu \rho }\psi } , then D μ ψ {\displaystyle D_{\mu }\psi } transforms as D μ ψ ↦ ρ ( Λ ) D μ ψ {\displaystyle D_{\mu }\psi \mapsto \rho (\Lambda )D_{\mu }\psi } This generalises to any representation R {\displaystyle R} for the Lorentz group: if v {\displaystyle v} is a vector field for the associated representation, D μ v = ∂ μ v + 1 2 ( ω ν ρ ) μ R ( M ν ρ ) v = ∂ μ v + 1 2 ( ω ν ρ ) μ T ν ρ v . {\displaystyle D_{\mu }v=\partial _{\mu }v+{\frac {1}{2}}(\omega _{\nu \rho })_{\mu }R(M^{\nu \rho })v=\partial _{\mu }v+{\frac {1}{2}}(\omega _{\nu \rho })_{\mu }T^{\nu \rho }v.} When R {\displaystyle R} is the fundamental representation for SO ( 1 , 3 ) {\displaystyle {\text{SO}}(1,3)} , this recovers the familiar covariant derivative for (tangent-)vector fields, of which the Levi-Civita connection is an example. There are some subtleties in what kind of mathematical object the different types of covariant derivative are. The covariant derivative D α ψ {\displaystyle D_{\alpha }\psi } in a coordinate basis is a vector-valued 1-form, which at each point p {\displaystyle p} is an element of E p ⊗ T p ∗ M {\displaystyle E_{p}\otimes T_{p}^{*}M} . The covariant derivative D μ ψ {\displaystyle D_{\mu }\psi } in an orthonormal basis uses the orthonormal frame { e μ } {\displaystyle \{e_{\mu }\}} to identify the vector-valued 1-form with a vector-valued dual vector which at each point p {\displaystyle p} is an element of E p ⊗ R 1 , 3 , {\displaystyle E_{p}\otimes \mathbb {R} ^{1,3},} using that R 1 , 3 ∗ ≅ R 1 , 3 {\displaystyle {\mathbb {R} ^{1,3}}^{*}\cong \mathbb {R} ^{1,3}} canonically. 
We can then contract this with a gamma matrix 4-vector γ μ {\displaystyle \gamma ^{\mu }} which takes values at p {\displaystyle p} in End ( E p ) ⊗ R 1 , 3 {\displaystyle {\text{End}}(E_{p})\otimes \mathbb {R} ^{1,3}} . == Dirac equation on curved spacetime == Recalling the Dirac equation on flat spacetime, ( i γ μ ∂ μ − m ) ψ = 0 , {\displaystyle (i\gamma ^{\mu }\partial _{\mu }-m)\psi =0,} the Dirac equation on curved spacetime can be written down by promoting the partial derivative to a covariant one. In this way, Dirac's equation takes the following form in curved spacetime: ( i γ μ D μ − m ) Ψ = 0 , {\displaystyle (i\gamma ^{\mu }D_{\mu }-m)\Psi =0,} where Ψ {\displaystyle \Psi } is a spinor field on spacetime. Mathematically, this is a section of a vector bundle associated to the spin-frame bundle by the representation ( 1 / 2 , 0 ) ⊕ ( 0 , 1 / 2 ) . {\displaystyle (1/2,0)\oplus (0,1/2).} === Recovering the Klein–Gordon equation from the Dirac equation === The modified Klein–Gordon equation obtained by squaring the operator in the Dirac equation, first found by Erwin Schrödinger as cited by Pollock is given by ( 1 − det g D μ ( − det g g μ ν D ν ) − 1 4 R + i e 2 F μ ν s μ ν − m 2 ) Ψ = 0. {\displaystyle \left({\frac {1}{\sqrt {-\det g}}}\,{\cal {D}}_{\mu }\left({\sqrt {-\det g}}\,g^{\mu \nu }{\cal {D}}_{\nu }\right)-{\frac {1}{4}}R+{\frac {ie}{2}}F_{\mu \nu }s^{\mu \nu }-m^{2}\right)\Psi =0.} where R {\displaystyle R} is the Ricci scalar, and F μ ν {\displaystyle F_{\mu \nu }} is the field strength of A μ {\displaystyle A_{\mu }} . An alternative version of the Dirac equation whose Dirac operator remains the square root of the Laplacian is given by the Dirac–Kähler equation; the price to pay is the loss of Lorentz invariance in curved spacetime. Note that here Latin indices denote the "Lorentzian" vierbein labels while Greek indices denote manifold coordinate indices. == Action formulation == We can formulate this theory in terms of an action. If in addition the spacetime ( M , g ) {\displaystyle (M,\mathbf {g} )} is orientable, there is a preferred orientation known as the volume form ϵ {\displaystyle \epsilon } . One can integrate functions against the volume form: ∫ M ϵ f = ∫ M d 4 x − g f {\displaystyle \int _{M}\epsilon f=\int _{M}d^{4}x{\sqrt {-g}}f} The function Ψ ¯ ( i γ μ D μ − m ) Ψ {\displaystyle {\bar {\Psi }}(i\gamma ^{\mu }D_{\mu }-m)\Psi } is integrated against the volume form to obtain the Dirac action S = ∫ M ϵ Ψ ¯ ( i γ μ D μ − m ) Ψ . {\displaystyle S=\int _{M}\epsilon \,{\bar {\Psi }}(i\gamma ^{\mu }D_{\mu }-m)\Psi \,.} == See also == Dirac equation in the algebra of physical space Dirac spinor Maxwell's equations in curved spacetime Two-body Dirac equations == References == M. Arminjon, F. Reifler (2013). "Equivalent forms of Dirac equations in curved spacetimes and generalized de Broglie relations". Brazilian Journal of Physics. 43 (1–2): 64–77. arXiv:1103.3201. Bibcode:2013BrJPh..43...64A. doi:10.1007/s13538-012-0111-0. S2CID 38235437. M.D. Pollock (2010). "On the Dirac Equation in Curved Space-Time". Acta Physica Polonica B. 41 (8): 1827. J.V. Dongen (2010). Einstein's Unification. Cambridge University Press. p. 117. ISBN 978-0-521-883-467. L. Parker, D. Toms (2009). Quantum Field Theory in Curved Spacetime: Quantized Fields and Gravity. Cambridge University Press. p. 227. ISBN 978-0-521-877-879. S.A. Fulling (1989). Aspects of Quantum Field Theory in Curved Spacetime. Cambridge University Press. ISBN 0-521-377-684.
Wikipedia/Dirac_equation_in_curved_spacetime
Coulomb's inverse-square law, or simply Coulomb's law, is an experimental law of physics that calculates the amount of force between two electrically charged particles at rest. This electric force is conventionally called the electrostatic force or Coulomb force. Although the law was known earlier, it was first published in 1785 by French physicist Charles-Augustin de Coulomb. Coulomb's law was essential to the development of the theory of electromagnetism and maybe even its starting point, as it allowed meaningful discussions of the amount of electric charge in a particle. The law states that the magnitude, or absolute value, of the attractive or repulsive electrostatic force between two point charges is directly proportional to the product of the magnitudes of their charges and inversely proportional to the square of the distance between them. Coulomb discovered that bodies with like electrical charges repel: It follows therefore from these three tests, that the repulsive force that the two balls – [that were] electrified with the same kind of electricity – exert on each other, follows the inverse proportion of the square of the distance. Coulomb also showed that oppositely charged bodies attract according to an inverse-square law: | F | = k e | q 1 | | q 2 | r 2 {\displaystyle |F|=k_{\text{e}}{\frac {|q_{1}||q_{2}|}{r^{2}}}} Here, ke is a constant, q1 and q2 are the quantities of each charge, and the scalar r is the distance between the charges. The force is along the straight line joining the two charges. If the charges have the same sign, the electrostatic force between them makes them repel; if they have different signs, the force between them makes them attract. Being an inverse-square law, the law is similar to Isaac Newton's inverse-square law of universal gravitation, but gravitational forces always make things attract, while electrostatic forces make charges attract or repel. Also, gravitational forces are much weaker than electrostatic forces. Coulomb's law can be used to derive Gauss's law, and vice versa. In the case of a single point charge at rest, the two laws are equivalent, expressing the same physical law in different ways. The law has been tested extensively, and observations have upheld the law on the scale from 10−16 m to 108 m. == History == Ancient cultures around the Mediterranean knew that certain objects, such as rods of amber, could be rubbed with cat's fur to attract light objects like feathers and pieces of paper. Thales of Miletus made the first recorded description of static electricity around 600 BC, when he noticed that friction could make a piece of amber attract small objects. In 1600, English scientist William Gilbert made a careful study of electricity and magnetism, distinguishing the lodestone effect from static electricity produced by rubbing amber. He coined the Neo-Latin word electricus ("of amber" or "like amber", from ἤλεκτρον [elektron], the Greek word for "amber") to refer to the property of attracting small objects after being rubbed. This association gave rise to the English words "electric" and "electricity", which made their first appearance in print in Thomas Browne's Pseudodoxia Epidemica of 1646. 
Early investigators of the 18th century who suspected that the electrical force diminished with distance as the force of gravity did (i.e., as the inverse square of the distance) included Daniel Bernoulli and Alessandro Volta, both of whom measured the force between plates of a capacitor, and Franz Aepinus who supposed the inverse-square law in 1758. Based on experiments with electrically charged spheres, Joseph Priestley of England was among the first to propose that electrical force followed an inverse-square law, similar to Newton's law of universal gravitation. However, he did not generalize or elaborate on this. In 1767, he conjectured that the force between charges varied as the inverse square of the distance. In 1769, Scottish physicist John Robison announced that, according to his measurements, the force of repulsion between two spheres with charges of the same sign varied as x−2.06. In the early 1770s, the dependence of the force between charged bodies upon both distance and charge had already been discovered, but not published, by Henry Cavendish of England. In his notes, Cavendish wrote, "We may therefore conclude that the electric attraction and repulsion must be inversely as some power of the distance between that of the 2 + ⁠1/50⁠th and that of the 2 − ⁠1/50⁠th, and there is no reason to think that it differs at all from the inverse duplicate ratio". Finally, in 1785, the French physicist Charles-Augustin de Coulomb published his first three reports of electricity and magnetism where he stated his law. This publication was essential to the development of the theory of electromagnetism. He used a torsion balance to study the repulsion and attraction forces of charged particles, and determined that the magnitude of the electric force between two point charges is directly proportional to the product of the charges and inversely proportional to the square of the distance between them. The torsion balance consists of a bar suspended from its middle by a thin fiber. The fiber acts as a very weak torsion spring. In Coulomb's experiment, the torsion balance was an insulating rod with a metal-coated ball attached to one end, suspended by a silk thread. The ball was charged with a known charge of static electricity, and a second charged ball of the same polarity was brought near it. The two charged balls repelled one another, twisting the fiber through a certain angle, which could be read from a scale on the instrument. By knowing how much force it took to twist the fiber through a given angle, Coulomb was able to calculate the force between the balls and derive his inverse-square proportionality law. == Mathematical form == Coulomb's law states that the electrostatic force F 1 {\textstyle \mathbf {F} _{1}} experienced by a charge, q 1 {\displaystyle q_{1}} at position r 1 {\displaystyle \mathbf {r} _{1}} , in the vicinity of another charge, q 2 {\displaystyle q_{2}} at position r 2 {\displaystyle \mathbf {r} _{2}} , in a vacuum is equal to F 1 = q 1 q 2 4 π ε 0 r ^ 12 | r 12 | 2 {\displaystyle \mathbf {F} _{1}={\frac {q_{1}q_{2}}{4\pi \varepsilon _{0}}}{{\hat {\mathbf {r} }}_{12} \over {|\mathbf {r} _{12}|}^{2}}} where r 12 = r 1 − r 2 {\textstyle \mathbf {r_{12}=r_{1}-r_{2}} } is the displacement vector between the charges, r ^ 12 {\textstyle {\hat {\mathbf {r} }}_{12}} a unit vector pointing from q 2 {\textstyle q_{2}} to q 1 {\textstyle q_{1}} , and ε 0 {\displaystyle \varepsilon _{0}} the electric constant. 
Here, r ^ 12 {\textstyle \mathbf {\hat {r}} _{12}} is used for the vector notation. The electrostatic force F 2 {\textstyle \mathbf {F} _{2}} experienced by q 2 {\displaystyle q_{2}} , according to Newton's third law, is F 2 = − F 1 {\textstyle \mathbf {F} _{2}=-\mathbf {F} _{1}} . If both charges have the same sign (like charges) then the product q 1 q 2 {\displaystyle q_{1}q_{2}} is positive and the direction of the force on q 1 {\displaystyle q_{1}} is given by r ^ 12 {\textstyle {\widehat {\mathbf {r} }}_{12}} ; the charges repel each other. If the charges have opposite signs then the product q 1 q 2 {\displaystyle q_{1}q_{2}} is negative and the direction of the force on q 1 {\displaystyle q_{1}} is − r ^ 12 {\textstyle -{\hat {\mathbf {r} }}_{12}} ; the charges attract each other. === System of discrete charges === The law of superposition allows Coulomb's law to be extended to include any number of point charges. The force acting on a point charge due to a system of point charges is simply the vector addition of the individual forces acting alone on that point charge due to each one of the charges. The resulting force vector is parallel to the electric field vector at that point, with that point charge removed. Force F {\textstyle \mathbf {F} } on a small charge q {\displaystyle q} at position r {\displaystyle \mathbf {r} } , due to a system of n {\textstyle n} discrete charges in vacuum is F ( r ) = q 4 π ε 0 ∑ i = 1 n q i r ^ i | r i | 2 , {\displaystyle \mathbf {F} (\mathbf {r} )={q \over 4\pi \varepsilon _{0}}\sum _{i=1}^{n}q_{i}{{\hat {\mathbf {r} }}_{i} \over {|\mathbf {r} _{i}|}^{2}},} where q i {\displaystyle q_{i}} is the magnitude of the ith charge, r i {\textstyle \mathbf {r} _{i}} is the vector from its position to r {\displaystyle \mathbf {r} } and r ^ i {\textstyle {\hat {\mathbf {r} }}_{i}} is the unit vector in the direction of r i {\displaystyle \mathbf {r} _{i}} . === Continuous charge distribution === In this case, the principle of linear superposition is also used. For a continuous charge distribution, an integral over the region containing the charge is equivalent to an infinite summation, treating each infinitesimal element of space as a point charge d q {\displaystyle dq} . The distribution of charge is usually linear, surface or volumetric. For a linear charge distribution (a good approximation for charge in a wire) where λ ( r ′ ) {\displaystyle \lambda (\mathbf {r} ')} gives the charge per unit length at position r ′ {\displaystyle \mathbf {r} '} , and d ℓ ′ {\displaystyle d\ell '} is an infinitesimal element of length, d q ′ = λ ( r ′ ) d ℓ ′ . {\displaystyle dq'=\lambda (\mathbf {r'} )\,d\ell '.} For a surface charge distribution (a good approximation for charge on a plate in a parallel plate capacitor) where σ ( r ′ ) {\displaystyle \sigma (\mathbf {r} ')} gives the charge per unit area at position r ′ {\displaystyle \mathbf {r} '} , and d A ′ {\displaystyle dA'} is an infinitesimal element of area, d q ′ = σ ( r ′ ) d A ′ . {\displaystyle dq'=\sigma (\mathbf {r'} )\,dA'.} For a volume charge distribution (such as charge within a bulk metal) where ρ ( r ′ ) {\displaystyle \rho (\mathbf {r} ')} gives the charge per unit volume at position r ′ {\displaystyle \mathbf {r} '} , and d V ′ {\displaystyle dV'} is an infinitesimal element of volume, d q ′ = ρ ( r ′ ) d V ′ . 
{\displaystyle dq'=\rho ({\boldsymbol {r'}})\,dV'.} The force on a small test charge q {\displaystyle q} at position r {\displaystyle {\boldsymbol {r}}} in vacuum is given by the integral over the distribution of charge F ( r ) = q 4 π ε 0 ∫ d q ′ r − r ′ | r − r ′ | 3 . {\displaystyle \mathbf {F} (\mathbf {r} )={\frac {q}{4\pi \varepsilon _{0}}}\int dq'{\frac {\mathbf {r} -\mathbf {r'} }{|\mathbf {r} -\mathbf {r'} |^{3}}}.} The "continuous charge" version of Coulomb's law is never supposed to be applied to locations for which | r − r ′ | = 0 {\displaystyle |\mathbf {r} -\mathbf {r'} |=0} because that location would directly overlap with the location of a charged particle (e.g. electron or proton) which is not a valid location to analyze the electric field or potential classically. Charge is always discrete in reality, and the "continuous charge" assumption is just an approximation that is not supposed to allow | r − r ′ | = 0 {\displaystyle |\mathbf {r} -\mathbf {r'} |=0} to be analyzed. == Coulomb constant == The constant of proportionality, 1 4 π ε 0 {\displaystyle {\frac {1}{4\pi \varepsilon _{0}}}} , in Coulomb's law: F 1 = q 1 q 2 4 π ε 0 r ^ 12 | r 12 | 2 {\displaystyle \mathbf {F} _{1}={\frac {q_{1}q_{2}}{4\pi \varepsilon _{0}}}{{\hat {\mathbf {r} }}_{12} \over {|\mathbf {r} _{12}|}^{2}}} is a consequence of historical choices for units.: 4–2  The constant ε 0 {\displaystyle \varepsilon _{0}} is the vacuum electric permittivity. Using the CODATA 2022 recommended value for ε 0 {\displaystyle \varepsilon _{0}} , the Coulomb constant is k e = 1 4 π ε 0 = 8.987 551 7862 ( 14 ) × 10 9 N ⋅ m 2 ⋅ C − 2 {\displaystyle k_{\text{e}}={\frac {1}{4\pi \varepsilon _{0}}}=8.987\ 551\ 7862(14)\times 10^{9}\ \mathrm {N{\cdot }m^{2}{\cdot }C^{-2}} } == Limitations == There are three conditions to be fulfilled for the validity of Coulomb's inverse square law: The charges must have a spherically symmetric distribution (e.g. be point charges, or a charged metal sphere). The charges must not overlap (e.g. they must be distinct point charges). The charges must be stationary with respect to a nonaccelerating frame of reference. The last of these is known as the electrostatic approximation. When movement takes place, an extra factor is introduced, which alters the force produced on the two objects. This extra part of the force is called the magnetic force. For slow movement, the magnetic force is minimal and Coulomb's law can still be considered approximately correct. A more accurate approximation in this case is, however, the Weber force. When the charges are moving more quickly in relation to each other or accelerations occur, Maxwell's equations and Einstein's theory of relativity must be taken into consideration. == Electric field == An electric field is a vector field that associates to each point in space the Coulomb force experienced by a unit test charge. The strength and direction of the Coulomb force F {\textstyle \mathbf {F} } on a charge q t {\textstyle q_{t}} depends on the electric field E {\textstyle \mathbf {E} } established by other charges that it finds itself in, such that F = q t E {\textstyle \mathbf {F} =q_{t}\mathbf {E} } . In the simplest case, the field is considered to be generated solely by a single source point charge. More generally, the field can be generated by a distribution of charges who contribute to the overall by the principle of superposition. 
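The superposition rule for discrete charges translates directly into a short numerical routine. The sketch below computes the force on a test charge q at position r due to a set of point charges q_i, F = (q/4πε0) Σ_i q_i r̂_i/|r_i|², with r_i the vector from the i-th source charge to the field point; the charge values and positions are arbitrary examples, not data from the article.

```python
import numpy as np

EPS0 = 8.8541878128e-12                     # vacuum permittivity (F/m)
K_E = 1.0 / (4.0 * np.pi * EPS0)            # Coulomb constant (N m^2 C^-2)

def coulomb_force(q, r, charges, positions):
    """Force (N) on charge q (C) at position r (m) from point charges, by superposition."""
    r = np.asarray(r, float)
    F = np.zeros(3)
    for qi, ri in zip(charges, np.asarray(positions, float)):
        d = r - ri                          # vector from source charge i to the field point
        F += K_E * q * qi * d / np.linalg.norm(d) ** 3
    return F

# Two equal positive charges straddling the origin: the force on a test charge on the
# perpendicular bisector has no component along the line joining them.
F = coulomb_force(1e-9, [0.0, 0.1, 0.0],
                  charges=[1e-6, 1e-6],
                  positions=[[-0.05, 0.0, 0.0], [0.05, 0.0, 0.0]])
print(F)                                    # roughly [0, 1.29e-3, 0] N
```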
If the field is generated by a positive source point charge q {\textstyle q} , the direction of the electric field points along lines directed radially outwards from it, i.e. in the direction that a positive point test charge q t {\textstyle q_{t}} would move if placed in the field. For a negative point source charge, the direction is radially inwards. The magnitude of the electric field E can be derived from Coulomb's law. By choosing one of the point charges to be the source and the other to be the test charge, it follows from Coulomb's law that the magnitude of the electric field E created by a single source point charge q at a distance r from it in vacuum is given by | E | = k e | q | r 2 {\displaystyle |\mathbf {E} |=k_{\text{e}}{\frac {|q|}{r^{2}}}} A system of n {\textstyle n} discrete charges q i {\displaystyle q_{i}} produces an electric field given, by superposition, by E ( r ) = 1 4 π ε 0 ∑ i = 1 n q i r ^ i | r i | 2 {\displaystyle \mathbf {E} (\mathbf {r} )={1 \over 4\pi \varepsilon _{0}}\sum _{i=1}^{n}q_{i}{{\hat {\mathbf {r} }}_{i} \over {|\mathbf {r} _{i}|}^{2}}} where r i {\textstyle \mathbf {r} _{i}} is again the vector from the position of the ith charge to the field point r {\displaystyle \mathbf {r} } and r ^ i {\textstyle {\hat {\mathbf {r} }}_{i}} is the corresponding unit vector. == Atomic forces == Coulomb's law holds even within atoms, correctly describing the force between the positively charged atomic nucleus and each of the negatively charged electrons. This simple law also correctly accounts for the forces that bind atoms together to form molecules and for the forces that bind atoms and molecules together to form solids and liquids. Generally, as the distance between ions increases, the force of attraction and the binding energy approach zero and ionic bonding is less favorable. As the magnitude of the opposing charges increases, the energy increases and ionic bonding is more favorable. == Relation to Gauss's law == === Deriving Gauss's law from Coulomb's law === === Deriving Coulomb's law from Gauss's law === Strictly speaking, Coulomb's law cannot be derived from Gauss's law alone, since Gauss's law does not give any information regarding the curl of E (see Helmholtz decomposition and Faraday's law). However, Coulomb's law can be proven from Gauss's law if it is assumed, in addition, that the electric field from a point charge is spherically symmetric (this assumption, like Coulomb's law itself, is exactly true if the charge is stationary, and approximately true if the charge is in motion). == In relativity == Coulomb's law can be used to gain insight into the form of the magnetic field generated by moving charges, since by special relativity, in certain cases the magnetic field can be shown to be a transformation of forces caused by the electric field. When no acceleration is involved in a particle's history, Coulomb's law can be assumed for any test particle in its own inertial frame, supported by symmetry arguments in solving Maxwell's equations, shown above. Coulomb's law can then be extended to moving test charges in the same form. This assumption is supported by the Lorentz force law, which, unlike Coulomb's law, is not limited to stationary test charges. Taking the charge to be invariant between observers, the electric and magnetic fields of a uniformly moving point charge can hence be derived by the Lorentz transformation of the four-force on the test charge in the charge's frame of reference, given by Coulomb's law, and by attributing the magnetic and electric fields through their definitions in the Lorentz force law.
The fields hence found for uniformly moving point charges are given by: E = q 4 π ϵ 0 r 3 1 − β 2 ( 1 − β 2 sin 2 ⁡ θ ) 3 / 2 r {\displaystyle \mathbf {E} ={\frac {q}{4\pi \epsilon _{0}r^{3}}}{\frac {1-\beta ^{2}}{(1-\beta ^{2}\sin ^{2}\theta )^{3/2}}}\mathbf {r} } B = q 4 π ϵ 0 r 3 1 − β 2 ( 1 − β 2 sin 2 ⁡ θ ) 3 / 2 v × r c 2 = v × E c 2 {\displaystyle \mathbf {B} ={\frac {q}{4\pi \epsilon _{0}r^{3}}}{\frac {1-\beta ^{2}}{(1-\beta ^{2}\sin ^{2}\theta )^{3/2}}}{\frac {\mathbf {v} \times \mathbf {r} }{c^{2}}}={\frac {\mathbf {v} \times \mathbf {E} }{c^{2}}}} where q {\displaystyle q} is the charge of the point source, r {\displaystyle \mathbf {r} } is the position vector from the point source to the point in space, v {\displaystyle \mathbf {v} } is the velocity vector of the charged particle, β {\displaystyle \beta } is the ratio of the speed of the charged particle to the speed of light and θ {\displaystyle \theta } is the angle between r {\displaystyle \mathbf {r} } and v {\displaystyle \mathbf {v} } . These solutions need not obey Newton's third law, as is generally the case in the framework of special relativity (though without violating relativistic energy–momentum conservation). Note that the expression for the electric field reduces to Coulomb's law for non-relativistic speeds of the point charge, and that the magnetic field in the non-relativistic limit (approximating β ≪ 1 {\displaystyle \beta \ll 1} ) can be applied to electric currents to obtain the Biot–Savart law. These solutions, when expressed in retarded time, also correspond to the general solution of Maxwell's equations given by the Liénard–Wiechert potentials, owing to the validity of Coulomb's law within its specific range of application. Also note that the spherical symmetry assumed in Gauss's law for stationary charges is not valid for moving charges, owing to the breaking of symmetry by the specification of a direction of velocity in the problem. Agreement with Maxwell's equations can also be verified manually for the above two equations. == Coulomb potential == === Quantum field theory === The Coulomb potential admits continuum states (with E > 0), describing electron–proton scattering, as well as discrete bound states, representing the hydrogen atom. It can also be derived in the non-relativistic limit of the interaction between two charged particles, as follows: under the Born approximation, in non-relativistic quantum mechanics, the scattering amplitude A ( | p ⟩ → | p ′ ⟩ ) {\textstyle {\mathcal {A}}(|\mathbf {p} \rangle \to |\mathbf {p} '\rangle )} is: A ( | p ⟩ → | p ′ ⟩ ) − 1 = 2 π δ ( E p − E p ′ ) ( − i ) ∫ d 3 r V ( r ) e − i ( p − p ′ ) r {\displaystyle {\mathcal {A}}(|\mathbf {p} \rangle \to |\mathbf {p} '\rangle )-1=2\pi \delta (E_{p}-E_{p'})(-i)\int d^{3}\mathbf {r} \,V(\mathbf {r} )e^{-i(\mathbf {p} -\mathbf {p} ')\mathbf {r} }} This is to be compared with: ∫ d 3 k ( 2 π ) 3 e i k r 0 ⟨ p ′ , k | S | p , k ⟩ {\displaystyle \int {\frac {d^{3}k}{(2\pi )^{3}}}e^{ikr_{0}}\langle p',k|S|p,k\rangle } where we look at the (connected) S-matrix entry for two electrons scattering off each other, treating one with "fixed" momentum as the source of the potential, and the other scattering off that potential. Using the Feynman rules to compute the S-matrix element, we obtain in the non-relativistic limit with m 0 ≫ | p | {\displaystyle m_{0}\gg |\mathbf {p} |} ⟨ p ′ , k | S | p , k ⟩ | c o n n = − i e 2 | p − p ′ | 2 − i ε ( 2 m ) 2 δ ( E p , k − E p ′ , k ) ( 2 π ) 4 δ ( p − p ′ ) {\displaystyle \langle p',k|S|p,k\rangle |_{conn}=-i{\frac {e^{2}}{|\mathbf {p} -\mathbf {p} '|^{2}-i\varepsilon }}(2m)^{2}\delta (E_{p,k}-E_{p',k})(2\pi )^{4}\delta (\mathbf {p} -\mathbf {p} ')} Comparing with the QM scattering amplitude, we have to discard the ( 2 m ) 2 {\displaystyle (2m)^{2}} factor, as it arises from the differing normalizations of momentum eigenstates in QFT compared to QM, and obtain: ∫ V ( r ) e − i ( p − p ′ ) r d 3 r = e 2 | p − p ′ | 2 − i ε {\displaystyle \int V(\mathbf {r} )e^{-i(\mathbf {p} -\mathbf {p} ')\mathbf {r} }d^{3}\mathbf {r} ={\frac {e^{2}}{|\mathbf {p} -\mathbf {p} '|^{2}-i\varepsilon }}} Fourier transforming both sides, solving the integral and taking ε → 0 {\displaystyle \varepsilon \to 0} at the end yields V ( r ) = e 2 4 π r {\displaystyle V(r)={\frac {e^{2}}{4\pi r}}} as the Coulomb potential. However, the equivalent results of the classical Born derivations for the Coulomb problem are thought to be strictly accidental. The Coulomb potential, and its derivation, can be seen as a special case of the Yukawa potential, which is the case where the exchanged boson – the photon – has no rest mass.
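The momentum-space/position-space pair above is easy to probe numerically. The sketch below is a rough check under assumed, illustrative conventions (natural units with e = 1, and a Yukawa screening mass mu standing in for the iε regulator; none of this is prescribed by the article): it evaluates the radial Fourier transform of the screened potential e² e^(−μr)/(4πr) and compares it with e²/(q² + μ²), which tends to the e²/|p − p′|² form quoted above as μ → 0.

```python
import numpy as np

def radial_fourier(q, mu, r_max=400.0, n=400_000):
    """3D Fourier transform of a spherically symmetric function, computed from
    the radial formula  V~(q) = (4*pi/q) * integral of r*V(r)*sin(q*r) dr,
    applied to the Yukawa-screened Coulomb potential V(r) = exp(-mu*r)/(4*pi*r)."""
    r = np.linspace(1e-6, r_max, n)            # radial grid; the integrand is finite at small r
    V = np.exp(-mu * r) / (4 * np.pi * r)
    integrand = r * V * np.sin(q * r)
    return 4 * np.pi / q * np.trapz(integrand, r)

q = 1.0
for mu in (0.5, 0.1, 0.02):
    numeric = radial_fourier(q, mu)
    analytic = 1.0 / (q**2 + mu**2)            # expected transform; -> 1/q^2 as mu -> 0
    print(mu, numeric, analytic)
```

As the screening mass is reduced the transform approaches 1/q², which is the numerical counterpart of taking ε → 0 in the derivation above.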
== Verification == It is possible to verify Coulomb's law with a simple experiment. Consider two small spheres of mass m {\displaystyle m} and same-sign charge q {\displaystyle q} , hanging from two ropes of negligible mass of length l {\displaystyle l} . The forces acting on each sphere are three: the weight m g {\displaystyle mg} , the rope tension T {\displaystyle \mathbf {T} } and the electric force F {\displaystyle \mathbf {F} } . In the equilibrium state: T sin ⁡ θ 1 = F 1 {\displaystyle T\sin \theta _{1}=F_{1}} (1) and T cos ⁡ θ 1 = m g {\displaystyle T\cos \theta _{1}=mg} (2). Dividing (1) by (2): tan ⁡ θ 1 = F 1 m g {\displaystyle \tan \theta _{1}={\frac {F_{1}}{mg}}} (3). Let L 1 {\displaystyle \mathbf {L} _{1}} be the distance between the charged spheres; the repulsion force between them F 1 {\displaystyle \mathbf {F} _{1}} , assuming Coulomb's law is correct, is equal to F 1 = q 2 4 π ε 0 L 1 2 {\displaystyle F_{1}={\frac {q^{2}}{4\pi \varepsilon _{0}L_{1}^{2}}}} , so: q 2 4 π ε 0 L 1 2 = m g tan ⁡ θ 1 {\displaystyle {\frac {q^{2}}{4\pi \varepsilon _{0}L_{1}^{2}}}=mg\tan \theta _{1}} (4). If we now discharge one of the spheres, and we put it in contact with the charged sphere, each one of them acquires a charge q 2 {\textstyle {\frac {q}{2}}} . In the equilibrium state, the distance between the charges will be L 2 < L 1 {\textstyle \mathbf {L} _{2}<\mathbf {L} _{1}} and the repulsion force between them will be: F 2 = ( q / 2 ) 2 4 π ε 0 L 2 2 = q 2 4 4 π ε 0 L 2 2 {\displaystyle F_{2}={\frac {(q/2)^{2}}{4\pi \varepsilon _{0}L_{2}^{2}}}={\frac {\frac {q^{2}}{4}}{4\pi \varepsilon _{0}L_{2}^{2}}}} . We know that F 2 = m g tan ⁡ θ 2 {\displaystyle \mathbf {F} _{2}=mg\tan \theta _{2}} and: q 2 4 4 π ε 0 L 2 2 = m g tan ⁡ θ 2 {\displaystyle {\frac {\frac {q^{2}}{4}}{4\pi \varepsilon _{0}L_{2}^{2}}}=mg\tan \theta _{2}} (5). Dividing (4) by (5), we get: 4 L 2 2 L 1 2 = tan ⁡ θ 1 tan ⁡ θ 2 {\displaystyle {\frac {4L_{2}^{2}}{L_{1}^{2}}}={\frac {\tan \theta _{1}}{\tan \theta _{2}}}} (6). Measuring the angles θ 1 {\displaystyle \theta _{1}} and θ 2 {\displaystyle \theta _{2}} and the distance between the charges L 1 {\displaystyle \mathbf {L} _{1}} and L 2 {\displaystyle \mathbf {L} _{2}} is sufficient to verify that the equality is true taking into account the experimental error. In practice, angles can be difficult to measure, so if the length of the ropes is sufficiently great, the angles will be small enough to make the following approximation: tan ⁡ θ ≈ sin ⁡ θ = L / 2 l {\displaystyle \tan \theta \approx \sin \theta ={\frac {L/2}{l}}} . Using this approximation, the relationship (6) becomes the much simpler expression: L 1 3 ≈ 4 L 2 3 {\displaystyle L_{1}^{3}\approx 4L_{2}^{3}} . In this way, the verification is limited to measuring the distance between the charges and checking that the ratio L 1 / L 2 {\displaystyle L_{1}/L_{2}} approximates the theoretical value 4 3 ≈ 1.587 {\displaystyle {\sqrt[{3}]{4}}\approx 1.587} .
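The small-angle version of the check is easy to carry out numerically. The sketch below uses made-up measurements (the rope length and the two separations are invented for illustration): it compares both sides of relation (6) under the approximation tan θ ≈ (L/2)/l and reports the ratio L₁/L₂, which should be close to the theoretical value ∛4 ≈ 1.587 if the inverse-square law holds.

```python
def verify_coulomb(L1, L2, ell):
    """Check relation (6), 4*(L2/L1)^2 = tan(theta1)/tan(theta2), using the
    small-angle estimate tan(theta) ~= (L/2)/ell for hypothetical measured
    separations L1 > L2 (before/after halving the charge) and rope length ell."""
    tan1 = (L1 / 2) / ell
    tan2 = (L2 / 2) / ell
    lhs = 4 * (L2 / L1) ** 2          # left-hand side of (6)
    rhs = tan1 / tan2                 # right-hand side of (6), via the small-angle approximation
    return lhs, rhs, L1 / L2

# Invented measurements in metres: 1 m ropes, separations of 10.0 cm and 6.3 cm
lhs, rhs, ratio = verify_coulomb(L1=0.100, L2=0.063, ell=1.0)
print(lhs, rhs)                       # the two sides should agree within experimental error
print(ratio, 4 ** (1 / 3))            # L1/L2 should approximate the cube root of 4, about 1.587
```

Any systematic departure of the measured ratio from ∛4 would indicate a deviation from the inverse-square dependence.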
== See also == == References == Spavieri, G.; Gillies, G. T.; Rodriguez, M. (2004). "Physical implications of Coulomb's Law". Metrologia. 41 (5): S159–S170. doi:10.1088/0026-1394/41/5/s06. == Related reading == Coulomb, Charles Augustin (1788) [1785]. "Premier mémoire sur l'électricité et le magnétisme". Histoire de l'Académie Royale des Sciences. Imprimerie Royale. pp. 569–577. Coulomb, Charles Augustin (1788) [1785]. "Second mémoire sur l'électricité et le magnétisme". Histoire de l'Académie Royale des Sciences. Imprimerie Royale. pp. 578–611. Coulomb, Charles Augustin (1788) [1785]. "Troisième mémoire sur l'électricité et le magnétisme". Histoire de l'Académie Royale des Sciences. Imprimerie Royale. pp. 612–638. Griffiths, David J. (1999). Introduction to Electrodynamics (3rd ed.). Prentice Hall. ISBN 978-0-13-805326-0. Tamm, Igor E. (1979) [1976]. Fundamentals of the Theory of Electricity (9th ed.). Moscow: Mir. pp. 23–27. Tipler, Paul A.; Mosca, Gene (2008). Physics for Scientists and Engineers (6th ed.). New York: W. H. Freeman and Company. ISBN 978-0-7167-8964-2. LCCN 2007010418. Young, Hugh D.; Freedman, Roger A. (2010). Sears and Zemansky's University Physics: With Modern Physics (13th ed.). Addison-Wesley (Pearson). ISBN 978-0-321-69686-1. == External links == Coulomb's Law on Project PHYSNET Electricity and the Atom Archived 2009-02-21 at the Wayback Machine—a chapter from an online textbook A maze game for teaching Coulomb's law—a game created by the Molecular Workbench software Electric Charges, Polarization, Electric Force, Coulomb's Law Walter Lewin, 8.02 Electricity and Magnetism, Spring 2002: Lecture 1 (video). MIT OpenCourseWare. License: Creative Commons Attribution-Noncommercial-Share Alike.
Wikipedia/Coulomb_force_constant
Non-relativistic quantum electrodynamics (NRQED) is a low-energy approximation of quantum electrodynamics that describes the interaction of non-relativistic (i.e. moving at speeds much smaller than the speed of light) spin one-half particles (e.g., electrons) with the quantized electromagnetic field. NRQED is an effective field theory suitable for calculations in atomic and molecular physics, for example for computing QED corrections to the bound energy levels of atoms and molecules. == References == Caswell, W.E.; Lepage, G.P. (1986). "Effective lagrangians for bound state problems in QED, QCD, and other field theories". Physics Letters B. 167 (4). Elsevier BV: 437–442. Bibcode:1986PhLB..167..437C. doi:10.1016/0370-2693(86)91297-9. ISSN 0370-2693. Labelle, Patrick (1992). "NRQED in Bound States: Applying Renormalization to an Effective Field Theory". arXiv:hep-ph/9209266. Bibcode:1992hep.ph....9266L.
Wikipedia/Non-relativistic_quantum_electrodynamics