Dataset columns: id (int64, 39 to 79M), url (string, 32–168 chars), text (string, 7–145k chars), source (string, 2–105 chars), categories (list, 1–6 items), token_count (int64, 3–32.2k), subcategories (list, 0–27 items). Each record below lists these fields in that order.
3,226,702
https://en.wikipedia.org/wiki/Muon%20capture
Muon capture is the capture of a negative muon by a proton, usually resulting in production of a neutron and a neutrino, and sometimes a gamma photon. Muon capture by heavy nuclei often leads to emission of particles; most often neutrons, but charged particles can be emitted as well. Ordinary muon capture (OMC) involves capture of a negative muon from the atomic orbital without emission of a gamma photon: μ− + p → n + ν_μ. Radiative muon capture (RMC) is a radiative version of OMC, where a gamma photon is emitted: μ− + p → n + ν_μ + γ. Theoretical motivation for the study of muon capture on the proton is its connection to the proton's induced pseudoscalar form factor gp. Practical application - Nuclear waste disposal Muon capture is being investigated for practical application in radioactive waste disposal, for example in the artificial transmutation of large quantities of long-lived radioactive waste that have been produced globally by fission reactors. Radioactive waste can be transmuted to stable isotopes following irradiation by an incident muon (μ−) beam from a compact proton accelerator source. References Nagamine, Kanetada (2016) "Nuclear Waste Disposal Method and its apparatus using muon-nuclear-absorption". (WO2016143144A1) Espacenet (Patent database). Nuclear physics
Muon capture
[ "Physics" ]
275
[ "Particle physics stubs", "Particle physics", "Nuclear physics" ]
1,676,246
https://en.wikipedia.org/wiki/Wollaston%20prism
A Wollaston prism is an optical device, invented by William Hyde Wollaston, that manipulates polarized light. It separates light into two linearly polarized outgoing beams with orthogonal polarizations. The two beams are polarized according to the optical axes of the two right-angle prisms. The Wollaston prism consists of two orthogonal prisms of birefringent material—typically a uniaxial material such as calcite. These prisms are cemented together on their base (traditionally with Canada balsam) to form two right triangle prisms with perpendicular optic axes. Outgoing light beams diverge from the prism as ordinary and extraordinary rays due to the differences in the indices of refraction, with the angle of divergence determined by the prisms' wedge angle and the wavelength of the light. Commercial prisms are available with divergence angles from less than 1° to about 45°. See also Other types of polarizing prisms References Polarization (waves) Prisms (optics)
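The divergence quoted above can be estimated from the prism geometry. The sketch below uses the common small-angle estimate for a Wollaston prism, θ ≈ 2·|n_e − n_o|·tan(α); the formula, the calcite index values and the wedge angles are illustrative assumptions, not values stated in the text.

```python
import math

def wollaston_divergence_deg(n_o, n_e, wedge_angle_deg):
    """Approximate full divergence angle (degrees) between the two output beams.

    Uses the common small-angle estimate theta ~ 2*|n_e - n_o|*tan(wedge angle),
    which assumes near-normal incidence and a thin wedge.
    """
    alpha = math.radians(wedge_angle_deg)
    theta = 2.0 * abs(n_e - n_o) * math.tan(alpha)
    return math.degrees(theta)

# Calcite near 589 nm (approximate values): n_o ~ 1.658, n_e ~ 1.486.
for wedge in (10.0, 20.0, 30.0):
    split = wollaston_divergence_deg(1.658, 1.486, wedge)
    print(f"wedge {wedge:4.1f} deg -> divergence ~ {split:.2f} deg")
```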
Wollaston prism
[ "Physics" ]
209
[ "Polarization (waves)", "Astrophysics" ]
1,676,668
https://en.wikipedia.org/wiki/Bow%20wave
A bow wave is the wave that forms at the bow of a ship when it moves through the water. As the bow wave spreads out, it defines the outer limits of a ship's wake. A large bow wave slows the ship down, is a risk to smaller boats, and in a harbor can damage shore facilities and moored ships. Therefore, ship hulls are generally designed to produce as small a bow wave as possible. Description The size of the bow wave is a function of the speed of the ship, its draft, surface waves, water depth, and the shape of the bow. A ship with a large draft and a blunt bow will produce a large wave, and ships that plane over the water surface will create smaller bow waves. Bow wave patterns are studied in the field of computational fluid dynamics. The bow wave carries energy away from the ship at the expense of its kinetic energy—it slows the ship. A major goal of naval architecture is therefore to reduce the size of the bow wave and improve the ship's fuel economy. Modern ships are commonly fitted with a bulbous bow to achieve this. A bow wave forms at the head of a swimmer moving through water. The trough of this wave is near the mouth of the swimmer and helps the swimmer to inhale air to breathe just by turning their head. A similar thing occurs when an airplane travels at the speed of sound. The overlapping wave crests disrupt the flow of air over and under the wings. Just as a boat can easily travel faster than the wave it produces, an airplane with sufficient power can travel faster than the speed of sound (supersonic). See also Kelvin wake pattern Shock wave Bow shock (aerodynamics) Bulbous bow References External links Bow waves - Feynman Lectures on Physics Naval architecture Fluid dynamics Water waves
Bow wave
[ "Physics", "Chemistry", "Engineering" ]
366
[ "Naval architecture", "Physical phenomena", "Water waves", "Chemical engineering", "Waves", "Marine engineering", "Piping", "Fluid dynamics" ]
1,677,957
https://en.wikipedia.org/wiki/Flexural%20rigidity
Flexural rigidity is defined as the force couple required to bend a fixed non-rigid structure by one unit of curvature, or as the resistance offered by a structure while undergoing bending. Flexural rigidity of a beam Although the moment and displacement generally result from external loads and may vary along the length of the beam or rod, the flexural rigidity (defined as EI) is a property of the beam itself and is generally constant for prismatic members. However, in cases of non-prismatic members, such as the case of the tapered beams or columns or notched stair stringers, the flexural rigidity will vary along the length of the beam as well. The flexural rigidity, moment, and transverse displacement are related by the following equation along the length of the rod, x: EI d²w(x)/dx² = M(x), where E is the flexural modulus (in Pa), I is the second moment of area (in m⁴), w(x) is the transverse displacement of the beam at x, and M(x) is the bending moment at x. The flexural rigidity (stiffness) of the beam is therefore related to both E, a material property, and I, the physical geometry of the beam. If the material exhibits isotropic behavior then the flexural modulus is equal to the modulus of elasticity (Young's modulus). Flexural rigidity has SI units of Pa·m⁴ (which also equals N·m²). Flexural rigidity of a plate (e.g. the lithosphere) In the study of geology, lithospheric flexure affects the thin lithospheric plates covering the surface of the Earth when a load or force is applied to them. On a geological timescale, the lithosphere behaves elastically (in first approach) and can therefore bend under loading by mountain chains, volcanoes and other heavy objects. Isostatic depression caused by the weight of ice sheets during the last glacial period is an example of the effects of such loading. The flexure of the plate depends on: The plate elastic thickness (usually referred to as effective elastic thickness of the lithosphere). The elastic properties of the plate The applied load or force As flexural rigidity of the plate is determined by the Young's modulus, Poisson's ratio and cube of the plate's elastic thickness, it is a governing factor in both (1) and (2). The flexural rigidity of the plate is D = E·h_e³ / [12(1 − ν²)], where D is the flexural rigidity, E is Young's modulus, h_e is the elastic thickness of the plate (~5–100 km), and ν is Poisson's ratio. Flexural rigidity of a plate has units of Pa·m³, i.e. one dimension of length less than the same property for the rod, as it refers to the moment per unit length per unit of curvature, and not the total moment. I is termed the moment of inertia (second moment of area); J denotes the polar moment of inertia. See also Bending stiffness Lithospheric flexure References Solid mechanics
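To make the two formulas above concrete, here is a minimal sketch that evaluates the beam rigidity EI for a rectangular cross-section and the plate rigidity D = E·h³/(12(1 − ν²)) for lithosphere-like values; the numerical inputs are illustrative assumptions, not values taken from the text.

```python
def beam_flexural_rigidity(E, b, h):
    """EI for a rectangular beam cross-section: I = b*h**3/12 (units: Pa*m^4 = N*m^2)."""
    I = b * h**3 / 12.0
    return E * I

def plate_flexural_rigidity(E, h_e, nu):
    """D = E*h_e**3 / (12*(1 - nu**2)) for a thin elastic plate (units: Pa*m^3 = N*m)."""
    return E * h_e**3 / (12.0 * (1.0 - nu**2))

# Illustrative (assumed) numbers: a 0.1 m x 0.2 m steel beam ...
print(beam_flexural_rigidity(E=200e9, b=0.1, h=0.2))       # ~1.3e7 N*m^2
# ... and a 30 km thick elastic lithosphere.
print(plate_flexural_rigidity(E=70e9, h_e=30e3, nu=0.25))  # ~1.7e23 N*m
```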
Flexural rigidity
[ "Physics" ]
620
[ "Solid mechanics", "Mechanics" ]
1,678,541
https://en.wikipedia.org/wiki/Electron%20backscatter%20diffraction
Electron backscatter diffraction (EBSD) is a scanning electron microscopy (SEM) technique used to study the crystallographic structure of materials. EBSD is carried out in a scanning electron microscope equipped with an EBSD detector comprising at least a phosphorescent screen, a compact lens and a low-light camera. In the microscope an incident beam of electrons hits a tilted sample. As backscattered electrons leave the sample, they interact with the atoms and are both elastically diffracted and lose energy, leaving the sample at various scattering angles before reaching the phosphor screen forming Kikuchi patterns (EBSPs). The EBSD spatial resolution depends on many factors, including the nature of the material under study and the sample preparation. They can be indexed to provide information about the material's grain structure, grain orientation, and phase at the micro-scale. EBSD is used for impurities and defect studies, plastic deformation, and statistical analysis for average misorientation, grain size, and crystallographic texture. EBSD can also be combined with energy-dispersive X-ray spectroscopy (EDS), cathodoluminescence (CL), and wavelength-dispersive X-ray spectroscopy (WDS) for advanced phase identification and materials discovery. The change and sharpness of the electron backscatter patterns (EBSPs) provide information about lattice distortion in the diffracting volume. Pattern sharpness can be used to assess the level of plasticity. Changes in the EBSP zone axis position can be used to measure the residual stress and small lattice rotations. EBSD can also provide information about the density of geometrically necessary dislocations (GNDs). However, the lattice distortion is measured relative to a reference pattern (EBSP0). The choice of reference pattern affects the measurement precision; e.g., a reference pattern deformed in tension will directly reduce the tensile strain magnitude derived from a high-resolution map while indirectly influencing the magnitude of other components and the spatial distribution of strain. Furthermore, the choice of EBSP0 slightly affects the GND density distribution and magnitude. Pattern formation and collection Setup geometry and pattern formation For electron backscattering diffraction microscopy, a flat polished crystalline specimen is usually placed inside the microscope chamber. The sample is tilted at ~70° from Scanning electron microscope (SEM) flat specimen positioning and 110° to the electron backscatter diffraction (EBSD) detector. Tilting the sample elongates the interaction volume perpendicular to the tilt axis, allowing more electrons to leave the sample providing better signal. A high-energy electron beam (typically 20 kV) is focused on a small volume and scatters with a spatial resolution of ~20 nm at the specimen surface. The spatial resolution varies with the beam energy, angular width, interaction volume, nature of the material under study, and, in transmission Kikuchi diffraction (TKD), with the specimen thickness; thus, increasing the beam energy increases the interaction volume and decreases the spatial resolution. The EBSD detector is located within the specimen chamber of the SEM at an angle of approximately 90° to the pole piece. The EBSD detector is typically a phosphor screen that is excited by the backscattered electrons. The screen is coupled to lens which focuses the image from the phosphor screen onto a charge-coupled device (CCD) or complementary metal–oxide–semiconductor (CMOS) camera. 
In this configuration, as the backscattered electrons leave the sample, they interact with the Coulomb potential and also lose energy due to inelastic scattering leading to a range of scattering angles (θ_hkl). The backscattered electrons form Kikuchi lines – having different intensities – on an electron-sensitive flat film/screen (commonly phosphor), gathered to form a Kikuchi band. These Kikuchi lines are the traces of hyperbolae formed by the intersection of Kossel cones with the plane of the phosphor screen. The width of a Kikuchi band is related to the scattering angles and, thus, to the distance d_hkl between lattice planes with Miller indices h, k, and l. These Kikuchi lines and patterns were named after Seishi Kikuchi, who, together with Shoji Nishikawa, was the first to notice this diffraction pattern in 1928 using transmission electron microscopy (TEM), which is similar in geometry to the X-ray Kossel pattern. The systematically arranged Kikuchi bands, which have a range of intensity along their width, intersect around the centre of the regions of interest (ROI), describing the probed volume crystallography. These bands and their intersections form what is known as Kikuchi patterns or electron backscatter patterns (EBSPs). To improve contrast, the patterns' background is corrected by removing anisotropic/inelastic scattering using static background correction or dynamic background correction. EBSD detectors EBSD is conducted using an SEM equipped with an EBSD detector containing at least a phosphor screen, compact lens and low-light Charge-coupled device (CCD) or Complementary metal–oxide–semiconductor (CMOS) camera. Commercially available EBSD systems typically come with one of two different CCD cameras: for fast measurements, the CCD chip has a native resolution of 640×480 pixels; for slower, and more sensitive measurements, the CCD chip resolution can go up to 1600×1200 pixels. The biggest advantage of the high-resolution detectors is their higher sensitivity, and therefore the information within each diffraction pattern can be analysed in more detail. For texture and orientation measurements, the diffraction patterns are binned to reduce their size and computational times. Modern CCD-based EBSD systems can index patterns at a speed of up to 1800 patterns/second. This enables rapid and rich microstructural maps to be generated. Sample preparation The sample should be vacuum stable. It is typically mounted using a conductive compound (e.g. an epoxy thermoset filled with Cu), which minimises image drift and sample charging under electron beam irradiation. EBSP quality is sensitive to surface preparation. Typically the sample is ground using SiC papers from 240 to 4000 grit, and polished using diamond paste (from 9 to 1 μm) then in 50 nm colloidal silica. Afterwards, it is cleaned in ethanol, rinsed with deionised water, and dried with a hot air blower. This may be followed by ion beam polishing, for final surface preparation. Inside the SEM, the size of the measurement area determines local resolution and measurement time. Usual settings for high-quality EBSPs are 15 nA current, 20 kV beam energy, 18 mm working distance, long exposure time, and minimal CCD pixel binning. The EBSD phosphor screen is set at an 18 mm working distance and a map's step size of less than 0.5 μm for strain and dislocation density analysis. 
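Returning to the pattern-formation geometry described above: the relation between Kikuchi band width, scattering angle and lattice spacing d_hkl follows from Bragg's law, n·λ = 2·d·sin(θ), which is standard but not stated explicitly in the text. A small illustrative calculation (assumed values) of the relativistically corrected electron wavelength at 20 kV and the resulting Bragg angle shows why the bands are only a degree or two wide:

```python
import math

H = 6.62607015e-34      # Planck constant, J*s
M0 = 9.1093837015e-31   # electron rest mass, kg
E0 = 1.602176634e-19    # elementary charge, C
C = 2.99792458e8        # speed of light, m/s

def electron_wavelength_m(accel_voltage_V):
    """Relativistically corrected de Broglie wavelength of a beam electron."""
    eV = E0 * accel_voltage_V
    return H / math.sqrt(2.0 * M0 * eV * (1.0 + eV / (2.0 * M0 * C**2)))

def bragg_angle_deg(d_hkl_m, accel_voltage_V, n=1):
    """Bragg angle theta from n*lambda = 2*d*sin(theta)."""
    lam = electron_wavelength_m(accel_voltage_V)
    return math.degrees(math.asin(n * lam / (2.0 * d_hkl_m)))

lam = electron_wavelength_m(20e3)
print(f"lambda at 20 kV ~ {lam * 1e12:.2f} pm")                            # roughly 8.6 pm
print(f"theta for d = 0.2 nm ~ {bragg_angle_deg(0.2e-9, 20e3):.2f} deg")   # roughly 1.2 deg
```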
Decomposition of gaseous hydrocarbons and also hydrocarbons on the surface of samples by the electron beam inside the microscope results in carbon deposition, which degrades the quality of EBSPs inside the probed area compared to the EBSPs outside the acquisition window. The gradient of pattern degradation increases moving inside the probed zone with an apparent accumulation of deposited carbon. The black spots from the beam instant-induced carbon deposition also highlight the immediate deposition even if agglomeration did not happen. Depth resolution There is no agreement about the definition of depth resolution. For example, it can be defined as the depth where ~92% of the signal is generated, or defined by pattern quality, or can be as ambiguous as "where useful information is obtained". Even for a given definition, depth resolution increases with electron energy and decreases with the average atomic mass of the elements making up the studied material: for example, it was estimated as 40 nm for Si and 10 nm for Ni at 20 kV energy. Unusually small values were reported for materials whose structure and composition vary along the thickness. For example, coating monocrystalline silicon with a few nm of amorphous chromium reduces the depth resolution to a few nm at 15 kV energy. In contrast, Isabell and David concluded that depth resolution in homogeneous crystals could also extend up to 1 μm due to inelastic scattering (including tangential smearing and channelling effect). A recent comparison between reports on EBSD depth resolution, Koko et al indicated that most publications do not present a rationale for the definition of depth resolution, while not including information on the beam size, tilt angle, beam-to-sample and sample-to-detector distances. These are critical parameters for determining or simulating the depth resolution. The beam current is generally not considered to affect the depth resolution in experiments or simulations. However, it affects the beam spot size and signal-to-noise ratio, and hence, indirectly, the details of the pattern and its depth information. Monte Carlo simulations provide an alternative approach to quantifying the depth resolution for EBSPs formation, which can be estimated using the Bloch wave theory, where backscattered primary electrons – after interacting with the crystal lattice – exit the surface, carrying information about the crystallinity of the volume interacting with the electrons. The backscattered electrons (BSE) energy distribution depends on the material's characteristics and the beam conditions. This BSE wave field is also affected by the thermal diffuse scattering process that causes incoherent and inelastic (energy loss) scattering – after the elastic diffraction events – which does not, yet, have a complete physical description that can be related to mechanisms that constitute EBSP depth resolution. Both the EBSD experiment and simulations typically make two assumptions: that the surface is pristine and has a homogeneous depth resolution; however, neither of them is valid for a deformed sample. Orientation and phase mapping Pattern indexing If the setup geometry is well described, it is possible to relate the bands present in the diffraction pattern to the underlying crystal and crystallographic orientation of the material within the electron interaction volume. Each band can be indexed individually by the Miller indices of the diffracting plane which formed it. 
In most materials, only three bands/planes intersect and are required to describe a unique solution to the crystal orientation (based on their interplanar angles). Most commercial systems use look-up tables with international crystal databases to index. This crystal orientation relates the orientation of each sampled point to a reference crystal orientation. Indexing is often the first step in the EBSD process after pattern collection. This allows for the identification of the crystal orientation at the single volume of the sample from where the pattern was collected. With EBSD software, pattern bands are typically detected via a mathematical routine using a modified Hough transform, in which every pixel in Hough space denotes a unique line/band in the EBSP. The Hough transform enables the detection of bands, which are difficult to locate by computer in the original EBSP. Once the band locations have been detected, it is possible to relate these locations to the underlying crystal orientation, as angles between bands represent angles between lattice planes. Thus, an orientation solution can be determined when the positions of/angles between three bands are known. In highly symmetric materials, more than three bands are typically used to obtain and verify the orientation measurement. The diffraction pattern is pre-processed to remove noise, correct for detector distortions, and normalise the intensity. Then, the pre-processed diffraction pattern is compared to a library of reference patterns for the material being studied. The reference patterns are generated based on the material's known crystal structure and the crystal lattice's orientation. The orientation of the crystal lattice that would generate the best match to the measured pattern is determined using a variety of algorithms. There are three leading methods of indexing performed by most commercial EBSD software: triplet voting; minimising the 'fit' between the experimental pattern and a computationally determined orientation; and neighbour pattern averaging and re-indexing (NPAR). Indexing then gives a unique solution to the single crystal orientation that is related to the other crystal orientations within the field-of-view. Triplet voting involves identifying multiple 'triplets' associated with different solutions to the crystal orientation; each crystal orientation determined from each triplet receives one vote. Should four bands identify the same crystal orientation, then four (four choose three, i.e. C(4,3) = 4) votes will be cast for that particular solution. Thus the candidate orientation with the highest number of votes will be the most likely solution to the underlying crystal orientation present. The number of votes for the solution chosen compared to the total number of votes describes the confidence in the underlying solution. Care must be taken in interpreting this 'confidence index' as some pseudo-symmetric orientations may result in low confidence for one candidate solution vs another. Minimising the fit involves starting with all possible orientations for a triplet. More bands are included, which reduces the number of candidate orientations. As the number of bands increases, the number of possible orientations converges ultimately to one solution. The 'fit' between the measured orientation and the captured pattern can be determined. 
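As a rough illustration of the Hough-transform band-detection step described above, the sketch below finds straight bands in a toy pattern using scikit-image; it is a simplified stand-in for the modified Hough routines in commercial EBSD software, and the image, peak count and parameters are assumptions for illustration only.

```python
import numpy as np
from skimage.transform import hough_line, hough_line_peaks

# Toy "pattern": a dark image with two bright straight bands crossing it.
pattern = np.zeros((200, 200))
rows = np.arange(200)
pattern[rows, np.clip((0.7 * rows + 30).astype(int), 0, 199)] = 1.0   # slanted band
pattern[100, :] = 1.0                                                  # horizontal band

# Standard straight-line Hough transform: each peak in (angle, distance)
# space corresponds to one band in the pattern.
angles = np.linspace(-np.pi / 2, np.pi / 2, 360, endpoint=False)
hspace, thetas, dists = hough_line(pattern, theta=angles)

for _, angle, dist in zip(*hough_line_peaks(hspace, thetas, dists, num_peaks=2)):
    print(f"band: angle = {np.degrees(angle):6.1f} deg, distance = {dist:6.1f} px")
```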
Overall, indexing diffraction patterns in EBSD involves a complex set of algorithms and calculations, but is essential for determining the crystallographic structure and orientation of materials at a high spatial resolution. The indexing process is continually evolving, with new algorithms and techniques being developed to improve the accuracy and speed of the process. Afterwards, a confidence index is calculated to determine the quality of the indexing result. The confidence index is based on the match quality between the measured and reference patterns. In addition, it considers factors such as noise level, detector resolution, and sample quality. While this geometric description related to the kinematic solution using the Bragg condition is very powerful and useful for orientation and texture analysis, it only describes the geometry of the crystalline lattice. It ignores many physical processes involved within the diffracting material. To adequately describe finer features within the electron beam scattering pattern (EBSP), one must use a many-beam dynamical model (e.g. the variation in band intensities in an experimental pattern does not fit the kinematic solution related to the structure factor). Pattern centre To relate the orientation of a crystal, much like in X-ray diffraction (XRD), the geometry of the system must be known. In particular, the pattern centre describes the distance of the interaction volume to the detector and the location of the nearest point between the phosphor and the sample, on the phosphor screen. Early work used a single crystal of known orientation being inserted into the SEM chamber, and a particular feature of the EBSP was known to correspond to the pattern centre. Later developments involved exploiting various geometric relationships between the generation of an EBSP and the chamber geometry (shadow casting and phosphor movement). Unfortunately, each of these methods is cumbersome and can be prone to some systematic errors for a general operator. Typically they cannot be easily used in modern SEMs with multiple designated uses. Thus, most commercial EBSD systems use the indexing algorithm combined with an iterative movement of crystal orientation and suggested pattern centre location. Minimising the fit between bands located within experimental patterns and those in look-up tables tends to converge on the pattern centre location to an accuracy of ~0.5–1% of the pattern width. The recent development of AstroEBSD and PCGlobal, open-source MATLAB codes, increased the precision of determining the pattern centre (PC) and – consequently – elastic strains by using a pattern matching approach which simulates the pattern using EMSoft. EBSD mapping The indexing results are used to generate a map of the crystallographic orientation at each point on the surface being studied. Thus, scanning the electron beam in a prescribed fashion (typically in a square or hexagonal grid, correcting for the image foreshortening due to the sample tilt) results in many rich microstructural maps. These maps can spatially describe the crystal orientation of the material being interrogated and can be used to examine microtexture and sample morphology. Some maps describe grain orientation, boundary, and diffraction pattern (image) quality. Various statistical tools can measure the average misorientation, grain size, and crystallographic texture. From this dataset, numerous maps, charts and plots can be generated. 
The orientation data can be visualised using a variety of techniques, including colour-coding, contour lines, and pole figures. Microscope misalignment, image shift, scan distortion that increases with decreasing magnification, roughness and contamination of the specimen surface, boundary indexing failure and detector quality can lead to uncertainties in determining the crystal orientation. The EBSD signal-to-noise ratio depends on the material and decreases at excessive acquisition speed and beam current, thereby affecting the angular resolution of the measurement. Strain measurement Full-field displacement, elastic strain, and the GND density provide quantifiable information about the material's elastic and plastic behaviour at the microscale. Measuring strain at the microscale requires careful consideration of other key details besides the change in length/shape (e.g., local texture, individual grain orientations). These micro-scale features can be measured using different techniques, e.g., hole drilling, monochromatic or polychromatic energy-dispersive X-ray diffraction or neutron diffraction (ND). EBSD has a high spatial resolution and is relatively sensitive and easy to use compared to other techniques. Strain measurements using EBSD can be performed at a high spatial resolution, allowing researchers to study the local variation in strain within a material. This information can be used to study the deformation and mechanical behaviour of materials, to develop models of material behaviour under different loading conditions, and to optimise the processing and performance of materials. Overall, strain measurement using EBSD is a powerful tool for studying the deformation and mechanical behaviour of materials, and is widely used in materials science and engineering research and development. Earlier trials The change and degradation in electron backscatter patterns (EBSPs) provide information about the diffracting volume. Pattern degradation (i.e., diffuse quality) can be used to assess the level of plasticity through the pattern/image quality (IQ), where IQ is calculated from the sum of the peaks detected when using the conventional Hough transform. Wilkinson first used the changes in high-order Kikuchi line positions to determine the elastic strains, albeit with low precision (0.3% to 1%); however, this approach cannot be used for characterising residual elastic strain in metals as the elastic strain at the yield point is usually around 0.2%. Measuring strain by tracking the change in the higher-order Kikuchi lines is practical when the strain is small, as the band position is sensitive to changes in lattice parameters. In the early 1990s, Troost et al. and Wilkinson et al. used pattern degradation and change in the zone axis position to measure the residual elastic strains and small lattice rotations with a 0.02% precision. High-resolution electron backscatter diffraction (HR-EBSD) Cross-correlation-based, high angular resolution electron backscatter diffraction (HR-EBSD) – introduced by Wilkinson et al. – is an SEM-based technique to map relative elastic strains and rotations, and estimate the geometrically necessary dislocation (GND) density in crystalline materials. HR-EBSD method uses image cross-correlation to measure pattern shifts between regions of interest (ROI) in different electron backscatter diffraction patterns (EBSPs) with sub-pixel precision. 
As a result, the relative lattice distortion between two points in a crystal can be calculated using pattern shifts from at least four non-collinear ROI. In practice, pattern shifts are measured in more than 20 ROI per EBSP to find a best-fit solution to the deformation gradient tensor, representing the relative lattice distortion. The displacement gradient tensor (or local lattice distortion) relates the measured geometrical shifts in the pattern between the collected point pattern and its associated (non-coplanar) vector, and the reference point pattern and its associated vector. Thus, the pattern-shift vector can be written in terms of the displacement gradient tensor acting on the corresponding direction vector. The shifts are measured in the phosphor (detector) plane, and the relationship is simplified; thus, eight out of the nine displacement gradient tensor components can be calculated by measuring the shift at four distinct, widely spaced regions on the EBSP. This shift is then corrected to the sample frame (flipped around the Y-axis) because the EBSP is recorded on the phosphor screen and is inverted as in a mirror. They are then corrected around the X-axis by 24° (i.e., 20° sample tilt plus ≈4° camera tilt and assuming no angular effect from the beam movement). Using infinitesimal strain theory, the deformation gradient is then split into an elastic strain (the symmetric part) and lattice rotations (the antisymmetric part). These measurements do not provide information about the volumetric/hydrostatic strain tensors. By imposing the boundary condition that the stress normal to the surface (σ33) is zero (i.e., a traction-free surface), and using Hooke's law with anisotropic elastic stiffness constants, the missing ninth degree of freedom can be estimated in this constrained minimisation problem by using a nonlinear solver together with the crystal's anisotropic stiffness tensor. These two equations are solved to re-calculate the refined elastic deviatoric strain, including the missing ninth (spherical) strain component. An alternative approach that considers the full strain tensor can be found in the literature. Finally, the stress and strain tensors are linked using the crystal anisotropic stiffness tensor (Cijkl), using the Einstein summation convention and the symmetry of the stress tensor (σij = σji). The quality of the produced data can be assessed by taking the geometric mean of all the ROIs' correlation intensity/peaks. A value lower than 0.25 should indicate problems with the EBSPs' quality. Furthermore, the geometrically necessary dislocation (GND) density can be estimated from the HR-EBSD measured lattice rotations by relating the rotation axis and angle between neighbour map points to the dislocation types and densities in a material using Nye's tensor. Precision and development The HR-EBSD method can achieve a precision of ±10⁻⁴ in components of the displacement gradient tensors (i.e., variations in lattice strain and lattice rotation in radians) by measuring the shifts of zone axes within the pattern image with a resolution of ±0.05 pixels. It was limited to small strains and rotations (<1.5°) until Britton and Wilkinson and Maurice et al. raised the rotation limit to ~11° by using a re-mapping technique that recalculated the strain after transforming the patterns with a rotation matrix calculated from the first cross-correlation iteration. However, further lattice rotation, typically caused by severe plastic deformations, produced errors in the elastic strain calculations. 
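The symmetric/antisymmetric split used above is simple to state in code. The following minimal numpy sketch (with illustrative numbers, not measured data) splits a small displacement gradient tensor into its infinitesimal elastic strain and lattice rotation parts, in the spirit of the HR-EBSD decomposition described in the text:

```python
import numpy as np

# Illustrative (assumed) displacement gradient tensor, of order 1e-3,
# as might be recovered from HR-EBSD pattern shifts.
beta = np.array([[ 1.0e-3,  4.0e-4, 0.0   ],
                 [-2.0e-4, -5.0e-4, 1.0e-4],
                 [ 0.0,     3.0e-4, 2.0e-4]])

strain   = 0.5 * (beta + beta.T)   # symmetric part: infinitesimal elastic strain
rotation = 0.5 * (beta - beta.T)   # antisymmetric part: small lattice rotation (radians)

print("elastic strain:\n", strain)
print("lattice rotation:\n", rotation)
# The decomposition is exact: beta is recovered as their sum.
assert np.allclose(strain + rotation, beta)
```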
To address this problem, Ruggles et al. improved the HR-EBSD precision, even at 12° of lattice rotation, using the inverse compositional Gauss–Newton-based (ICGN) method instead of cross-correlation. For simulated patterns, Vermeij and Hoefnagels also established a method that achieves a precision of ±10−5 in the displacement gradient components using a full-field integrated digital image correlation (IDIC) framework instead of dividing the EBSPs into small ROIs. Patterns in IDIC are distortion-corrected to negate the need for re-mapping up to ~14°. These measurements do not provide information about the hydrostatic or volumetric strains, because there is no change in the orientations of lattice planes (crystallographic directions), but only changes in the position and width of the Kikuchi bands. The reference pattern problem In HR-EBSD analysis, the lattice distortion field is calculated relative to a reference pattern or point (EBSP0) per grain in the map, and is dependent on the lattice distortion at the point. The lattice distortion field in each grain is measured with respect to this point; therefore, the absolute lattice distortion at the reference point (relative to the unstrained crystal) is excluded from the HR-EBSD elastic strain and rotation maps. This 'reference pattern problem' is similar to the 'd0 problem' in X-ray diffraction, and affects the nominal magnitude of HR-EBSD stress fields. However, selecting the reference pattern (EBSP0) plays a key role, as severely deformed EBSP0 adds phantom lattice distortions to the map values, thus, decreasing the measurement precision. The local lattice distortion at the EBSP0 influences the resultant HR-EBSD map, e.g., a reference pattern deformed in tension will directly reduce the HR-EBSD map tensile strain magnitude while indirectly influencing the other component magnitude and the strain's spatial distribution. Furthermore, the choice of EBSP0 slightly affects the GND density distribution and magnitude, and choosing a reference pattern with a higher GND density reduces the cross-correlation quality, changes the spatial distribution and induces more errors than choosing a reference pattern with high lattice distortion. Additionally, there is no apparent connection between EBSP0's IQ and EBSP0's local lattice distortion. The use of simulated reference patterns for absolute strain measurement is still an active area of research and scrutiny as difficulties arise from the variation of inelastic electron scattering with depth which limits the accuracy of dynamical diffraction simulation models, and imprecise determination of the pattern centre which leads to phantom strain components which cancel out when using experimentally acquired reference patterns. Other methods assumed that absolute strain at EBSP0 can be determined using crystal plasticity finite-element (CPFE) simulations, which then can be then combined with the HR-EBSD data (e.g., using linear 'top-up' method or displacement integration) to calculate the absolute lattice distortions. In addition, GND density estimation is nominally insensitive to (or negligibly dependent upon) EBSP0 choice, as only neighbour point-to-point differences in the lattice rotation maps are used for GND density calculation. However, this assumes that the absolute lattice distortion of EBSP0 only changes the relative lattice rotation map components by a constant value which vanishes during derivative operations, i.e., lattice distortion distribution is insensitive to EBSP0 choice. 
Selecting a reference pattern Criteria for EBSP0 selection can be one or a mixture of: Selecting from points with low GND density or low Kernel average misorientation (KAM) based on the Hough measured local grain misorientations; Selecting from points with high image quality (IQ), which may have a low defect density within its electron interaction volume, is therefore assumed to be a low-strained region of a polycrystalline material. However, IQ does not carry a clear physical meaning, and the magnitudes of the measured relative lattice distortion are insensitive to the IQ of EBSP0; EBSP0 can also be manually selected to be far from potential stress concentrations such as grain boundaries, inclusions, or cracks using subjective criteria; Selecting an EBSP0 after examining the empirical relationship between the cross-correlation parameter and angular error, used in an iterative algorithm to identify the optimal reference pattern that maximises the precision of HR-EBSD. These criteria assume these parameters can indicate the strain conditions at the reference point, which can produce an accurate measurements of up to 3.2×10−4 elastic strain. However, experimental measurements point to the inaccuracy of HR-EBSD in determining the out-of-plane shear strain components distribution and magnitude. Applications EBSD is used in a wide range of applications, including materials science and engineering, geology, and biological research. In materials science and engineering, EBSD is used to study the microstructure of metals, ceramics, and polymers, and to develop models of material behaviour. In geology, EBSD is used to study the crystallographic structure of minerals and rocks. In biological research, EBSD is used to study the microstructure of biological tissues and to investigate the structure of biological materials such as bone and teeth. Scattered electron imaging EBSD detectors can have forward scattered electron diodes (FSD) at the bottom, in the middle (MSD) and at the top of the detector. Forward-scattered electron (FSE) imaging involves collecting electrons scattered at small angles from the surface of a sample, which provides information about the surface topography and composition. The FSE signal is also sensitive to the crystallographic orientation of the sample. By analysing the intensity and contrast of the FSE signal, researchers can determine the crystallographic orientation of each pixel in the image. The FSE signal is typically collected simultaneously with the BSE signal in EBSD analysis. The BSE signal is sensitive to the average atomic number of the sample, and is used to generate an image of the surface topography and composition. The FSE signal is superimposed on the BSE image to provide information about the crystallographic orientation. Image generation has a lot of freedom when using the EBSD detector as an imaging device. An image created using a combination of diodes is called virtual or VFSD. It is possible to acquire images at a rate akin to slow scan imaging in the SEM by excessive binning of the EBSD CCD camera. It is possible to suppress or isolate the contrast of interest by creating composite images from simultaneously captured images, which offers a wide range of combinations for assessing various microstructure characteristics. Nevertheless, VFSD images do not include the quantitative information inherent to traditional EBSD maps; they simply offer representations of the microstructure. 
Integrated EBSD/EDS mapping When simultaneous EDS/EBSD collection can be achieved, the capabilities of both techniques can be enhanced. There are applications where sample chemistry or phase cannot be differentiated via EDS alone because of similar composition, and structure cannot be solved with EBSD alone because of ambiguous structure solutions. To accomplish integrated mapping, the analysis area is scanned, and at each point, Hough peaks and EDS region-of-interest counts are stored. Positions of phases are determined in X-ray maps, and each element's measured EDS intensities are given in charts. The chemical intensity ranges are set for each phase to select the grains. All patterns are then re-indexed off-line. The recorded chemistry determines which phase/crystal-structure file is used to index each point. Each pattern is indexed by only one phase, and maps displaying distinguished phases are generated. The interaction volumes for EDS and EBSD are significantly different (on the order of micrometres compared to tens of nanometres), and the shape of these volumes using a highly tilted sample may have implications on algorithms for phase discrimination. EBSD, when used together with other in-SEM techniques such as cathodoluminescence (CL), wavelength dispersive X-ray spectroscopy (WDS) and/or EDS can provide a deeper insight into the specimen's properties and enhance phase identification. For example, the minerals calcite (limestone) and aragonite (shell) have the same chemical composition – calcium carbonate (CaCO3) therefore EDS/WDS cannot tell them apart, but they have different microcrystalline structures so EBSD can differentiate between them. Integrated EBSD/DIC mapping EBSD and digital image correlation (DIC) can be used together to analyse the microstructure and deformation behaviour of materials. DIC is a method that uses digital image processing techniques to measure deformation and strain fields in materials. By combining EBSD and DIC, researchers can obtain both crystallographic and mechanical information about a material simultaneously. This allows for a more comprehensive understanding of the relationship between microstructure and mechanical behaviour, which is particularly useful in fields such as materials science and engineering. DIC can identify regions of strain localisation in a material, while EBSD can provide information about the microstructure in these regions. By combining these techniques, researchers can gain insights into the mechanisms responsible for the observed strain localisation. For example, EBSD can be used to determine the grain orientations and boundary misorientations before and after deformation. In contrast, DIC can be used to measure the strain fields in the material during deformation. Or EBSD can be used to identify the activation of different slip systems during deformation, while DIC can be used to measure the associated strain fields. By correlating these data, researchers can better understand the role of different deformation mechanisms in the material's mechanical behaviour. Overall, the combination of EBSD and DIC provides a powerful tool for investigating materials' microstructure and deformation behaviour. This approach can be applied to a wide range of materials and deformation conditions and has the potential to yield insights into the fundamental mechanisms underlying mechanical behaviour. 3D EBSD 3D EBSD combines EBSD with serial sectioning methods to create a three-dimensional map of a material's crystallographic structure. 
The technique involves serially sectioning a sample into thin slices, and then using EBSD to map the crystallographic orientation of each slice. The resulting orientation maps are then combined to generate a 3D map of the crystallographic structure of the material. The serial sectioning can be performed using a variety of methods, including mechanical polishing, focused ion beam (FIB) milling, or ultramicrotomy. The choice of sectioning method depends on the size and shape of the sample, on its chemical composition, reactivity and mechanical properties, as well as the desired resolution and accuracy of the 3D map. 3D EBSD has several advantages over traditional 2D EBSD. First, it provides a complete picture of a material's crystallographic structure, allowing for a more accurate and detailed analysis of the microstructure. Second, it can be used to study complex microstructures, such as those found in composite materials or multi-phase alloys. Third, it can be used to study the evolution of microstructure over time, such as during deformation or heat treatment. However, 3D EBSD also has some limitations. It requires extensive data acquisition and processing, proper alignment between slices, and can be time-consuming and computationally intensive. In addition, the quality of the 3D map depends on the quality of the individual EBSD maps, which can be affected by factors such as sample preparation, data acquisition parameters, and analysis methods. Overall, 3D EBSD is a powerful technique for studying the crystallographic structure of materials in three dimensions, and is widely used in materials science and engineering research and development. Notes References Further reading External links Codes Videos Diffraction Scientific techniques Spectroscopy Scattering
Electron backscatter diffraction
[ "Physics", "Chemistry", "Materials_science" ]
7,407
[ "Molecular physics", "Spectrum (physical sciences)", "Instrumental analysis", "Scattering", "Diffraction", "Crystallography", "Particle physics", "Condensed matter physics", "Nuclear physics", "Spectroscopy" ]
1,678,626
https://en.wikipedia.org/wiki/Hyperfunction
In mathematics, hyperfunctions are generalizations of functions, as a 'jump' from one holomorphic function to another at a boundary, and can be thought of informally as distributions of infinite order. Hyperfunctions were introduced by Mikio Sato in 1958 in Japanese, (1959, 1960 in English), building upon earlier work by Laurent Schwartz, Grothendieck and others. Formulation A hyperfunction on the real line can be conceived of as the 'difference' between one holomorphic function defined on the upper half-plane and another on the lower half-plane. That is, a hyperfunction is specified by a pair (f, g), where f is a holomorphic function on the upper half-plane and g is a holomorphic function on the lower half-plane. Informally, the hyperfunction is what the difference f − g would be at the real line itself. This difference is not affected by adding the same holomorphic function to both f and g, so if h is a holomorphic function on the whole complex plane, the hyperfunctions (f, g) and (f + h, g + h) are defined to be equivalent. Definition in one dimension The motivation can be concretely implemented using ideas from sheaf cohomology. Let O be the sheaf of holomorphic functions on C. Define the hyperfunctions on the real line as the first local cohomology group: B(R) = H¹_R(C, O). Concretely, let H⁺ and H⁻ be the upper half-plane and lower half-plane respectively. Then C ∖ R = H⁺ ∪ H⁻, so B(R) = [O(H⁺) ⊕ O(H⁻)] / O(C). Since the zeroth cohomology group of any sheaf is simply the global sections of that sheaf, we see that a hyperfunction is a pair of holomorphic functions, one each on the upper and lower complex half-plane, modulo entire holomorphic functions. More generally one can define B(U) for any open set U ⊆ R as the quotient B(U) = O(U′ ∖ U) / O(U′), where U′ is any open subset of C with U′ ∩ R = U. One can show that this definition does not depend on the choice of U′, giving another reason to think of hyperfunctions as "boundary values" of holomorphic functions. Examples If f is any holomorphic function on the whole complex plane, then the restriction of f to the real axis is a hyperfunction, represented by either (f, 0) or (0, −f). The Heaviside step function can be represented as H(x) = (−(1/2πi) log(−z), −(1/2πi) log(−z)), where log is the principal value of the complex logarithm of −z. The Dirac delta "function" is represented by δ(x) = (−1/(2πiz), −1/(2πiz)). This is really a restatement of Cauchy's integral formula. To verify it one can calculate the integration of f just below the real line, and subtract integration of g just above the real line - both from left to right. Note that the hyperfunction can be non-trivial, even if the components are analytic continuation of the same function. Also this can be easily checked by differentiating the Heaviside function. If g is a continuous function (or more generally a distribution) on the real line with support contained in a bounded interval I, then g corresponds to the hyperfunction (f, −f), where f is a holomorphic function on the complement of I defined by a Cauchy-type integral of g. This function f jumps in value by g(x) when crossing the real axis at the point x. The formula for f follows from the previous example by writing g as the convolution of itself with the Dirac delta function. Using a partition of unity one can write any continuous function (distribution) as a locally finite sum of functions (distributions) with compact support. This can be exploited to extend the above embedding to an embedding of the space of all distributions on R into the space of hyperfunctions. If f is any function that is holomorphic everywhere except for an essential singularity at 0 (for example, e^(1/z)), then (f, f) is a hyperfunction with support {0} that is not a distribution. 
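A short worked check of the delta-function representation above. It uses the Sokhotski–Plemelj boundary-value formula, which is not stated in the article, so this is offered as a sketch of the verification rather than the article's own argument.

```latex
% Boundary values of -1/(2\pi i z) from above and below the real axis,
% via Sokhotski–Plemelj: 1/(x \pm i0) = \mathrm{P}(1/x) \mp i\pi\,\delta(x).
\[
-\frac{1}{2\pi i}\,\frac{1}{x+i0}
  \;-\;\Bigl(-\frac{1}{2\pi i}\,\frac{1}{x-i0}\Bigr)
  \;=\; -\frac{1}{2\pi i}\Bigl(\frac{1}{x+i0}-\frac{1}{x-i0}\Bigr)
  \;=\; -\frac{1}{2\pi i}\bigl(-2\pi i\,\delta(x)\bigr)
  \;=\; \delta(x).
\]
% So the pair (f, g) = (-1/(2\pi i z), -1/(2\pi i z)) has boundary difference
% f - g equal to the Dirac delta. Differentiating the Heaviside representation
% -(1/2\pi i)\log(-z) componentwise in z gives -1/(2\pi i z), reproducing the
% same pair, which is consistent with H'(x) = \delta(x).
```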
If f has a pole of finite order at 0 then (f, f) is a distribution, so when f has an essential singularity then (f, f) looks like a "distribution of infinite order" at 0. (Note that distributions always have finite order at any point.) Operations on hyperfunctions Let be any open subset. By definition is a vector space such that addition and multiplication with complex numbers are well-defined. Explicitly: The obvious restriction maps turn into a sheaf (which is in fact flabby). Multiplication with real analytic functions and differentiation are well-defined: With these definitions becomes a D-module and the embedding is a morphism of D-modules. A point is called a holomorphic point of if restricts to a real analytic function in some small neighbourhood of If are two holomorphic points, then integration is well-defined: where are arbitrary curves with The integrals are independent of the choice of these curves because the upper and lower half plane are simply connected. Let be the space of hyperfunctions with compact support. Via the bilinear form one associates to each hyperfunction with compact support a continuous linear function on This induces an identification of the dual space, with A special case worth considering is the case of continuous functions or distributions with compact support: If one considers (or ) as a subset of via the above embedding, then this computes exactly the traditional Lebesgue integral. Furthermore: If is a distribution with compact support, is a real analytic function, and then Thus this notion of integration gives a precise meaning to formal expressions like which are undefined in the usual sense. Moreover: Because the real analytic functions are dense in is a subspace of . This is an alternative description of the same embedding . If is a real analytic map between open sets of , then composition with is a well-defined operator from to : See also Algebraic analysis Generalized function Distribution (mathematics) Microlocal analysis Pseudo-differential operator Sheaf cohomology References External links Algebraic analysis Complex analysis Generalized functions Sheaf theory
Hyperfunction
[ "Mathematics" ]
1,243
[ "Topology", "Sheaf theory", "Mathematical structures", "Category theory" ]
1,678,715
https://en.wikipedia.org/wiki/Numerical%20method
In numerical analysis, a numerical method is a mathematical tool designed to solve numerical problems. The implementation of a numerical method with an appropriate convergence check in a programming language is called a numerical algorithm. Mathematical definition Let be a well-posed problem, i.e. is a real or complex functional relationship, defined on the cross-product of an input data set and an output data set , such that there exists a locally Lipschitz function, called the resolvent, which has the property that for every root of , . We define a numerical method for the approximation of as a sequence of problems with , and for every . The problems of which the method consists need not be well-posed. If they are, the method is said to be stable or well-posed. Consistency Necessary conditions for a numerical method to effectively approximate are that and that behaves like when . So, a numerical method is called consistent if and only if the sequence of functions pointwise converges to on the set of its solutions: When on the method is said to be strictly consistent. Convergence Denote by a sequence of admissible perturbations of for some numerical method (i.e. ) and with the value such that . A condition which the method has to satisfy to be a meaningful tool for solving the problem is convergence: One can easily prove that the pointwise convergence of to implies the convergence of the associated method. See also Numerical methods for ordinary differential equations Numerical methods for partial differential equations References Numerical analysis
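As a concrete illustration of a "sequence of problems" approximating a given problem, the sketch below (an assumed toy example, not taken from the text) approximates the root of F(x) = x − e^(−x) = 0 by the fixed-point iterates x_{k+1} = e^(−x_k); the residual shrinking with n loosely illustrates the consistency and convergence ideas described above.

```python
import math

def F(x):
    """The continuous problem: find x with F(x) = x - exp(-x) = 0."""
    return x - math.exp(-x)

def numerical_method(n_steps, x0=0.5):
    """A simple approximating sequence: fixed-point iteration x_{k+1} = exp(-x_k).

    Each step defines an easier sub-problem whose solution approaches the
    root of F as n_steps grows.
    """
    x = x0
    for _ in range(n_steps):
        x = math.exp(-x)
    return x

for n in (1, 5, 20, 60):
    x_n = numerical_method(n)
    print(f"n = {n:2d}: x_n = {x_n:.10f}, residual F(x_n) = {F(x_n):+.2e}")
```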
Numerical method
[ "Mathematics" ]
297
[ "Mathematical relations", "Computational mathematics", "Approximations", "Numerical analysis" ]
1,678,795
https://en.wikipedia.org/wiki/Boundary%20element%20method
The boundary element method (BEM) is a numerical computational method of solving linear partial differential equations which have been formulated as integral equations (i.e. in boundary integral form), including fluid mechanics, acoustics, electromagnetics (where the technique is known as method of moments or abbreviated as MoM), fracture mechanics, and contact mechanics. Mathematical basis The integral equation may be regarded as an exact solution of the governing partial differential equation. The boundary element method attempts to use the given boundary conditions to fit boundary values into the integral equation, rather than values throughout the space defined by a partial differential equation. Once this is done, in the post-processing stage, the integral equation can then be used again to calculate numerically the solution directly at any desired point in the interior of the solution domain. BEM is applicable to problems for which Green's functions can be calculated. These usually involve fields in linear homogeneous media. This places considerable restrictions on the range and generality of problems to which boundary elements can usefully be applied. Nonlinearities can be included in the formulation, although they will generally introduce volume integrals which then require the volume to be discretised before solution can be attempted, removing one of the most often cited advantages of BEM. A useful technique for treating the volume integral without discretising the volume is the dual-reciprocity method. The technique approximates part of the integrand using radial basis functions (local interpolating functions) and converts the volume integral into boundary integral after collocating at selected points distributed throughout the volume domain (including the boundary). In the dual-reciprocity BEM, although there is no need to discretize the volume into meshes, unknowns at chosen points inside the solution domain are involved in the linear algebraic equations approximating the problem being considered. The Green's function elements connecting pairs of source and field patches defined by the mesh form a matrix, which is solved numerically. Unless the Green's function is well behaved, at least for pairs of patches near each other, the Green's function must be integrated over either or both the source patch and the field patch. The form of the method in which the integrals over the source and field patches are the same is called "Galerkin's method". Galerkin's method is the obvious approach for problems which are symmetrical with respect to exchanging the source and field points. In frequency domain electromagnetics, this is assured by electromagnetic reciprocity. The cost of computation involved in naive Galerkin implementations is typically quite severe. One must loop over each pair of elements (so we get n2 interactions) and for each pair of elements we loop through Gauss points in the elements producing a multiplicative factor proportional to the number of Gauss-points squared. Also, the function evaluations required are typically quite expensive, involving trigonometric/hyperbolic function calls. Nonetheless, the principal source of the computational cost is this double-loop over elements producing a fully populated matrix. The Green's functions, or fundamental solutions, are often problematic to integrate as they are based on a solution of the system equations subject to a singularity load (e.g. the electrical field arising from a point charge). 
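The "fully populated matrix" cost discussed above can be made concrete with a toy collocation assembly: every boundary element interacts with every other through the Green's function, so the system matrix is dense and naive assembly is O(n²). The sketch below uses the 2D Laplace free-space Green's function −ln(r)/(2π) on a unit circle; the geometry, discretisation and the crude singular self-term are illustrative assumptions, not a working BEM solver.

```python
import numpy as np

n = 200  # number of boundary elements (collocation points) on a unit circle
angles = 2.0 * np.pi * (np.arange(n) + 0.5) / n
points = np.column_stack((np.cos(angles), np.sin(angles)))   # element midpoints
h = 2.0 * np.pi / n                                          # element length

# Dense n x n matrix of Green's-function interactions, G(r) = -ln(r)/(2*pi).
# The double loop over element pairs is what makes naive assembly O(n^2).
A = np.zeros((n, n))
for i in range(n):
    for j in range(n):
        if i == j:
            # Singular self-interaction: crude analytic estimate for a flat element.
            A[i, j] = h * (1.0 - np.log(h / 2.0)) / (2.0 * np.pi)
        else:
            r = np.linalg.norm(points[i] - points[j])
            A[i, j] = -h * np.log(r) / (2.0 * np.pi)

print("matrix shape:", A.shape)                              # (200, 200), fully populated
print("nonzero entries:", np.count_nonzero(A), "of", n * n)
```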
Integrating such singular fields is not easy. For simple element geometries (e.g. planar triangles) analytical integration can be used. For more general elements, it is possible to design purely numerical schemes that adapt to the singularity, but at great computational cost. Of course, when source point and target element (where the integration is done) are far apart, the local gradient surrounding the point need not be quantified exactly and it becomes possible to integrate easily due to the smooth decay of the fundamental solution. It is this feature that is typically employed in schemes designed to accelerate boundary element problem calculations. Derivation of closed-form Green's functions is of particular interest in the boundary element method, especially in electromagnetics. Specifically in the analysis of layered media, derivation of the spatial-domain Green's function necessitates the inversion of the analytically-derivable spectral-domain Green's function through the Sommerfeld path integral. This integral cannot be evaluated analytically and its numerical integration is costly due to its oscillatory and slowly-converging behaviour. For a robust analysis, spatial Green's functions are approximated as complex exponentials with methods such as Prony's method or the generalized pencil of function, and the integral is evaluated with the Sommerfeld identity. This method is known as the discrete complex image method. Comparison to other methods The boundary element method is often more efficient than other methods, including finite elements, in terms of computational resources for problems where there is a small surface/volume ratio. Conceptually, it works by constructing a "mesh" over the modelled surface. However, for many problems boundary element methods are significantly less efficient than volume-discretisation methods (finite element method, finite difference method, finite volume method). A good example of application of the boundary element method is efficient calculation of natural frequencies of liquid sloshing in tanks. The boundary element method is one of the most effective methods for numerical simulation of contact problems, in particular for simulation of adhesive contacts. Boundary element formulations typically give rise to fully populated matrices. This means that the storage requirements and computational time will tend to grow according to the square of the problem size. By contrast, finite element matrices are typically banded (elements are only locally connected) and the storage requirements for the system matrices typically grow quite linearly with the problem size. Compression techniques (e.g. multipole expansions or adaptive cross approximation/hierarchical matrices) can be used to ameliorate these problems, though at the cost of added complexity and with a success-rate that depends heavily on the nature of the problem being solved and the geometry involved. See also Analytic element method Computational electromagnetics Meshfree methods Immersed boundary method Stretched grid method Modified radial integration method References Bibliography Further reading External links An Online Resource for Boundary Elements What lies beneath the surface? 
A guide to the Boundary Element Method and Green's functions for the students and professionals An introductory BEM course (with a chapter on Green's functions) Boundary elements for plane crack problems Electromagnetic Modeling web site at Clemson University (includes list of currently available software) Concept Analyst Boundary Element Analysis software Klimpke, Bruce A Hybrid Magnetic Field Solver Using a Combined Finite Element/Boundary Element Field Solver, U.K. Magnetics Society Conference, 2003 which compares FEM and BEM methods as well as hybrid approaches Free software Bembel A 3D, isogeometric, higher-order, open-source BEM software for Laplace, Helmholtz and Maxwell problems utilizing a fast multipole method for compression and reduction of computational cost boundary-element-method.com An open-source BEM software for solving acoustics / Helmholtz and Laplace problems Puma-EM An open-source and high-performance Method of Moments / Multilevel Fast Multipole Method parallel program AcouSTO Acoustics Simulation TOol, a free and open-source parallel BEM solver for the Kirchhoff-Helmholtz Integral Equation (KHIE) FastBEM Free fast multipole boundary element programs for solving 2D/3D potential, elasticity, Stokes flow and acoustic problems ParaFEM Includes the free and open-source parallel BEM solver for elasticity problems described in Gernot Beer, Ian Smith, Christian Duenser, The Boundary Element Method with Programming: For Engineers and Scientists, Springer, (2008) Boundary Element Template Library (BETL) A general purpose C++ software library for the discretisation of boundary integral operators Nemoh An open source hydrodynamics BEM software dedicated to the computation of first-order wave loads on offshore structures (added mass, radiation damping, diffraction forces) Bempp, An open-source BEM software for 3D Laplace, Helmholtz and Maxwell problems MNPBEM, An open-source Matlab toolbox to solve Maxwell's equations for arbitrarily shaped nanostructures Contact Mechanics and Tribology Simulator, Free, BEM based software MultiFEBE, BEM-FEM solver for computational mechanics, allowing coupling of 2D and 3D viscoelastic or poroelastic media with beam and shell structural elements (for dynamic soil-structure interaction problems, for instance). BE-STATIK Free BE-Programs for 2D potential, elasticity and plate bending problems (Kirchhoff). Numerical differential equations Computational fluid dynamics Computational electromagnetics
Boundary element method
[ "Physics", "Chemistry" ]
1,776
[ "Computational fluid dynamics", "Fluid dynamics", "Computational electromagnetics", "Computational physics" ]
1,678,822
https://en.wikipedia.org/wiki/Perceptual%20control%20theory
Perceptual control theory (PCT) is a model of behavior based on the properties of negative feedback control loops. A control loop maintains a sensed variable at or near a reference value by means of the effects of its outputs upon that variable, as mediated by physical properties of the environment. In engineering control theory, reference values are set by a user outside the system. An example is a thermostat. In a living organism, reference values for controlled perceptual variables are endogenously maintained. Biological homeostasis and reflexes are simple, low-level examples. The discovery of mathematical principles of control introduced a way to model a negative feedback loop closed through the environment (circular causation), which spawned perceptual control theory. It differs fundamentally from some models in behavioral and cognitive psychology that model stimuli as causes of behavior (linear causation). PCT research is published in experimental psychology, neuroscience, ethology, anthropology, linguistics, sociology, robotics, developmental psychology, organizational psychology and management, and a number of other fields. PCT has been applied to design and administration of educational systems, and has led to a psychotherapy called the method of levels. Principles and differences from other theories The perceptual control theory is deeply rooted in biological cybernetics, systems biology and control theory and the related concept of feedback loops. Unlike some models in behavioral and cognitive psychology it sets out from the concept of circular causality. It shares, therefore, its theoretical foundation with the concept of plant control, but it is distinct from it by emphasizing the control of the internal representation of the physical world. The plant control theory focuses on neuro-computational processes of movement generation, once a decision for generating the movement has been taken. PCT spotlights the embeddedness of agents in their environment. Therefore, from the perspective of perceptual control, the central problem of motor control consists in finding a sensory input to the system that matches a desired perception. History PCT has roots in physiological insights of Claude Bernard and in 20th century in the research by Walter B. Cannon and in the fields of control systems engineering and cybernetics. Classical negative feedback control was worked out by engineers in the 1930s and 1940s, and further developed by Wiener, Ashby, and others in the early development of the field of cybernetics. Beginning in the 1950s, William T. Powers applied the concepts and methods of engineered control systems to biological control systems, and developed the experimental methodology of PCT. A key insight of PCT is that the controlled variable is not the output of the system (the behavioral actions), but its input, that is, a sensed and transformed function of some state of the environment that the control system's output can affect. Because these sensed and transformed inputs may appear as consciously perceived aspects of the environment, Powers labelled the controlled variable "perception". The theory came to be known as "Perceptual Control Theory" or PCT rather than "Control Theory Applied to Psychology" because control theorists often assert or assume that it is the system's output that is controlled. In PCT it is the internal representation of the state of some variable in the environment—a "perception" in everyday language—that is controlled. 
The basic principles of PCT were first published by Powers, Clark, and MacFarland as a "general feedback theory of behavior" in 1960, with credits to cybernetic authors Wiener and Ashby. It has been systematically developed since then in the research community that has gathered around it. Initially, it was overshadowed by the cognitive revolution (later supplanted by cognitive science), but has now become better known. Powers and other researchers in the field point to problems of purpose, causation, and teleology at the foundations of psychology which control theory resolves. From Aristotle through William James and John Dewey it has been recognized that behavior is purposeful and not merely reactive, but how to account for this has been problematic because the only evidence for intentions was subjective. As Powers pointed out, behaviorists following Wundt, Thorndike, Watson, and others rejected introspective reports as data for an objective science of psychology. Only observable behavior could be admitted as data. Such behaviorists modeled environmental events (stimuli) as causing behavioral actions (responses). This causal assumption persists in some models in cognitive psychology that interpose cognitive maps and other postulated information processing between stimulus and response but otherwise retain the assumption of linear causation from environment to behavior, which Richard Marken called an "open-loop causal model of behavioral organization" in contrast to PCT's closed-loop model. Another, more specific reason that Powers observed for psychologists' rejecting notions of purpose or intention was that they could not see how a goal (a state that did not yet exist) could cause the behavior that led to it. PCT resolves these philosophical arguments about teleology because it provides a model of the functioning of organisms in which purpose has objective status without recourse to introspection, and in which causation is circular around feedback loops. Example A simple negative feedback control system is a cruise control system for a car. A cruise control system has a sensor which "perceives" speed as the rate of spin of the drive shaft directly connected to the wheels. It also has a driver-adjustable 'goal' specifying a particular speed. The sensed speed is continuously compared against the specified speed by a device (called a "comparator") which subtracts the currently sensed input value from the stored goal value. The difference (the error signal) determines the throttle setting (the accelerator depression), so that the engine output is continuously varied to prevent the speed of the car from increasing or decreasing from that desired speed as environmental conditions change. If the speed of the car starts to drop below the goal-speed, for example when climbing a hill, the small increase in the error signal, amplified, causes engine output to increase, which keeps the error very nearly at zero. If the speed begins to exceed the goal, e.g. when going down a hill, the engine is throttled back so as to act as a brake, so again the speed is kept from departing more than a barely detectable amount from the goal speed (brakes being needed only if the hill is too steep). The result is that the cruise control system maintains a speed close to the goal as the car goes up and down hills, and as other disturbances such as wind affect the car's speed. This is all done without any planning of specific actions, and without any blind reactions to stimuli. 
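A minimal simulation of such a loop is sketched below; it is not part of the original example, and the gain, the crude first-order vehicle model, and the hill-induced disturbance are arbitrary illustrative values. The only quantity the controller ever examines is the sensed speed.

```python
# Toy "cruise control" loop: the controller senses only speed, compares it
# with the goal speed, and sets the throttle in proportion to the error.
# It never senses or models the disturbance (the hill).
dt = 0.1            # s, integration step
goal_speed = 30.0   # m/s, driver-set reference
gain = 2.0          # throttle units per (m/s) of error
speed = 30.0        # m/s, current speed

for step in range(600):                               # one minute of driving
    t = step * dt
    hill_drag = 1.5 if 20.0 < t < 40.0 else 0.0       # disturbance: a hill
    error = goal_speed - speed                        # comparator: reference minus perception
    throttle = gain * error                           # output proportional to amplified error
    accel = throttle - 0.5 - hill_drag                # net effect of engine, friction, hill
    speed += accel * dt
    if step % 100 == 0:
        print(f"t={t:5.1f}s  speed={speed:6.2f} m/s  throttle={throttle:6.2f}")
```

Raising the gain shrinks the small residual error further, corresponding to the amplification mentioned above.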
Indeed, the cruise control system does not sense disturbances such as wind pressure at all; it only senses the controlled variable, speed. Nor does it control the power generated by the engine; it uses the 'behavior' of engine power as its means to control the sensed speed. The same principles of negative feedback control (including the ability to nullify effects of unpredictable external or internal disturbances) apply to living control systems. Implications of these principles are intensively studied by, for example, biological and medical cybernetics and systems biology. The thesis of PCT is that animals and people do not control their behavior; rather, they vary their behavior as their means for controlling their perceptions, with or without external disturbances. This stands in direct contradiction to the historical and still widespread assumption that behavior is the final result of stimulus inputs and cognitive plans. The methodology of modeling, and PCT as model The principal datum in PCT methodology is the controlled variable. The fundamental step of PCT research, the test for controlled variables, begins with the slow and gentle application of disturbing influences to the state of a variable in the environment which the researcher surmises is already under control by the observed organism. It is essential not to overwhelm the organism's ability to control, since that is what is being investigated. If the organism changes its actions just so as to prevent the disturbing influence from having the expected effect on that variable, that is strong evidence that the experimental action disturbed a controlled variable. It is crucially important to distinguish the perceptions and point of view of the observer from those of the observed organism. It may take a number of variations of the test to isolate just which aspect of the environmental situation is under control, as perceived by the observed organism. PCT employs a black box methodology. The controlled variable as measured by the observer corresponds quantitatively to a reference value for a perception that the organism is controlling. The controlled variable is thus an objective index of the purpose or intention of those particular behavioral actions by the organism—the goal which those actions consistently work to attain despite disturbances. With few exceptions, in the current state of neuroscience this internally maintained reference value is seldom directly observed as such (e.g. as a rate of firing in a neuron), since few researchers trace the relevant electrical and chemical variables by their specific pathways while a living organism is engaging in what we externally observe as behavior. However, when a working negative feedback system simulated on a digital computer performs essentially identically to observed organisms, then the well understood negative feedback structure of the simulation or model (the white box) is understood to demonstrate the unseen negative feedback structure within the organism (the black box). Data for individuals are not aggregated for statistical analysis; instead, a generative model is built which replicates the data observed for individuals with very high fidelity (0.95 or better). 
To build such a model of a given behavioral situation requires careful measurements of three observed variables: the input quantity qi (the state of the environmental variable that the organism senses and controls), the output quantity qo (the organism's behavioral actions affecting that variable), and the disturbance d (any other environmental influence acting on the controlled variable). A fourth value, the internally maintained reference r (a variable ′setpoint′), is deduced from the value at which the organism is observed to maintain qi, as determined by the test for controlled variables (described at the beginning of this section). With two variables specified, the controlled input qi and the reference r, a properly designed control system, simulated on a digital computer, produces outputs qo that almost precisely oppose unpredictable disturbances d to the controlled input. Further, the variance from perfect control accords well with that observed for living organisms. Perfect control would result in zero effect of the disturbance, but living organisms are not perfect controllers, and the aim of PCT is to model living organisms. When a computer simulation performs with >95% conformity to experimentally measured values, opposing the effect of unpredictable changes in d by generating (nearly) equal and opposite values of qo, it is understood to model the behavior and the internal control-loop structure of the organism. By extension, the elaboration of the theory constitutes a general model of cognitive process and behavior. With every specific model or simulation of behavior that is constructed and tested against observed data, the general model that is presented in the theory is exposed to potential challenge that could call for revision or could lead to refutation. Mathematics To illustrate the mathematical calculations employed in a PCT simulation, consider a pursuit tracking task in which the participant keeps a mouse cursor aligned with a moving target on a computer monitor. The model assumes that a perceptual signal within the participant represents the magnitude of the input quantity qi. (This has been demonstrated to be a rate of firing in a neuron, at least at the lowest levels.) In the tracking task, the input quantity is the vertical distance between the target position T and the cursor position C, and the random variation of the target position acts as the disturbance d of that input quantity. This suggests that the perceptual signal p quantitatively represents the cursor position C minus the target position T, as expressed in the equation p=C–T. Between the perception of target and cursor and the construction of the signal representing the distance between them there is a delay of τ milliseconds, so that the working perceptual signal at time t represents the target-to-cursor distance at a prior time, t – τ. Consequently, the equation used in the model is 1. p(t) = C(t–τ) – T(t–τ) The negative feedback control system receives a reference signal r which specifies the magnitude of the given perceptual signal which is currently intended or desired. (For the origin of r within the organism, see under "A hierarchy of control", below.) Both r and p are input to a simple neural structure with r excitatory and p inhibitory. This structure is called a "comparator". The effect is to subtract p from r, yielding an error signal e that indicates the magnitude and sign of the difference between the desired magnitude r and the currently input magnitude p of the given perception. The equation representing this in the model is: 2. e = r–p The error signal e must be transformed to the output quantity qo (representing the participant's muscular efforts affecting the mouse position). 
Experiments have shown that in the best model for the output function, the mouse velocity Vcursor is proportional to the error signal e by a gain factor G (that is, Vcursor = G*e). Thus, when the perceptual signal p is smaller than the reference signal r, the error signal e has a positive sign, and from it the model computes an upward velocity of the cursor that is proportional to the error. The next position of the cursor Cnew is the current position Cold plus the velocity Vcursor times the duration dt of one iteration of the program. By simple algebra, we substitute G*e (as given above) for Vcursor, yielding a third equation: 3. Cnew = Cold + G*e*dt These three simple equations or program steps constitute the simplest form of the model for the tracking task. When these three simultaneous equations are evaluated over and over with similarly distributed random disturbances d of the target position that the human participant experienced, the output positions and velocities of the cursor duplicate the participant's actions in the tracking task above within 4.0% of their peak-to-peak range, in great detail. This simple model can be refined with a damping factor d which reduces the discrepancy between the model and the human participant to 3.6% when the disturbance d is set to maximum difficulty. 3'. Cnew = Cold + [(G*e)–(d*Cold)]*dt Detailed discussion of this model in (Powers 2008) includes both source and executable code, with which the reader can verify how well this simple program simulates real behavior. No consideration is needed of possible nonlinearities such as the Weber-Fechner law, potential noise in the system, continuously varying angles at the joints, and many other factors that could afflict performance if this were a simple linear model. No inverse kinematics or predictive calculations are required. The model simply reduces the discrepancy between input p and reference r continuously as it arises in real time, and that is all that is required—as predicted by the theory. Distinctions from engineering control theory In the artificial systems that are specified by engineering control theory, the reference signal is considered to be an external input to the 'plant'. In engineering control theory, the reference signal or set point is public; in PCT, it is not, but rather must be deduced from the results of the test for controlled variables, as described above in the methodology section. This is because in living systems a reference signal is not an externally accessible input, but instead originates within the system. In the hierarchical model, error output of higher-level control loops, as described in the next section below, evokes the reference signal r from synapse-local memory, and the strength of r is proportional to the (weighted) strength of the error signal or signals from one or more higher-level systems. In engineering control systems, in the case where there are several such reference inputs, a 'Controller' is designed to manipulate those inputs so as to obtain the effect on the output of the system that is desired by the system's designer, and the task of a control theory (so conceived) is to calculate those manipulations so as to avoid instability and oscillation. The designer of a PCT model or simulation specifies no particular desired effect on the output of the system, except that it must be whatever is required to bring the input from the environment (the perceptual signal) into conformity with the reference. 
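The three program steps of the tracking model above (equations 1, 2 and 3, with the damped variant 3′) translate almost directly into code. The sketch below is a rough re-implementation rather than Powers' published program: the frame rate, transport delay, gain, damping factor, and the smoothed random disturbance are all assumed, illustrative values.

```python
import numpy as np

rng = np.random.default_rng(0)

dt = 1.0 / 60.0      # s per iteration (assumed frame rate)
steps = 3600         # one minute of tracking
delay = 8            # transport delay tau, in iterations (about 130 ms)
G = 8.0              # output gain (illustrative)
damping = 1.0        # damping factor of variant 3' (illustrative)
r = 0.0              # reference: keep the cursor-minus-target distance at zero

# Smoothed random disturbance d(t): the target's vertical position over time.
target = np.cumsum(rng.normal(0.0, 0.5, steps))
for _ in range(3):
    target = np.convolve(target, np.ones(60) / 60.0, mode="same")

cursor = np.zeros(steps)
for t in range(1, steps):
    k = max(t - delay, 0)
    p = cursor[k] - target[k]              # 1. p(t) = C(t - tau) - T(t - tau)
    e = r - p                              # 2. e = r - p
    cursor[t] = cursor[t - 1] + (G * e - damping * cursor[t - 1]) * dt   # 3'.

rms = np.sqrt(np.mean((cursor - target) ** 2))
print(f"RMS cursor-to-target distance: {rms:.3f}")
```

Fitting the gain, damping, and delay to an individual participant's recorded cursor movements is what produces the close correspondence with human data described above.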
In Perceptual Control Theory, the input function for the reference signal is a weighted sum of internally generated signals (in the canonical case, higher-level error signals), and loop stability is determined locally for each loop in the manner sketched in the preceding section on the mathematics of PCT (and elaborated more fully in the referenced literature). The weighted sum is understood to result from reorganization. Engineering control theory is computationally demanding, but as the preceding section shows, PCT is not. For example, contrast the implementation of a model of an inverted pendulum in engineering control theory with the PCT implementation as a hierarchy of five simple control systems. A hierarchy of control Perceptions, in PCT, are constructed and controlled in a hierarchy of levels. For example, visual perception of an object is constructed from differences in light intensity or differences in sensations such as color at its edges. Controlling the shape or location of the object requires altering the perceptions of sensations or intensities (which are controlled by lower-level systems). This organizing principle is applied at all levels, up to the most abstract philosophical and theoretical constructs. The Russian physiologist Nicolas Bernstein independently came to the same conclusion that behavior has to be multiordinal—organized hierarchically, in layers. A simple problem led to this conclusion at about the same time both in PCT and in Bernstein's work. The spinal reflexes act to stabilize limbs against disturbances. Why do they not prevent centers higher in the brain from using those limbs to carry out behavior? Since the brain obviously does use the spinal systems in producing behavior, there must be a principle that allows the higher systems to operate by incorporating the reflexes, not just by overcoming them or turning them off. The answer is that the reference value (setpoint) for a spinal reflex is not static; rather, it is varied by higher-level systems as their means of moving the limbs (servomechanism). This principle applies to higher feedback loops, as each loop presents the same problem to subsystems above it. Whereas an engineered control system has a reference value or setpoint adjusted by some external agency, the reference value for a biological control system cannot be set in this way. The setpoint must come from some internal process. If there is a way for behavior to affect it, any perception may be brought to the state momentarily specified by higher levels and then be maintained in that state against unpredictable disturbances. In a hierarchy of control systems, higher levels adjust the goals of lower levels as their means of approaching their own goals set by still-higher systems. This has important consequences for any proposed external control of an autonomous living control system (organism). At the highest level, reference values (goals) are set by heredity or adaptive processes. Reorganization in evolution, development, and learning If an organism controls inappropriate perceptions, or if it controls some perceptions to inappropriate values, then it is less likely to bring progeny to maturity, and may die. Consequently, by natural selection successive generations of organisms evolve so that they control those perceptions that, when controlled with appropriate setpoints, tend to maintain critical internal variables at optimal levels, or at least within non-lethal limits. 
Powers called these critical internal variables "intrinsic variables" (Ashby's "essential variables"). The mechanism that influences the development of structures of perceptions to be controlled is termed "reorganization", a process within the individual organism that is subject to natural selection just as is the evolved structure of individuals within a species. This "reorganization system" is proposed to be part of the inherited structure of the organism. It changes the underlying parameters and connectivity of the control hierarchy in a random-walk manner. There is a basic continuous rate of change in intrinsic variables which proceeds at a speed set by the total error (and stops at zero error), punctuated by random changes in direction in a hyperspace with as many dimensions as there are critical variables. This is a more or less direct adaptation of Ashby's "homeostat", first adopted into PCT in the 1960 paper and then changed to use E. coli's method of navigating up gradients of nutrients, as described by Koshland (1980). Reorganization may occur at any level when loss of control at that level causes intrinsic (essential) variables to deviate from genetically determined set points. This is the basic mechanism that is involved in trial-and-error learning, which leads to the acquisition of more systematic kinds of learning processes. Psychotherapy: the method of levels (MOL) The reorganization concept has led to a method of psychotherapy called the method of levels (MOL). Using MOL, the therapist aims to help the patient shift his or her awareness to higher levels of perception in order to resolve conflicts and allow reorganization to take place. Neuroscience Learning Currently, no one theory has been agreed upon to explain the synaptic, neuronal or systemic basis of learning. Prominent since 1973, however, is the idea that long-term potentiation (LTP) of populations of synapses induces learning through both pre- and postsynaptic mechanisms. LTP is a form of Hebbian learning, which proposed that high-frequency, tonic activation of a circuit of neurones increases the efficacy with which they are activated and the size of their response to a given stimulus as compared to the standard neurone (Hebb, 1949). These mechanisms are the principles behind Hebb's famously simple explanation: "Those that fire together, wire together". LTP has received much support since it was first observed by Terje Lømo in 1966 and is still the subject of many modern studies and clinical research. However, there are possible alternative mechanisms underlying LTP, as presented by Enoki, Hu, Hamilton and Fine in 2009, published in the journal Neuron. They concede that LTP is the basis of learning. However, they firstly propose that LTP occurs in individual synapses, and this plasticity is graded (as opposed to in a binary mode) and bidirectional. Secondly, the group suggest that the synaptic changes are expressed solely presynaptically, via changes in the probability of transmitter release. Finally, the team predict that the occurrence of LTP could be age-dependent, as the plasticity of a neonatal brain would be higher than that of a mature one. Therefore, the theories differ, as one proposes an on/off occurrence of LTP by pre- and postsynaptic mechanisms and the other proposes only presynaptic changes, graded ability, and age-dependence. These theories do agree on one element of LTP, namely, that it must occur through physical changes to the synaptic membrane/s, i.e. synaptic plasticity. 
Perceptual control theory encompasses both of these views. It proposes the mechanism of 'reorganisation' as the basis of learning. Reorganisation occurs within the inherent control system of a human or animal by restructuring the inter- and intraconnections of its hierarchical organisation, akin to the neuroscientific phenomenon of neural plasticity. This reorganisation initially allows the trial-and-error form of learning, which is seen in babies, and then progresses to more structured learning through association, apparent in infants, and finally to systematic learning, covering the adult ability to learn from both internally and externally generated stimuli and events. In this way, PCT provides a valid model for learning that combines the biological mechanisms of LTP with an explanation of the progression and change of mechanisms associated with developmental ability. Powers in 2008 produced a simulation of arm co-ordination. He suggested that in order to move your arm, fourteen control systems that control fourteen joint angles are involved, and they reorganise simultaneously and independently. It was found that for optimum performance, the output functions must be organised in a way so as each control system's output only affects the one environmental variable it is perceiving. In this simulation, the reorganising process is working as it should, and just as Powers suggests that it works in humans, reducing outputs that cause error and increasing those that reduce error. Initially, the disturbances have large effects on the angles of the joints, but over time the joint angles match the reference signals more closely due to the system being reorganised. Powers suggests that in order to achieve coordination of joint angles to produce desired movements, instead of calculating how multiple joint angles must change to produce this movement the brain uses negative feedback systems to generate the joint angles that are required. A single reference signal that is varied in a higher-order system can generate a movement that requires several joint angles to change at the same time. Hierarchical organisation Botvinick in 2008 proposed that one of the founding insights of the cognitive revolution was the recognition of hierarchical structure in human behavior. Despite decades of research, however, the computational mechanisms underlying hierarchically organized behavior are still not fully understood. Bedre, Hoffman, Cooney & D'Esposito in 2009 proposed that the fundamental goal in cognitive neuroscience is to characterize the functional organization of the frontal cortex that supports the control of action. Recent neuroimaging data has supported the hypothesis that the frontal lobes are organized hierarchically, such that control is supported in progressively caudal regions as control moves to more concrete specification of action. However, it is still not clear whether lower-order control processors are differentially affected by impairments in higher-order control when between-level interactions are required to complete a task, or whether there are feedback influences of lower-level on higher-level control. Botvinik in 2008 found that all existing models of hierarchically structured behavior share at least one general assumption – that the hierarchical, part–whole organization of human action is mirrored in the internal or neural representations underlying it. 
Specifically, the assumption is that there exist representations not only of low-level motor behaviors, but also separable representations of higher-level behavioral units. The latest crop of models provides new insights, but also poses new or refined questions for empirical research, including how abstract action representations emerge through learning, how they interact with different modes of action control, and how they sort out within the prefrontal cortex (PFC). Perceptual control theory (PCT) can provide an explanatory model of neural organisation that deals with the current issues. PCT describes the hierarchical character of behavior as being determined by control of hierarchically organized perception. Control systems in the body and in the internal environment of billions of interconnected neurons within the brain are responsible for keeping perceptual signals within survivable limits in the unpredictably variable environment from which those perceptions are derived. PCT does not propose that there is an internal model within which the brain simulates behavior before issuing commands to execute that behavior. Instead, one of its characteristic features is the principled lack of cerebral organisation of behavior. Rather, behavior is the organism's variable means to reduce the discrepancy between perceptions and reference values which are based on various external and internal inputs. Behavior must constantly adapt and change for an organism to maintain its perceptual goals. In this way, PCT can provide an explanation of abstract learning through spontaneous reorganisation of the hierarchy. PCT proposes that conflict occurs between disparate reference values for a given perception rather than between different responses, and that learning is implemented as trial-and-error changes of the properties of control systems, rather than any specific response being reinforced. In this way, behavior remains adaptive to the environment as it unfolds, rather than relying on learned action patterns that may not fit. Hierarchies of perceptual control have been simulated in computer models and have been shown to provide a close match to behavioral data. For example, Marken conducted an experiment comparing the behavior of a perceptual control hierarchy computer model with that of six healthy volunteers in three experiments. The participants were required to keep the distance between a left line and a centre line equal to that of the centre line and a right line. They were also instructed to keep both distances equal to 2 cm. They had 2 paddles in their hands, one controlling the left line and one controlling the middle line. To do this, they had to resist random disturbances applied to the positions of the lines. As the participants achieved control, they managed to nullify the expected effect of the disturbances by moving their paddles. The correlation between the behavior of subjects and the model in all the experiments approached 0.99. It is proposed that the organization of models of hierarchical control systems such as this informs us about the organization of the human subjects whose behavior it so closely reproduces. Robotics PCT has significant implications for Robotics and Artificial Intelligence. W.T. Powers introduced the application of PCT to robotics in 1978, early in the availability of home computers. 
The comparatively simple architecture, a hierarchy of perceptual controllers, has no need for complex models of the external world, inverse kinematics, or computation from input-output mappings. Traditional approaches to robotics generally depend upon the computation of actions in a constrained environment. Robots designed this way are inflexible and clumsy, unable to cope with the dynamic nature of the real world. PCT robots inherently resist and counter the chaotic, unpredictable disturbances to their controlled inputs which occur in an unconstrained environment. The PCT robotics architecture has recently been applied to a number of real-world robotic systems including robotic rovers, balancing robot and robot arms. Some commercially available robots which demonstrate good control in a naturalistic environment use a control-theoretic architecture which requires much more intensive computation. For example, Boston Dynamics has said that its robots use historically leveraged model predictive control. Current situation and prospects The preceding explanation of PCT principles provides justification of how this theory can provide a valid explanation of neural organisation and how it can explain some of the current issues of conceptual models. Perceptual control theory currently proposes a hierarchy of 11 levels of perceptions controlled by systems in the human mind and neural architecture. These are: intensity, sensation, configuration, transition, event, relationship, category, sequence, program, principle, and system concept. Diverse perceptual signals at a lower level (e.g. visual perceptions of intensities) are combined in an input function to construct a single perception at the higher level (e.g. visual perception of a color sensation). The perceptions that are constructed and controlled at the lower levels are passed along as the perceptual inputs at the higher levels. The higher levels in turn control by adjusting the reference levels (goals) of the lower levels, in effect telling the lower levels what to perceive. While many computer demonstrations of principles have been developed, the proposed higher levels are difficult to model because too little is known about how the brain works at these levels. Isolated higher-level control processes can be investigated, but models of an extensive hierarchy of control are still only conceptual, or at best rudimentary. Perceptual control theory has not been widely accepted in mainstream psychology, but has been effectively used in a considerable range of domains in human factors, clinical psychology, and psychotherapy (the "Method of Levels"), it is the basis for a considerable body of research in sociology, and it has formed the conceptual foundation for the reference model used by a succession of NATO research study groups. Recent approaches use principles of perceptual control theory to provide new algorithmic foundations for artificial intelligence and machine learning. Selected bibliography Cziko, Gary (1995). Without miracles: Universal selection theory and the second Darwinian revolution. Cambridge, MA: MIT Press (A Bradford Book). Cziko, Gary (2000). The things we do: Using the lessons of Bernard and Darwin to understand the what, how, and why of our behavior. Cambridge, MA: MIT Press (A Bradford Book). Forssell, Dag (Ed.), 2016. Perceptual Control Theory, An Overview of the Third Grand Theory in Psychology: Introductions, Readings, and Resources. Hayward, CA: Living Control Systems Publishing. . Mansell, Warren (Ed.), (2020). 
The Interdisciplinary Handbook of Perceptual Control Theory: Living Control Systems IV. Cambridge: Academic Press. . Marken, Richard S. (1992) Mind readings: Experimental studies of purpose. Benchmark Publications: New Canaan, CT. Marken, Richard S. (2002) More mind readings: Methods and models in the study of purpose. Chapel Hill, NC: New View. Pfau, Richard H. (2017). Your Behavior: Understanding and Changing the Things You Do. St. Paul, MN: Paragon House. Plooij, F. X. (1984). The behavioral development of free-living chimpanzee babies and infants. Norwood, N.J.: Ablex. Plooij, F. X. (2003). "The trilogy of mind". In M. Heimann (Ed.), Regression periods in human infancy (pp. 185–205). Mahwah, NJ: Erlbaum. Powers, William T. (1973). Behavior: The control of perception. Chicago: Aldine de Gruyter. . [2nd exp. ed. = Powers (2005)]. Powers, William T. (1989). Living control systems. [Selected papers 1960–1988.] New Canaan, CT: Benchmark Publications. . Powers, William T. (1992). Living control systems II. [Selected papers 1959–1990.] New Canaan, CT: Benchmark Publications. Powers, William T. (1998). Making sense of behavior: The meaning of control. New Canaan, CT: Benchmark Publications. . Powers, William T. (2005). Behavior: The control of perception. New Canaan: Benchmark Publications. . [2nd exp. ed. of Powers (1973). Chinese tr. (2004) Guongdong Higher Learning Education Press, Guangzhou, China. .] Powers, William T. (2008). Living Control Systems III: The fact of control. [Mathematical appendix by Dr. Richard Kennaway. Includes computer programs for the reader to demonstrate and experimentally test the theory.] New Canaan, CT: Benchmark Publications. . Powers, William. T., Clark, R. K., and McFarland, R. L. (1960). "A general feedback theory of human behavior [Part 1; Part 2]. Perceptual and Motor Skills 11, 71–88; 309–323. Powers, William T. and Runkel, Philip J. 2011. Dialogue concerning the two chief approaches to a science of life: Word pictures and correlations versus working models. Hayward, CA: Living Control Systems Publishing . Robertson, R. J. & Powers, W.T. (1990). Introduction to modern psychology: the control-theory view. Gravel Switch, KY: Control Systems Group. Robertson, R. J., Goldstein, D.M., Mermel, M., & Musgrave, M. (1999). Testing the self as a control system: Theoretical and methodological issues. Int. J. Human-Computer Studies, 50, 571–580. Runkel, Philip J[ulian]. 1990. Casting Nets and Testing Specimens: Two Grand Methods of Psychology. New York: Praeger. . [Repr. 2007, Hayward, CA: Living Control Systems Publishing .] Runkel, Philip J[ulian]. (2003). People as living things. Hayward, CA: Living Control Systems Publishing Taylor, Martin M. (1999). "Editorial: Perceptual Control Theory and its Application," International Journal of Human-Computer Studies, Vol 50, No. 6, June 1999, pp. 433–444. Sociology McClelland, Kent and Thomas J. Fararo, eds. (2006). Purpose, Meaning, and Action: Control Systems Theories in Sociology. New York: Palgrave Macmillan. McPhail, Clark. 1991. The Myth of the Madding Crowd. New York: Aldine de Gruyter. References External links Articles PCT for the Beginner by William T. Powers (2007) The Dispute Over Control theory by William T. Powers (1993) – requires access approval Demonstrations of perceptual control by Gary Cziko (2006) Audio Interview with William T. Powers on origin and history of PCT (Part One – 20060722 (58.7M) Interview with William T. 
Powers on origin and history of PCT (Part Two – 20070728 (57.7M) Videos Demonstration of a robot arm with visual servoing and pressure control based on principles of PCT Websites The International Association for Perceptual Control Systems – The IAPCT website. PCTWeb – Warren Mansell's comprehensive website on PCT. Living Control Systems Publishing – resources and books about PCT. Mind Readings – Rick Marken's website on PCT, with many interactive demonstrations. Method of Levels – Timothy Carey's website on the Method of Levels. Perceptual Robots – The PCT methodology and architecture applied to robotics. ResearchGate Project – Recent research products. Systems psychology Control theory Cybernetics Formal sciences Robotics engineering
Perceptual control theory
[ "Mathematics", "Technology", "Engineering" ]
7,754
[ "Computer engineering", "Robotics engineering", "Applied mathematics", "Control theory", "Dynamical systems" ]
1,679,173
https://en.wikipedia.org/wiki/Castner%20process
The Castner process is a process for manufacturing sodium metal by electrolysis of molten sodium hydroxide at approximately 330 °C. Below that temperature, the melt would solidify; above that temperature, the molten sodium would start to dissolve in the melt. History The Castner process for production of sodium metal was introduced in 1888 by Hamilton Castner. At that time (prior to the introduction in the same year of the Hall-Héroult process) the primary use for sodium metal was as a reducing agent to produce aluminium from its purified ores. The Castner process reduced the cost of producing sodium in comparison to the old method of reducing sodium carbonate at high temperature using carbon. This in turn reduced the cost of producing aluminium, although the reduction-by-sodium method still could not compete with Hall-Héroult. The Castner process continued nevertheless due to Castner's finding new markets for sodium. In 1926, the Downs cell replaced the Castner process. Process details The diagram shows a ceramic crucible with a steel cylinder suspended within. Both cathode (C) and anode (A) are made of iron or nickel. The temperature is cooler at the bottom and hotter at the top so that the sodium hydroxide is solid in the neck (B) and liquid in the body of the vessel. Sodium metal forms at the cathode but is less dense than the fused sodium hydroxide electrolyte. Wire gauze (G) confines the sodium metal to accumulating at the top of the collection device (P). The cathode reaction is 2 Na+ + 2 e− → 2Na The anode reaction is 4 OH− → O2 + 2 H2O + 4 e− Despite the elevated temperature, some of the water produced remains dissolved in the electrolyte. This water diffuses throughout the electrolyte and results in the reverse reaction taking place on the electrolyzed sodium metal: 2 Na + 2 H2O → H2 + 2 Na+ + 2 OH− with the hydrogen gas also accumulating at (P). This, of course, reduces the efficiency of the process. See also Downs cell References Chemical processes Electrolysis Metallurgical processes
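As a rough illustration of the cell stoichiometry (not part of the original description), Faraday's law relates the charge passed to the theoretical mass of sodium deposited at the cathode. The current, run time, and current efficiency in the sketch below are arbitrary example values, with the efficiency factor standing in for losses such as the back-reaction of sodium with dissolved water.

```python
# Theoretical sodium yield from Faraday's law: m = M * I * t / (z * F)
FARADAY = 96485.0   # C/mol of electrons
M_NA = 22.990       # g/mol, molar mass of sodium
Z = 1               # electrons per Na+ ion reduced

current = 9000.0    # A, illustrative cell current
hours = 24.0        # illustrative run time
efficiency = 0.40   # illustrative current efficiency (back-reaction losses)

charge = current * hours * 3600.0                       # coulombs passed
theoretical_kg = M_NA * charge / (Z * FARADAY) / 1000.0
actual_kg = theoretical_kg * efficiency

print(f"theoretical Na: {theoretical_kg:.1f} kg")
print(f"at {efficiency:.0%} current efficiency: {actual_kg:.1f} kg")
```

The same formula applies to any electrolytic metal production; only the molar mass and the electron count per ion are specific to sodium.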
Castner process
[ "Chemistry", "Materials_science" ]
449
[ "Physical chemistry stubs", "Metallurgical processes", "Metallurgy", "Chemical processes", "Electrochemistry", "nan", "Electrolysis", "Chemical process engineering", "Electrochemistry stubs" ]
1,679,700
https://en.wikipedia.org/wiki/Triangulated%20category
In mathematics, a triangulated category is a category with the additional structure of a "translation functor" and a class of "exact triangles". Prominent examples are the derived category of an abelian category, as well as the stable homotopy category. The exact triangles generalize the short exact sequences in an abelian category, as well as fiber sequences and cofiber sequences in topology. Much of homological algebra is clarified and extended by the language of triangulated categories, an important example being the theory of sheaf cohomology. In the 1960s, a typical use of triangulated categories was to extend properties of sheaves on a space X to complexes of sheaves, viewed as objects of the derived category of sheaves on X. More recently, triangulated categories have become objects of interest in their own right. Many equivalences between triangulated categories of different origins have been proved or conjectured. For example, the homological mirror symmetry conjecture predicts that the derived category of a Calabi–Yau manifold is equivalent to the Fukaya category of its "mirror" symplectic manifold. Shift operator is a decategorified analogue of triangulated category. History Triangulated categories were introduced independently by Dieter Puppe (1962) and Jean-Louis Verdier (1963), although Puppe's axioms were less complete (lacking the octahedral axiom (TR 4)). Puppe was motivated by the stable homotopy category. Verdier's key example was the derived category of an abelian category, which he also defined, developing ideas of Alexander Grothendieck. The early applications of derived categories included coherent duality and Verdier duality, which extends Poincaré duality to singular spaces. Definition A shift or translation functor on a category D is an additive automorphism (or for some authors, an auto-equivalence) from D to D. It is common to write for integers n. A triangle (X, Y, Z, u, v, w) consists of three objects X, Y, and Z, together with morphisms , and . Triangles are generally written in the unravelled form: or for short. A triangulated category is an additive category D with a translation functor and a class of triangles, called exact triangles (or distinguished triangles), satisfying the following properties (TR 1), (TR 2), (TR 3) and (TR 4). (These axioms are not entirely independent, since (TR 3) can be derived from the others.) TR 1 For every object X, the following triangle is exact: For every morphism , there is an object Z (called a cone or cofiber of the morphism u) fitting into an exact triangle The name "cone" comes from the cone of a map of chain complexes, which in turn was inspired by the mapping cone in topology. It follows from the other axioms that an exact triangle (and in particular the object Z) is determined up to isomorphism by the morphism , although not always up to a unique isomorphism. Every triangle isomorphic to an exact triangle is exact. This means that if is an exact triangle, and , , and are isomorphisms, then is also an exact triangle. TR 2 If is an exact triangle, then so are the two rotated triangles and In view of the last triangle, the object Z[−1] is called a fiber of the morphism . The second rotated triangle has a more complex form when and are not isomorphisms but only mutually inverse equivalences of categories, since is a morphism from to , and to obtain a morphism to one must compose with the natural transformation . 
This leads to complex questions about possible axioms one has to impose on the natural transformations making and into a pair of inverse equivalences. Due to this issue, the assumption that and are mutually inverse isomorphisms is the usual choice in the definition of a triangulated category. TR 3 Given two exact triangles and a map between the first morphisms in each triangle, there exists a morphism between the third objects in each of the two triangles that makes everything commute. That is, in the following diagram (where the two rows are exact triangles and f and g are morphisms such that gu = u′f), there exists a map h (not necessarily unique) making all the squares commute: TR 4: The octahedral axiom Let and be morphisms, and consider the composed morphism . Form exact triangles for each of these three morphisms according to TR 1. The octahedral axiom states (roughly) that the three mapping cones can be made into the vertices of an exact triangle so that "everything commutes." More formally, given exact triangles , there exists an exact triangle such that This axiom is called the "octahedral axiom" because drawing all the objects and morphisms gives the skeleton of an octahedron, four of whose faces are exact triangles. The presentation here is Verdier's own, and appears, complete with octahedral diagram, in . In the following diagram, u and v are the given morphisms, and the primed letters are the cones of various maps (chosen so that every exact triangle has an X, a Y, and a Z letter). Various arrows have been marked with [1] to indicate that they are of "degree 1"; e.g. the map from Z′ to X is in fact from Z′ to X[1]. The octahedral axiom then asserts the existence of maps f and g forming an exact triangle, and so that f and g form commutative triangles in the other faces that contain them: Two different pictures appear in ( also present the first one). The first presents the upper and lower pyramids of the above octahedron and asserts that given a lower pyramid, one can fill in an upper pyramid so that the two paths from Y to Y′, and from Y′ to Y, are equal (this condition is omitted, perhaps erroneously, from Hartshorne's presentation). The triangles marked + are commutative and those marked "d" are exact: The second diagram is a more innovative presentation. Exact triangles are presented linearly, and the diagram emphasizes the fact that the four triangles in the "octahedron" are connected by a series of maps of triangles, where three triangles (namely, those completing the morphisms from X to Y, from Y to Z, and from X to Z) are given and the existence of the fourth is claimed. One passes between the first two by "pivoting" about X, to the third by pivoting about Z, and to the fourth by pivoting about X′. All enclosures in this diagram are commutative (both trigons and the square) but the other commutative square, expressing the equality of the two paths from Y′ to Y, is not evident. All the arrows pointing "off the edge" are degree 1: This last diagram also illustrates a useful intuitive interpretation of the octahedral axiom. In triangulated categories, triangles play the role of exact sequences, and so it is suggestive to think of these objects as "quotients", and . In those terms, the existence of the last triangle expresses on the one hand (looking at the triangle  ), and (looking at the triangle  ). 
Putting these together, the octahedral axiom asserts the "third isomorphism theorem": If the triangulated category is the derived category D(A) of an abelian category A, and X, Y, Z are objects of A viewed as complexes concentrated in degree 0, and the maps and are monomorphisms in A, then the cones of these morphisms in D(A) are actually isomorphic to the quotients above in A. Finally, formulates the octahedral axiom using a two-dimensional commutative diagram with 4 rows and 4 columns. also give generalizations of the octahedral axiom. Properties Here are some simple consequences of the axioms for a triangulated category D. Given an exact triangle in D, the composition of any two successive morphisms is zero. That is, vu = 0, wv = 0, u[1]w = 0, and so on. Given a morphism , TR 1 guarantees the existence of a cone Z completing an exact triangle. Any two cones of u are isomorphic, but the isomorphism is not always uniquely determined. Every monomorphism in D is the inclusion of a direct summand, , and every epimorphism is a projection . A related point is that one should not talk about "injectivity" or "surjectivity" for morphisms in a triangulated category. Every morphism that is not an isomorphism has a nonzero "cokernel" Z (meaning that there is an exact triangle ) and also a nonzero "kernel", namely Z[−1]. Non-functoriality of the cone construction One of the technical complications with triangulated categories is the fact the cone construction is not functorial. For example, given a ring and the partial map of distinguished trianglesin , there are two maps which complete this diagram. This could be the identity map, or the zero mapboth of which are commutative. The fact there exist two maps is a shadow of the fact that a triangulated category is a tool which encodes homotopy limits and colimit. One solution for this problem was proposed by Grothendieck where not only the derived category is considered, but the derived category of diagrams on this category. Such an object is called a Derivator. Examples Are there better axioms? Some experts suspectpg 190 (see, for example, ) that triangulated categories are not really the "correct" concept. The essential reason is that the cone of a morphism is unique only up to a non-unique isomorphism. In particular, the cone of a morphism does not in general depend functorially on the morphism (note the non-uniqueness in axiom (TR 3), for example). This non-uniqueness is a potential source of errors. The axioms work adequately in practice, however, and there is a great deal of literature devoted to their study. Derivators One alternative proposal is the theory of derivators proposed in Pursuing stacks by Grothendieck in the 80spg 191, and later developed in the 90s in his manuscript on the topic. Essentially, these are a system of homotopy categories given by the diagram categories for a category with a class of weak equivalences . These categories are then related by the morphisms of diagrams . This formalism has the advantage of being able to recover the homotopy limits and colimits, which replaces the cone construction. Stable ∞-categories Another alternative built is the theory of stable ∞-categories. The homotopy category of a stable ∞-category is canonically triangulated, and moreover mapping cones become essentially unique (in a precise homotopical sense). Moreover, a stable ∞-category naturally encodes a whole hierarchy of compatibilities for its homotopy category, at the bottom of which sits the octahedral axiom. 
Thus, it is strictly stronger to give the data of a stable ∞-category than to give the data of a triangulation of its homotopy category. Nearly all triangulated categories that arise in practice come from stable ∞-categories. A similar (but more special) enrichment of triangulated categories is the notion of a dg-category. In some ways, stable ∞-categories or dg-categories work better than triangulated categories. One example is the notion of an exact functor between triangulated categories, discussed below. For a smooth projective variety X over a field k, the bounded derived category of coherent sheaves comes from a dg-category in a natural way. For varieties X and Y, every functor from the dg-category of X to that of Y comes from a complex of sheaves on by the Fourier–Mukai transform. By contrast, there is an example of an exact functor from to that does not come from a complex of sheaves on . In view of this example, the "right" notion of a morphism between triangulated categories seems to be one that comes from a morphism of underlying dg-categories (or stable ∞-categories). Another advantage of stable ∞-categories or dg-categories over triangulated categories appears in algebraic K-theory. One can define the algebraic K-theory of a stable ∞-category or dg-category C, giving a sequence of abelian groups for integers i. The group has a simple description in terms of the triangulated category associated to C. But an example shows that the higher K-groups of a dg-category are not always determined by the associated triangulated category. Thus a triangulated category has a well-defined group, but in general not higher K-groups. On the other hand, the theory of triangulated categories is simpler than the theory of stable ∞-categories or dg-categories, and in many applications the triangulated structure is sufficient. An example is the proof of the Bloch–Kato conjecture, where many computations were done at the level of triangulated categories, and the additional structure of ∞-categories or dg-categories was not required. Cohomology in triangulated categories Triangulated categories admit a notion of cohomology, and every triangulated category has a large supply of cohomological functors. A cohomological functor F from a triangulated category D to an abelian category A is a functor such that for every exact triangle the sequence in A is exact. Since an exact triangle determines an infinite sequence of exact triangles in both directions, a cohomological functor F actually gives a long exact sequence in the abelian category A: A key example is: for each object B in a triangulated category D, the functors and are cohomological, with values in the category of abelian groups. (To be precise, the latter is a contravariant functor, which can be considered as a functor on the opposite category of D.) That is, an exact triangle determines two long exact sequences of abelian groups: and For particular triangulated categories, these exact sequences yield many of the important exact sequences in sheaf cohomology, group cohomology, and other areas of mathematics. One may also use the notation for integers i, generalizing the Ext functor in an abelian category. In this notation, the first exact sequence above would be written: For an abelian category A, another basic example of a cohomological functor on the derived category D(A) sends a complex X to the object in A. That is, an exact triangle in D(A) determines a long exact sequence in A: using that . 
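The displayed sequences referred to in this section do not survive in the present text. The following LaTeX fragment is a reconstruction in the standard form (indexing conventions may differ slightly from the source being summarised): it shows the two long exact sequences of abelian groups obtained from an exact triangle X → Y → Z → X[1] by applying Hom(B, −) and Hom(−, B), together with the shifted-Hom shorthand that generalizes Ext.

```latex
% Long exact sequences induced by an exact triangle X -> Y -> Z -> X[1]
% under the cohomological functors Hom(B, -) and Hom(-, B):
\cdots \to \operatorname{Hom}(B, Z[-1]) \to \operatorname{Hom}(B, X)
       \to \operatorname{Hom}(B, Y) \to \operatorname{Hom}(B, Z)
       \to \operatorname{Hom}(B, X[1]) \to \cdots

\cdots \to \operatorname{Hom}(X[1], B) \to \operatorname{Hom}(Z, B)
       \to \operatorname{Hom}(Y, B) \to \operatorname{Hom}(X, B)
       \to \operatorname{Hom}(Z[-1], B) \to \cdots

% With the shorthand \operatorname{Hom}^{i}(B, X) := \operatorname{Hom}(B, X[i]),
% generalizing Ext, the first sequence reads:
\cdots \to \operatorname{Hom}^{i}(B, X) \to \operatorname{Hom}^{i}(B, Y)
       \to \operatorname{Hom}^{i}(B, Z) \to \operatorname{Hom}^{i+1}(B, X) \to \cdots
```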
Exact functors and equivalences An exact functor (also called triangulated functor) from a triangulated category D to a triangulated category E is an additive functor which, loosely speaking, commutes with translation and sends exact triangles to exact triangles. In more detail, an exact functor comes with a natural isomorphism (where the first denotes the translation functor of D and the second denotes the translation functor of E), such that whenever is an exact triangle in D, is an exact triangle in E. An equivalence of triangulated categories is an exact functor that is also an equivalence of categories. In this case, there is an exact functor such that FG and GF are naturally isomorphic to the respective identity functors. Compactly generated triangulated categories Let D be a triangulated category such that direct sums indexed by an arbitrary set (not necessarily finite) exist in D. An object X in D is called compact if the functor commutes with direct sums. Explicitly, this means that for every family of objects in D indexed by a set S, the natural homomorphism of abelian groups is an isomorphism. This is different from the general notion of a compact object in category theory, which involves all colimits rather than only coproducts. For example, a compact object in the stable homotopy category is a finite spectrum. A compact object in the derived category of a ring, or in the quasi-coherent derived category of a scheme, is a perfect complex. In the case of a smooth projective variety X over a field, the category Perf(X) of perfect complexes can also be viewed as the bounded derived category of coherent sheaves, . A triangulated category D is compactly generated if D has arbitrary (not necessarily finite) direct sums; There is a set S of compact objects in D such that for every nonzero object X in D, there is an object Y in S with a nonzero map for some integer n. Many naturally occurring "large" triangulated categories are compactly generated: The derived category of modules over a ring R is compactly generated by one object, the R-module R. The quasi-coherent derived category of a quasi-compact quasi-separated scheme is compactly generated by one object. The stable homotopy category is compactly generated by one object, the sphere spectrum . Amnon Neeman generalized the Brown representability theorem to any compactly generated triangulated category, as follows. Let D be a compactly generated triangulated category, a cohomological functor which takes coproducts to products. Then H is representable. (That is, there is an object W of D such that for all X.) For another version, let D be a compactly generated triangulated category, T any triangulated category. If an exact functor sends coproducts to coproducts, then F has a right adjoint. The Brown representability theorem can be used to define various functors between triangulated categories. In particular, Neeman used it to simplify and generalize the construction of the exceptional inverse image functor for a morphism f of schemes, the central feature of coherent duality theory. t-structures For every abelian category A, the derived category D(A) is a triangulated category, containing A as a full subcategory (the complexes concentrated in degree zero). Different abelian categories can have equivalent derived categories, so that it is not always possible to reconstruct A from D(A) as a triangulated category. 
Alexander Beilinson, Joseph Bernstein and Pierre Deligne described this situation by the notion of a t-structure on a triangulated category D. A t-structure on D determines an abelian category inside D, and different t-structures on D may yield different abelian categories. Localizing and thick subcategories Let D be a triangulated category with arbitrary direct sums. A localizing subcategory of D is a strictly full triangulated subcategory that is closed under arbitrary direct sums. To explain the name: if a localizing subcategory S of a compactly generated triangulated category D is generated by a set of objects, then there is a Bousfield localization functor with kernel S. (That is, for every object X in D there is an exact triangle with Y in S and LX in the right orthogonal .) For example, this construction includes the localization of a spectrum at a prime number, or the restriction from a complex of sheaves on a space to an open subset. A parallel notion is more relevant for "small" triangulated categories: a thick subcategory of a triangulated category C is a strictly full triangulated subcategory that is closed under direct summands. (If C is idempotent-complete, a subcategory is thick if and only if it is also idempotent-complete.) A localizing subcategory is thick. So if S is a localizing subcategory of a triangulated category D, then the intersection of S with the subcategory of compact objects is a thick subcategory of . For example, Devinatz–Hopkins–Smith described all thick subcategories of the triangulated category of finite spectra in terms of Morava K-theory. The localizing subcategories of the whole stable homotopy category have not been classified. See also Fourier–Mukai transform Six operations Perverse sheaf D-module Beilinson–Bernstein localization Module spectrum Semiorthogonal decomposition Bridgeland stability condition Notes References Some textbook introductions to triangulated categories are: A concise summary with applications is: Some more advanced references are: External links J. Peter May, The axioms for triangulated categories Homological algebra
Triangulated category
[ "Mathematics" ]
4,398
[ "Fields of abstract algebra", "Mathematical structures", "Category theory", "Homological algebra" ]
1,679,935
https://en.wikipedia.org/wiki/Electric%20power%20quality
Electric power quality is the degree to which the voltage, frequency, and waveform of a power supply system conform to established specifications. Good power quality can be defined as a steady supply voltage that stays within the prescribed range, steady AC frequency close to the rated value, and smooth voltage curve waveform (which resembles a sine wave). In general, it is useful to consider power quality as the compatibility between what comes out of an electric outlet and the load that is plugged into it. The term is used to describe electric power that drives an electrical load and the load's ability to function properly. Without the proper power, an electrical device (or load) may malfunction, fail prematurely or not operate at all. There are many ways in which electric power can be of poor quality, and many more causes of such poor quality power. The electric power industry comprises electricity generation (AC power), electric power transmission and ultimately electric power distribution to an electricity meter located at the premises of the end user of the electric power. The electricity then moves through the wiring system of the end user until it reaches the load. The complexity of the system to move electric energy from the point of production to the point of consumption combined with variations in weather, generation, demand and other factors provide many opportunities for the quality of supply to be compromised. While "power quality" is a convenient term for many, it is the quality of the voltage—rather than power or electric current—that is actually described by the term. Power is simply the flow of energy, and the current demanded by a load is largely uncontrollable. Introduction The quality of electrical power may be described as a set of values of parameters, such as: Continuity of service (whether the electrical power is subject to voltage drops or overages below or above a threshold level thereby causing blackouts or brownouts) Variation in voltage magnitude (see below) Transient voltages and currents Harmonic content in the waveforms for AC power It is often useful to think of power quality as a compatibility problem: is the equipment connected to the grid compatible with the events on the grid, and is the power delivered by the grid, including the events, compatible with the equipment that is connected? Compatibility problems always have at least two solutions: in this case, either clean up the power, or make the equipment more resilient. The tolerance of data-processing equipment to voltage variations is often characterized by the CBEMA curve, which give the duration and magnitude of voltage variations that can be tolerated. Ideally, AC voltage is supplied by a utility as sinusoidal having an amplitude and frequency given by national standards (in the case of mains) or system specifications (in the case of a power feed not directly attached to the mains) with an impedance of zero ohms at all frequencies. Deviations No real-life power source is ideal and generally can deviate in at least the following ways: Voltage Variations in the peak or root mean square (RMS) voltage are both important to different types of equipment. When the RMS voltage exceeds the nominal voltage by 10 to 80% for 0.5 cycle to 1 minute, the event is called a "swell". A "dip" (in British English) or a "sag" (in American English the two terms are equivalent) is the opposite situation: the RMS voltage is below the nominal voltage by 10 to 90% for 0.5 cycle to 1 minute. 
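The swell and dip thresholds just given translate directly into a simple classification rule. The sketch below is illustrative only: the function name is invented, the 10% bands come from the definitions above, and treating a very deep drop as an interruption while ignoring event duration are simplifying assumptions.

```python
def classify_rms_event(rms_voltage, nominal_voltage):
    """Classify a single RMS voltage reading against its nominal value.

    Follows the 10%-deviation convention described above: swell above 110%
    of nominal, dip/sag below 90%, and a (near-)interruption below 10%.
    Event duration is not considered in this sketch.
    """
    ratio = rms_voltage / nominal_voltage
    if ratio < 0.10:
        return "interruption"
    if ratio < 0.90:
        return "dip (sag)"
    if ratio > 1.10:
        return "swell"
    return "normal"

# Example: on a 230 V nominal system, 195 V RMS is a dip and 255 V RMS a swell.
print(classify_rms_event(195.0, 230.0))   # -> "dip (sag)"
print(classify_rms_event(255.0, 230.0))   # -> "swell"
```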
Random or repetitive variations in the RMS voltage between 90 and 110% of nominal can produce a phenomenon known as "flicker" in lighting equipment. Flicker is rapid visible changes of light level. Definition of the characteristics of voltage fluctuations that produce objectionable light flicker has been the subject of ongoing research. Abrupt, very brief increases in voltage, called "spikes", "impulses", or "surges", generally caused by large inductive loads being switched off, or more severely by lightning. "Undervoltage" occurs when the nominal voltage drops below 90% for more than 1 minute. The term "brownout" is an apt description for voltage drops somewhere between full power (bright lights) and a blackout (no power – no light). It comes from the noticeable to significant dimming of regular incandescent lights, during system faults or overloading etc., when insufficient power is available to achieve full brightness in (usually) domestic lighting. This term is in common usage but has no formal definition; it is commonly used to describe a reduction in system voltage by the utility or system operator to decrease demand or to increase system operating margins. "Overvoltage" occurs when the nominal voltage rises above 110% for more than 1 minute. Frequency Variations in the frequency. Nonzero low-frequency impedance (when a load draws more power, the voltage drops). Nonzero high-frequency impedance (when a load demands a large amount of current, then suddenly stops demanding it, there will be a dip or spike in the voltage due to the inductances in the power supply line). Variations in the wave shape – usually described as harmonics at lower frequencies (usually less than 3 kHz) and described as Common Mode Distortion or Interharmonics at higher frequencies. Waveform The oscillation of voltage and current ideally follows the form of a sine or cosine function; however, it can alter due to imperfections in the generators or loads. Typically, generators cause voltage distortions and loads cause current distortions. These distortions occur as oscillations more rapid than the nominal frequency, and are referred to as harmonics. The relative contribution of harmonics to the distortion of the ideal waveform is called total harmonic distortion (THD). Low harmonic content in a waveform is ideal because harmonics can cause vibrations, buzzing, equipment distortions, and losses and overheating in transformers. Each of these power quality problems has a different cause. Some problems are a result of the shared infrastructure. For example, a fault on the network may cause a dip that will affect some customers; the higher the level of the fault, the greater the number affected. A problem on one customer’s site may cause a transient that affects all other customers on the same subsystem. Problems, such as harmonics, arise within the customer’s own installation and may propagate onto the network and affect other customers. Harmonic problems can be dealt with by a combination of good design practice and well-proven reduction equipment. Power conditioning Power conditioning is modifying the power to improve its quality. An uninterruptible power supply (UPS) can be used to switch off mains power if there is a transient (temporary) condition on the line. However, cheaper UPS units create poor-quality power themselves, akin to imposing a higher-frequency and lower-amplitude square wave atop the sine wave. 
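Distortion of that kind — whether produced by loads or by a low-end UPS — is quantified by the total harmonic distortion defined above. The sketch below estimates THD from a sampled waveform with NumPy; the synthetic 50 Hz test signal carrying a 5% third harmonic is an assumption for illustration, and the record is chosen to hold an integer number of fundamental cycles so each harmonic lands on a single DFT bin.

```python
import numpy as np

def thd(samples, sample_rate, fundamental_hz):
    """Return THD (harmonic RMS divided by fundamental RMS) of a real signal."""
    spectrum = np.fft.rfft(samples)
    n = len(samples)
    bin_step = fundamental_hz * n / sample_rate          # DFT bins per harmonic
    fundamental = abs(spectrum[int(round(bin_step))])
    harmonics = [abs(spectrum[int(round(k * bin_step))])
                 for k in range(2, int(sample_rate / (2 * fundamental_hz)))]
    return np.sqrt(sum(h * h for h in harmonics)) / fundamental

fs, f0, n = 10_000, 50.0, 2000                           # 10 cycles of 50 Hz at 10 kHz
t = np.arange(n) / fs
v = np.sin(2 * np.pi * f0 * t) + 0.05 * np.sin(2 * np.pi * 3 * f0 * t)
print(f"THD = {100 * thd(v, fs, f0):.1f} %")             # ~5 %
```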
High-quality UPS units utilize a double conversion topology which breaks down incoming AC power into DC, charges the batteries, then remanufactures an AC sine wave. This remanufactured sine wave is of higher quality than the original AC power feed. A dynamic voltage regulator (DVR) and static synchronous series compensator (SSSC) are utilized for series voltage-sag compensation. A surge protector or simple capacitor or varistor can protect against most overvoltage conditions, while a lightning arrester protects against severe spikes. Electronic filters can remove harmonics. Smart grids and power quality Modern systems use sensors called phasor measurement units (PMU) distributed throughout their network to monitor power quality and in some cases respond automatically to disturbances. Using such smart grid features of rapid sensing and automated self-healing of anomalies in the network promises to bring higher quality power and less downtime while simultaneously supporting power from intermittent power sources and distributed generation, which, if unchecked, would degrade power quality. Compression algorithm A power quality compression algorithm is an algorithm used in the analysis of power quality. To provide high quality electric power service, it is essential to monitor the quality of the electric signals, also termed power quality (PQ), at different locations along an electrical power network. Electrical utilities carefully monitor waveforms and currents at various network locations constantly, to understand what led up to unforeseen events such as power outages and blackouts. This is particularly critical at sites where the environment and public safety are at risk (institutions such as hospitals, sewage treatment plants, mines, etc.). Challenges Engineers use many kinds of meters that read and display electrical power waveforms and calculate parameters of the waveforms. They measure, for example: current and voltage RMS phase relationship between waveforms of a multi-phase signal power factor frequency total harmonic distortion (THD) active power (kW) reactive power (kVAr) apparent power (kVA) active energy (kWh) reactive energy (kVArh) apparent energy (kVAh) and many more In order to sufficiently monitor unforeseen events, Ribeiro et al. explain that it is not enough to display these parameters, but to also capture voltage waveform data at all times. This is impracticable due to the large amount of data involved, causing what is known as the “bottle effect”. For instance, at a sampling rate of 32 samples per cycle, 1,920 samples are collected per second (on a 60 Hz system). For three-phase meters that measure both voltage and current waveforms, the data is 6–8 times as much. More practical solutions developed in recent years store data only when an event occurs (for example, when high levels of power system harmonics are detected) or, alternatively, store only the RMS value of the electrical signals. This data, however, is not always sufficient to determine the exact nature of problems. Raw data compression Nisenblat et al. propose the idea of a power quality compression algorithm (similar to lossy compression methods) that enables meters to continuously store the waveform of one or more power signals, regardless of whether or not an event of interest was identified. This algorithm, referred to as PQZip, empowers a processor with a memory that is sufficient to store the waveform, under normal power conditions, over a long period of time of at least a month, two months or even a year. 
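To see why continuous waveform storage needs this kind of compression, the arithmetic of the raw data volume is worth making explicit. The sketch below uses the 32-samples-per-cycle figure quoted above; the six-channel count and 16-bit sample size are assumptions for illustration.

```python
def raw_waveform_bytes_per_day(samples_per_cycle=32, line_freq_hz=60,
                               channels=6, bytes_per_sample=2):
    """Uncompressed storage needed for continuous waveform capture.

    32 samples/cycle on a 60 Hz system gives the 1,920 samples per second
    quoted above; the channel count and sample width are assumed values.
    """
    samples_per_second = samples_per_cycle * line_freq_hz
    return samples_per_second * channels * bytes_per_sample * 86_400

per_day = raw_waveform_bytes_per_day()
print(f"{per_day / 1e9:.1f} GB per day, about {per_day * 30 / 1e9:.0f} GB per month")
```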
The compression is performed in real time, as the signals are acquired; it calculates a compression decision before all the compressed data is received. For instance, should one parameter remain constant and various others fluctuate, the compression decision retains only what is relevant from the constant data and retains all the fluctuation data. It then decomposes the waveform of the power signal into numerous components, over various periods of the waveform. It concludes the process by compressing the values of at least some of these components over different periods, separately. This real-time compression algorithm, performed independently of the sampling, prevents data gaps and has a typical 1000:1 compression ratio. Aggregated data compression A typical function of a power analyzer is the generation of a data archive aggregated over a given interval. Most typically a 10-minute or 1-minute interval is used, as specified by the IEC/IEEE PQ standards. Significant archive sizes are created during the operation of such an instrument. As Kraus et al. have demonstrated, the compression ratio achievable on such archives using the Lempel–Ziv–Markov chain algorithm, bzip or other similar lossless compression algorithms can be significant. By using prediction and modeling on the time series stored in the actual power quality archive, the efficiency of post-processing compression is usually further improved. This combination of simple techniques implies savings in both data storage and data acquisition processes. Standards The quality of electricity supplied is set forth in international standards and their local derivatives, adopted by different countries: EN50160 is the European standard for power quality, setting the acceptable limits of distortion for the different parameters defining voltage in AC power. IEEE-519 is the North American guideline for power systems. It is defined as "recommended practice" and, unlike EN50160, this guideline refers to current distortion as well as voltage. IEC 61000-4-30 is the standard defining methods for monitoring power quality. Edition 3 (2015) includes current measurements, unlike earlier editions which related to voltage measurement alone. See also Dynamic voltage restoration Rapid voltage change References Literature Power quality Power engineering
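Returning to the aggregated-archive compression described above, the gain from a lossless codec plus a simple prediction step can be demonstrated with the Python standard library alone. The synthetic 10-minute RMS series and the delta predictor below are assumptions chosen to mimic the behaviour described, not the authors' actual pipeline.

```python
import lzma
import random
import struct

# Synthetic archive: one week of 10-minute aggregated RMS voltages around 230 V.
random.seed(0)
values = [230.0]
for _ in range(6 * 24 * 7 - 1):
    values.append(values[-1] + random.gauss(0.0, 0.05))

def pack(series):
    """Serialize a float series as 32-bit little-endian values."""
    return b"".join(struct.pack("<f", x) for x in series)

raw = pack(values)
# Simple prediction/modelling step: keep the first value plus successive differences.
deltas = [values[0]] + [b - a for a, b in zip(values, values[1:])]

print("raw bytes      :", len(raw))
print("lzma(raw)      :", len(lzma.compress(raw)))
print("lzma(predicted):", len(lzma.compress(pack(deltas))))
```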
Electric power quality
[ "Engineering" ]
2,536
[ "Power engineering", "Electrical engineering", "Energy engineering" ]
1,680,145
https://en.wikipedia.org/wiki/Strain%20%28chemistry%29
In chemistry, a molecule experiences strain when its chemical structure undergoes some stress which raises its internal energy in comparison to a strain-free reference compound. The internal energy of a molecule consists of all the energy stored within it. A strained molecule has an additional amount of internal energy which an unstrained molecule does not. This extra internal energy, or strain energy, can be likened to a compressed spring. Much like a compressed spring must be held in place to prevent release of its potential energy, a molecule can be held in an energetically unfavorable conformation by the bonds within that molecule. Without the bonds holding the conformation in place, the strain energy would be released. Summary Thermodynamics The equilibrium of two molecular conformations is determined by the difference in Gibbs free energy of the two conformations. From this energy difference, the equilibrium constant for the two conformations can be determined. If there is a decrease in Gibbs free energy from one state to another, this transformation is spontaneous and the lower energy state is more stable. A highly strained, higher energy molecular conformation will spontaneously convert to the lower energy molecular conformation. Enthalpy and entropy are related to Gibbs free energy through the equation (at a constant temperature): ΔG = ΔH − TΔS. Enthalpy is typically the more important thermodynamic function for determining a more stable molecular conformation. While there are different types of strain, the strain energy associated with all of them is due to the weakening of bonds within the molecule. Since enthalpy is usually more important, entropy can often be ignored. This isn't always the case; if the difference in enthalpy is small, entropy can have a larger effect on the equilibrium. For example, n-butane has two possible conformations, anti and gauche. The anti conformation is more stable by 0.9 kcal mol−1. We would expect that butane is roughly 82% anti and 18% gauche at room temperature. However, there are two possible gauche conformations and only one anti conformation. Therefore, entropy makes a contribution of 0.4 kcal in favor of the gauche conformation. We find that the actual conformational distribution of butane is 70% anti and 30% gauche at room temperature. Determining molecular strain The standard heat of formation (ΔfH°) of a compound is described as the enthalpy change when the compound is formed from its separated elements. When the heat of formation for a compound is different from either a prediction or a reference compound, this difference can often be attributed to strain. For example, ΔfH° for cyclohexane is -29.9 kcal mol−1 while ΔfH° for methylcyclopentane is -25.5 kcal mol−1. Despite having the same atoms and number of bonds, methylcyclopentane is higher in energy than cyclohexane. This difference in energy can be attributed to the ring strain of a five-membered ring, which is absent in cyclohexane. Experimentally, strain energy is often determined using heats of combustion, which is typically an easy experiment to perform. Determining the strain energy within a molecule requires knowledge of the expected internal energy without the strain. There are two ways to do this. First, one could compare to a similar compound that lacks strain, such as in the previous methylcyclopentane example. Unfortunately, it can often be difficult to obtain a suitable compound. An alternative is to use Benson group increment theory. 
As long as suitable group increments are available for the atoms within a compound, a prediction of ΔfH° can be made. If the experimental ΔfH° differs from the predicted ΔfH°, this difference in energy can be attributed to strain energy. Kinds of strain Van der Waals strain Van der Waals strain, or steric strain, occurs when atoms are forced to get closer than their Van der Waals radii allow. Specifically, Van der Waals strain is considered a form of strain where the interacting atoms are at least four bonds away from each other. The amount of steric strain in similar molecules is dependent on the size of the interacting groups; bulky tert-butyl groups take up much more space than methyl groups and often experience greater steric interactions. The effects of steric strain in the reaction of trialkylamines and trimethylboron were studied by Nobel laureate Herbert C. Brown et al. They found that as the size of the alkyl groups on the amine was increased, the equilibrium constant decreased as well. The shift in equilibrium was attributed to steric strain between the alkyl groups of the amine and the methyl groups on boron. Syn-pentane strain There are situations where seemingly identical conformations are not equal in strain energy. Syn-pentane strain is an example of this situation. There are two different ways to put both of the central bonds in n-pentane into a gauche conformation, one of which is 3 kcal mol−1 higher in energy than the other. When the two methyl-substituted bonds are rotated from anti to gauche in opposite directions, the molecule assumes a cyclopentane-like conformation where the two terminal methyl groups are brought into proximity. If the bonds are rotated in the same direction, this doesn't occur. The steric strain between the two terminal methyl groups accounts for the difference in energy between the two similar, yet very different conformations. Allylic strain Allylic strain, or A1,3 strain, is closely related to syn-pentane strain. An example of allylic strain can be seen in the compound 2-pentene. It's possible for the ethyl substituent of the olefin to rotate such that the terminal methyl group is brought near to the vicinal methyl group of the olefin. These types of compounds usually take a more linear conformation to avoid the steric strain between the substituents. 1,3-diaxial strain 1,3-diaxial strain is another form of strain similar to syn-pentane strain. In this case, the strain occurs due to steric interactions between a substituent of a cyclohexane ring ('α') and gauche interactions between the alpha substituent and both methylene carbons two bonds away from the substituent in question (hence, 1,3-diaxial interactions). When the substituent is axial, it is brought near to an axial gamma hydrogen. The amount of strain is largely dependent on the size of the substituent and can be relieved by flipping into the other chair conformation, placing the substituent in an equatorial position. The difference in energy between conformations is called the A value and is well known for many different substituents. The A value is a thermodynamic parameter and was originally measured along with other methods using the Gibbs free energy equation and, for example, the Meerwein–Ponndorf–Verley reduction/Oppenauer oxidation equilibrium for the measurement of axial versus equatorial values of cyclohexanone/cyclohexanol (0.7 kcal mol−1). Torsional strain Torsional strain is the resistance to bond twisting. In cyclic molecules, it is also called Pitzer strain. 
Torsional strain occurs when atoms separated by three bonds are placed in an eclipsed conformation instead of the more stable staggered conformation. The barrier of rotation between staggered conformations of ethane is approximately 2.9 kcal mol−1. It was initially believed that the barrier to rotation was due to steric interactions between vicinal hydrogens, but the Van der Waals radius of hydrogen is too small for this to be the case. Recent research has shown that the staggered conformation may be more stable due to a hyperconjugative effect. Rotation away from the staggered conformation interrupts this stabilizing force. More complex molecules, such as butane, have more than one possible staggered conformation. The anti conformation of butane is approximately 0.9 kcal mol−1 (3.8 kJ mol−1) more stable than the gauche conformation. Both of these staggered conformations are much more stable than the eclipsed conformations. Instead of a hyperconjugative effect, such as that in ethane, the strain energy in butane is due to both steric interactions between methyl groups and angle strain caused by these interactions. Ring strain According to the VSEPR theory of molecular bonding, the preferred geometry of a molecule is that in which both bonding and non-bonding electrons are as far apart as possible. In molecules, it is quite common for these angles to be somewhat compressed or expanded compared to their optimal value. This strain is referred to as angle strain, or Baeyer strain. The simplest examples of angle strain are small cycloalkanes such as cyclopropane and cyclobutane, which are discussed below. Furthermore, there is often eclipsing or Pitzer strain in cyclic systems. These and possible transannular interactions were summarized early by H.C. Brown as internal strain, or I-Strain. Molecular mechanics or force field approaches allow to calculate such strain contributions, which then can be correlated e.g. with reaction rates or equilibria. Many reactions of alicyclic compounds, including equilibria, redox and solvolysis reactions, which all are characterized by transition between sp2 and sp3 state at the reaction center, correlate with corresponding strain energy differences SI (sp2 -sp3). The data reflect mainly the unfavourable vicinal angles in medium rings, as illustrated by the severe increase of ketone reduction rates with increasing SI (Figure 1). Another example is the solvolysis of bridgehead tosylates with steric energy differences between corresponding bromide derivatives (sp3) and the carbenium ion as sp2- model for the transition state. (Figure 2) In principle, angle strain can occur in acyclic compounds, but the phenomenon is rare. Small rings Cyclohexane is considered a benchmark in determining ring strain in cycloalkanes and it is commonly accepted that there is little to no strain energy. In comparison, smaller cycloalkanes are much higher in energy due to increased strain. Cyclopropane is analogous to a triangle and thus has bond angles of 60°, much lower than the preferred 109.5° of an sp3 hybridized carbon. Furthermore, the hydrogens in cyclopropane are eclipsed. Cyclobutane experiences similar strain, with bond angles of approximately 88° (it isn't completely planar) and eclipsed hydrogens. The strain energy of cyclopropane and cyclobutane are 27.5 and 26.3 kcal mol−1, respectively. Cyclopentane experiences much less strain, mainly due to torsional strain from eclipsed hydrogens: its preferred conformations interconvert by a process called pseudorotation. 
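The 0.9 kcal mol−1 anti/gauche difference of butane quoted above can be turned into populations with a Boltzmann weighting that counts the two equivalent gauche rotamers, reproducing the roughly 70:30 distribution given in the Thermodynamics section. The sketch below assumes 298 K; the function name is illustrative.

```python
import math

R = 1.987e-3  # gas constant in kcal mol^-1 K^-1

def conformer_populations(delta_e_kcal, degeneracy_high, temperature=298.0):
    """Fractions of the low-energy (anti) and high-energy (gauche) conformers.

    delta_e_kcal: energy of the higher conformer relative to the lower one.
    degeneracy_high: number of equivalent higher-energy conformers (2 for gauche butane).
    """
    boltzmann = math.exp(-delta_e_kcal / (R * temperature))
    weight_low, weight_high = 1.0, degeneracy_high * boltzmann
    total = weight_low + weight_high
    return weight_low / total, weight_high / total

anti, gauche = conformer_populations(0.9, 2)
print(f"anti ≈ {anti:.0%}, gauche ≈ {gauche:.0%}")   # roughly 70% / 30%, as stated in the text
```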
Ring strain can be considerably higher in bicyclic systems. For example, bicyclobutane, C4H6, is noted for being one of the most strained compounds that is isolatable on a large scale; its strain energy is estimated at 63.9 kcal mol−1 (267 kJ mol−1). Transannular strain Medium-sized rings (7–13 carbons) experience more strain energy than cyclohexane, due mostly to deviation from ideal vicinal angles, or Pitzer strain. Molecular mechanics calculations indicate that transannular strain, also known as Prelog strain, does not play an essential role. Transannular reactions however, such as 1,5-shifts in cyclooctane substitution reactions, are well known. Bicyclic systems The amount of strain energy in bicyclic systems is commonly the sum of the strain energy in each individual ring. This isn't always the case, as sometimes the fusion of rings induces some extra strain. Strain in allosteric systems In synthetic allosteric systems there are typically two or more conformers with stability differences due to strain contributions. Positive cooperativity for example results from increased binding of a substrate A to a conformer C2 which is produced by binding of an effector molecule E. If the conformer C2 has a similar stability as another equilibrating conformer C1 a fit induced by the substrate A will lead to binding of A to C2 also in absence of the effector E. Only if the stability of the conformer C2 is significantly smaller, meaning that in absence of an effector E the population of C2 is much smaller than that of C1, the ratio K2/K1 which measures the efficiency of the allosteric signal will increase. The ratio K2/K1 can be related directly to the strain energy difference between the conformers C1 and C2; if it is small higher concentrations of A will directly bind to C2 and make the effector E inefficient. In addition, the response time of such allosteric switches depends on the strain of the conformer interconversion transitions state. See also Strain (materials science) References Stereochemistry
Strain (chemistry)
[ "Physics", "Chemistry" ]
2,763
[ "Spacetime", "Stereochemistry", "Space", "nan" ]
1,680,187
https://en.wikipedia.org/wiki/Van%20der%20Waals%20strain
Van der Waals strain is strain resulting from Van der Waals repulsion when two substituents in a molecule approach each other with a distance less than the sum of their Van der Waals radii. Van der Waals strain is also called Van der Waals repulsion and is related to steric hindrance. One of the most common forms of this strain is eclipsing hydrogen, in alkanes. In rotational and pseudorotational mechanisms In molecules whose vibrational mode involves a rotational or pseudorotational mechanism (such as the Berry mechanism or the Bartell mechanism), Van der Waals strain can cause significant differences in potential energy, even between molecules with identical geometry. PF5, for example, has significantly lower potential energy than PCl5. Despite their identical trigonal bipyramidal molecular geometry, the higher electron count of chlorine as compared to fluorine causes a potential energy spike as the molecule enters its intermediate in the mechanism and the substituents draw nearer to each other. See also Van der Waals force Van der Waals molecule Van der Waals radius Van der Waals surface References Stereochemistry Strain
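The onset of Van der Waals strain below the contact distance can be visualised with a standard 12-6 Lennard-Jones model. The sketch below is purely illustrative; the well depth and contact distance are invented round numbers, not parameters for any specific atom pair.

```python
def lennard_jones(r, epsilon=0.3, sigma=2.9):
    """12-6 Lennard-Jones potential (kcal/mol) at separation r (angstrom).

    epsilon is the well depth, sigma the distance at which the potential is zero.
    """
    x = (sigma / r) ** 6
    return 4.0 * epsilon * (x * x - x)

# The energy rises steeply once the separation drops below the contact distance.
for r in (4.0, 3.3, 2.9, 2.6, 2.3):
    print(f"r = {r:>3} Å  ->  E = {lennard_jones(r):8.2f} kcal/mol")
```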
Van der Waals strain
[ "Physics", "Chemistry" ]
241
[ "Stereochemistry", "Space", "Stereochemistry stubs", "nan", "Spacetime" ]
1,681,010
https://en.wikipedia.org/wiki/Cycle%20graph%20%28algebra%29
In group theory, a subfield of abstract algebra, a cycle graph of a group is an undirected graph that illustrates the various cycles of that group, given a set of generators for the group. Cycle graphs are particularly useful in visualizing the structure of small finite groups. A cycle is the set of powers of a given group element a, where an, the n-th power of an element a, is defined as the product of a multiplied by itself n times. The element a is said to generate the cycle. In a finite group, some non-zero power of a must be the group identity, which we denote either as e or 1; the lowest such power is the order of the element a, the number of distinct elements in the cycle that it generates. In a cycle graph, the cycle is represented as a polygon, with its vertices representing the group elements and its edges indicating how they are linked together to form the cycle. Definition Each group element is represented by a node in the cycle graph, and enough cycles are represented as polygons in the graph so that every node lies on at least one cycle. All of those polygons pass through the node representing the identity, and some other nodes may also lie on more than one cycle. Suppose that a group element a generates a cycle of order 6 (has order 6), so that the nodes a, a2, a3, a4, a5, and a6 = e are the vertices of a hexagon in the cycle graph. The element a2 then has order 3; but making the nodes a2, a4, and e be the vertices of a triangle in the graph would add no new information. So, only the primitive cycles need be considered, those that are not subsets of another cycle. Also, the node a5, which also has order 6, generates the same cycle as does a itself; so we have at least two choices for which element to use in generating a cycle --- often more. To build a cycle graph for a group, we start with a node for each group element. For each primitive cycle, we then choose some element a that generates that cycle, and we connect the node for e to the one for a, a to a2, ..., ak−1 to ak, etc., until returning to e. The result is a cycle graph for the group. When a group element a has order 2 (so that multiplication by a is an involution), the rule above would connect e to a by two edges, one going out and the other coming back. Except when the intent is to emphasize the two edges of such a cycle, it is typically drawn as a single line between the two elements. Note that this correspondence between groups and graphs is not one-to-one in either direction: Two different groups can have the same cycle graph, and two different graphs can be cycle graphs for a single group. We give examples of each in the non-uniqueness section. Example and properties As an example of a group cycle graph, consider the dihedral group Dih4. The multiplication table for this group is shown on the left, and the cycle graph is shown on the right, with e specifying the identity element. Notice the cycle {e, a, a2, a3} in the multiplication table, with a4 = e. The inverse a−1 = a3 is also a generator of this cycle: (, , and . Similarly, any cycle in any group has at least two generators, and may be traversed in either direction. More generally, the number of generators of a cycle with n elements is given by the Euler φ function of n, and any of these generators may be written as the first node in the cycle (next to the identity e); or more commonly the nodes are left unmarked. Two distinct cycles cannot intersect in a generator. 
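The construction described above is easy to carry out mechanically for a small finite group. The sketch below uses the multiplicative group of units modulo 16 purely as a convenient worked example (it is not one of the groups discussed in this article); it generates the cycle of every element and keeps only the primitive ones.

```python
from math import gcd

def primitive_cycles_of_units(n):
    """Primitive cycles of the multiplicative group of units modulo n."""
    elements = [a for a in range(1, n) if gcd(a, n) == 1]

    def cycle(a):
        """Set of powers of a (the cycle that a generates), including the identity 1."""
        powers, x = {1}, a
        while x != 1:
            powers.add(x)
            x = (x * a) % n
        return frozenset(powers)

    all_cycles = {cycle(a) for a in elements}
    # Keep only primitive cycles: those not properly contained in another cycle.
    return [sorted(c) for c in all_cycles
            if not any(c < other for other in all_cycles)]

for c in primitive_cycles_of_units(16):
    print(c)
# The units mod 16 form a group of order 8; the primitive cycles are two
# 4-element cycles ({1,3,9,11} and {1,5,9,13}) and two 2-element cycles
# ({1,7} and {1,15}), all passing through the identity node 1.
```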
Cycles that contain a non-prime number of elements have cyclic subgroups that are not shown in the graph. For the group Dih4 above, we could draw a line between a2 and e since , but since a2 is part of a larger cycle, this is not an edge of the cycle graph. There can be ambiguity when two cycles share a non-identity element. For example, the 8-element quaternion group has cycle graph shown at right. Each of the elements in the middle row when multiplied by itself gives −1 (where 1 is the identity element). In this case we may use different colors to keep track of the cycles, although symmetry considerations will work as well. As noted earlier, the two edges of a 2-element cycle are typically represented as a single line. The inverse of an element is the node symmetric to it in its cycle, with respect to the reflection which fixes the identity. Non-uniqueness The cycle graph of a group is not uniquely determined up to graph isomorphism; nor does it uniquely determine the group up to group isomorphism. That is, the graph obtained depends on the set of generators chosen, and two different groups (with chosen sets of generators) can generate the same cycle graph. A single group can have different cycle graphs For some groups, choosing different elements to generate the various primitive cycles of that group can lead to different cycle graphs. There is an example of this for the abelian group , which has order 20. We denote an element of that group as a triple of numbers , where and each of and is either 0 or 1. The triple is the identity element. In the drawings below, is shown above and . This group has three primitive cycles, each of order 10. In the first cycle graph, we choose, as the generators of those three cycles, the nodes , , and . In the second, we generate the third of those cycles --- the blue one --- by starting instead with . The two resulting graphs are not isomorphic because they have diameters 5 and 4 respectively. Different groups can have the same cycle graph Two different (non-isomorphic) groups can have cycle graphs that are isomorphic, where the latter isomorphism ignores the labels on the nodes of the graphs. It follows that the structure of a group is not uniquely determined by its cycle graph. There is an example of this already for groups of order 16, the two groups being and . The abelian group is the direct product of the cyclic groups of orders 8 and 2. The non-abelian group is that semidirect product of and in which the non-identity element of maps to the multiply-by-5 automorphism of . In drawing the cycle graphs of those two groups, we take to be generated by elements s and t with where that latter relation makes abelian. And we take to be generated by elements and with Here are cycle graphs for those two groups, where we choose to generate the green cycle on the left and to generate that cycle on the right: In the right-hand graph, the green cycle, after moving from 1 to , moves next to because History Cycle graphs were investigated by the number theorist Daniel Shanks in the early 1950s as a tool to study multiplicative groups of residue classes. Shanks first published the idea in the 1962 first edition of his book Solved and Unsolved Problems in Number Theory. In the book, Shanks investigates which groups have isomorphic cycle graphs and when a cycle graph is planar. 
In the 1978 second edition, Shanks reflects on his research on class groups and the development of the baby-step giant-step method: Cycle graphs are used as a pedagogical tool in Nathan Carter's 2009 introductory textbook Visual Group Theory. Graph characteristics of particular group families Certain group types give typical graphs: Cyclic groups Zn, order n, is a single cycle graphed simply as an n-sided polygon with the elements at the vertices: When n is a prime number, groups of the form (Zn)m will have n-element cycles sharing the identity element: Dihedral groups Dihn, order 2n consists of an n-element cycle and n 2-element cycles: Dicyclic groups, Dicn = Q4n, order 4n: Other direct products: Symmetric groups – The symmetric group Sn contains, for any group of order n, a subgroup isomorphic to that group. Thus the cycle graph of every group of order n will be found in the cycle graph of Sn. See example: Subgroups of S4 Extended example: Subgroups of the full octahedral group The full octahedral group is the direct product of the symmetric group S4 and the cyclic group Z2. Its order is 48, and it has subgroups of every order that divides 48. In the examples below nodes that are related to each other are placed next to each other, so these are not the simplest possible cycle graphs for these groups (like those on the right). Like all graphs a cycle graph can be represented in different ways to emphasize different properties. The two representations of the cycle graph of S4 are an example of that. See also List of small groups Cayley graph References Skiena, S. (1990). Cycles, Stars, and Wheels. Implementing Discrete Mathematics: Combinatorics and Graph Theory with Mathematica (pp. 144-147). Pemmaraju, S., & Skiena, S. (2003). Cycles, Stars, and Wheels. Computational Discrete Mathematics: Combinatorics and Graph Theory with Mathematica (pp. 248-249). Cambridge University Press. External links Abstract algebra Group theory Application-specific graphs
Cycle graph (algebra)
[ "Mathematics" ]
1,965
[ "Group theory", "Abstract algebra", "Fields of abstract algebra", "Algebra" ]
17,851,628
https://en.wikipedia.org/wiki/Suniva
Suniva is an American owned, U.S. based manufacturer of crystalline silicon photovoltaic (PV) solar cells. Headquartered in metropolitan Atlanta, with a cell manufacturing facility in Georgia, Suniva has sold its PV products globally. Suniva's distribution network for solar panels covered over 53 distributors and wholesalers, across over seven different countries. History Suniva spun out of Georgia Institute of Technology's University Center of Excellence in Photovoltaics and the work of Dr. Ajeet Rohatgi in 2007. Dr. Rohatgi is the founder and director of the photovoltaic (PV) research program at Georgia Tech (since 1985) and the founding director of the U.S. Department of Energy-funded University Center of Excellence in Photovoltaics (UCEP). Suniva built its first manufacturing plant in Norcross, GA in 2008, which had an initial production capacity of 32 MW and has since expanded to over 400 MW. In July 2014, Suniva announced its 200 MW module facility in Saginaw, MI. Following a successful reorganization and exit from Bankruptcy in 2019, Suniva is now owned by Lion Point Capital, a New York based investment firm. 2017 Sections 201 & 202 Trade Complaint In April 2017, Suniva filed for bankruptcy. Then the company filed trade complaints against its international competitors under Sections 201 and 202 of the Trade Act of 1974 with the ITC eight days later. It asks for “global safeguard relief” from imports of crystalline silicon solar PV cells and modules. SolarWorld joined the complaint a month later. Opposition within the industry was fierce, with opponents arguing forcefully that a favorable finding on the trade case would destroy the rooftop solar industry by raising the prices of solar modules to unaffordable levels. Suniva and SolarWorld disputed the claims, suggesting that not only would a favorable decision not harm the industry but in fact would create 114,800 jobs instead. Initially only supported by the Congresspeople who represented Suniva and SolarWorld's districts, as decision day got closer, more groups came to support the trade petition. The USITC rendered a unanimous (4/4) decision on Sept. 22 that imports of solar panels had injured domestic manufacturers. Both Suniva and SolarWorld offered their suggestions to the USITC on September 28, 2017. On 22 January 2018, the Trump administration applied a tariff of 30 percent to imported solar panels. The tariff will last one year before stepping down to 25 percent in 2019, 20 percent in 2020, and 15 percent in 2021. The tariff expired in 2022. References Photovoltaics manufacturers
Suniva
[ "Engineering" ]
529
[ "Photovoltaics manufacturers", "Engineering companies" ]
17,854,576
https://en.wikipedia.org/wiki/Adjacent%20channel%20power%20ratio
Adjacent Channel Power Ratio (ACPR) is the ratio of the total power in the adjacent channel (intermodulation signal) to the main channel's power (useful signal). Ratio The ratio of the total power in the adjacent channel (intermodulation signal) to the main channel's power (useful signal). There are two ways of measuring ACPR. The first way is to take 10*log of the ratio of the total output power to the power in the adjacent channel. The second (and much more popular) method is to find the ratio of the output power in a smaller bandwidth around the center of the carrier to the power in the adjacent channel. The smaller bandwidth is equal to the bandwidth of the adjacent channel signal. The second way is more popular because it can be measured easily. ACPR is desired to be as low as possible. A high ACPR indicates that significant spectral spreading has occurred. See also Spectral leakage Spread spectrum References Power electronics Signal processing
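The second measurement method described above — power in a bandwidth around the carrier compared with power in the adjacent channel — can be written out in a few lines. In the sketch below the channel spacing, bandwidth, and toy spectrum are invented for illustration, and the one-sided spectrum is assumed to be uniformly sampled.

```python
import numpy as np

def acpr_db(freqs, psd, f_carrier, bandwidth, channel_spacing):
    """ACPR in dB: adjacent-channel power relative to main-channel power.

    freqs, psd: uniformly spaced frequency axis (Hz) and power spectral density.
    The adjacent channel is taken at +channel_spacing from the carrier and is
    assumed to have the same bandwidth as the main channel.
    """
    df = freqs[1] - freqs[0]

    def band_power(f_center):
        mask = np.abs(freqs - f_center) <= bandwidth / 2
        return np.sum(psd[mask]) * df

    return 10 * np.log10(band_power(f_carrier + channel_spacing) / band_power(f_carrier))

# Toy spectrum: a main channel at 2.000 GHz plus weak leakage 5 MHz away.
freqs = np.linspace(1.99e9, 2.01e9, 20_001)                   # 1 kHz resolution
psd = 1e-3 * np.exp(-0.5 * ((freqs - 2.000e9) / 1e6) ** 2)    # main channel
psd += 1e-6 * np.exp(-0.5 * ((freqs - 2.005e9) / 1e6) ** 2)   # adjacent-channel regrowth
print(f"ACPR ≈ {acpr_db(freqs, psd, 2.000e9, 3e6, 5e6):.1f} dB")  # about -29 dB here
```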
Adjacent channel power ratio
[ "Technology", "Engineering" ]
196
[ "Telecommunications engineering", "Computer engineering", "Signal processing", "Electronic engineering", "Power electronics" ]
16,667,158
https://en.wikipedia.org/wiki/Electrostatic%20voltmeter
Electrostatic voltmeter can refer to an electrostatic charge meter, known also as a surface DC voltmeter, or to a voltmeter used to measure large electrical potentials, traditionally called an electrostatic voltmeter. Charge meter A surface DC voltmeter is an instrument that measures voltage with no electric charge transfer. It can accurately measure surface potential (voltage) on materials without making physical contact, and so there is no electrostatic charge transfer or loading of the voltage source. Explanation Many voltage measurements cannot be made using conventional contacting voltmeters because they require charge transfer to the voltmeter, thus causing loading and modification of the source voltage. For example, when measuring voltage distribution on a dielectric surface, any measurement technique that requires charge transfer, no matter how small, will modify or destroy the actual data. Principle of operation In practice, an electrostatic charge monitoring probe is placed close (1 mm to 5 mm) to the surface to be measured, and the probe body is driven to the same potential as the measured unknown by an electronic circuit. This achieves a high-accuracy measurement that is virtually insensitive to variations in probe-to-surface distances. The technique also prevents arc-over between the probe and measured surface when measuring high voltages. Voltmeter The operating principle of an electrostatic voltmeter is similar to that of an electrometer; it is, however, designed to measure high potential differences, typically from a few hundred to many thousands of volts. Principle of operation An electrostatic voltmeter uses the attraction force between two charged surfaces to create a deflection of a pointer directly calibrated in volts. Since the attraction force is the same regardless of the polarity of the charged surfaces (as long as the charges are opposite), the electrostatic voltmeter can measure DC voltages of either polarity. Typical construction is shown in the drawing. The pivoted sector NN is attracted to the fixed sector QQ. The small weight w counterbalances the moving sector and indicates the voltage by the pointer P. In newer instruments, the weight is replaced by a spring, thus allowing the meter to be used in horizontal and vertical positions; this form is shown in the photograph. The fixed sector is insulated from the rest of the meter. The butterfly-shaped moving sector is made of thin aluminum foil. To minimize high electrical stress, the fixed and moving sectors are highly polished without sharp corners. The instrument will draw negligible current from a DC source. It can also be used on AC and will draw a small current due to the capacitance of the rotor system. To reduce the torque required and improve the sensitivity and speed of response, a small mirror attached to the rotor may replace the pointer, with a light beam deflected by the mirror over a scale. References Voltmeters Electrical meters Measuring instruments
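The attraction-force principle lends itself to a quick order-of-magnitude calculation: for a rotor–stator capacitance C(θ), the electrostatic torque is ½·V²·dC/dθ, which at rest is balanced by the control spring. The capacitance gradient and spring constant in the sketch below are invented values rather than data for any particular instrument.

```python
def deflection_angle(voltage, dC_dtheta, spring_constant):
    """Static deflection of an electrostatic voltmeter movement.

    Electrostatic torque T = 0.5 * V**2 * dC/dtheta (independent of polarity,
    which is why the instrument reads DC of either sign and also AC).
    At equilibrium this equals the spring torque k * theta.
    """
    torque = 0.5 * voltage ** 2 * dC_dtheta
    return torque / spring_constant          # radians

# Assumed movement: dC/dtheta = 2 pF/rad, spring constant = 2e-6 N·m/rad.
for v in (500, 1000, 2000):
    theta = deflection_angle(v, 2e-12, 2e-6)
    print(f"{v:>5} V -> {theta:.3f} rad")
# Note the square-law scale: doubling the voltage quadruples the deflection.
```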
Electrostatic voltmeter
[ "Physics", "Technology", "Engineering" ]
659
[ "Voltmeters", "Physical quantities", "Measuring instruments", "Voltage", "Electrical meters" ]
16,672,559
https://en.wikipedia.org/wiki/Pressure%20exchanger
A pressure exchanger transfers pressure energy from a high pressure fluid stream to a low pressure fluid stream. Many industrial processes operate at elevated pressures and have high pressure waste streams. One way of providing a high pressure fluid to such a process is to transfer the waste pressure to a low pressure stream using a pressure exchanger. One particularly efficient type of pressure exchanger is a rotary pressure exchanger. This device uses a cylindrical rotor with longitudinal ducts parallel to its rotational axis. The rotor spins inside a sleeve between two end covers. Pressure energy is transferred directly from the high pressure stream to the low pressure stream in the ducts of the rotor. Some fluid that remains in the ducts serves as a barrier that inhibits mixing between the streams. This rotational action is similar to that of an old fashioned machine gun firing high pressure bullets and it is continuously refilled with new fluid cartridges. The ducts of the rotor charge and discharge as the pressure transfer process repeats itself. The performance of a pressure exchanger is measured by the efficiency of the energy transfer process and by the degree of mixing between the streams. The energy of the streams is the product of their flow volumes and pressures. Efficiency is a function of the pressure differentials and the volumetric losses (leakage) through the device computed with the following equation: where Q is flow, P is pressure, L is leakage flow, HDP is high pressure differential, LDP is low pressure differential, the subscript B refers to the low pressure feed to the device and the subscript G refers to the high pressure feed to the device. Mixing is a function of the concentrations of the species in the inlet streams and the ratio of flow volumes to the device. Reverse osmosis One application in which pressure exchangers are widely used is reverse osmosis (RO). In an RO system, pressure exchangers are used as energy recovery devices (ERDs). As illustrated, high-pressure concentrate from the membranes [C] is directed [3] to the ERD [D]. The ERD uses this high-pressure concentrate stream to pressurize the low-pressure seawater stream (stream [1] becomes stream [4]), which it then merges (with the aid of a circulation pump [B]) into the highest-pressure seawater stream created by the high-pressure pump [A]. This combined stream feeds the membranes [C]. The concentrate leaves the ERD at low pressure [5], expelled by the incoming feedwater flow [1]. Pressure exchangers save energy in these systems by reducing the load on the high pressure pump. In a seawater RO system operating at a 40% membrane water recovery rate, the ERD supplies 60% of the membrane feed flow. Energy is consumed by the circulation pump, however, because this pump merely circulates and does not pressurize water, its energy consumption is almost negligible: less than 3% of the energy consumed by the high pressure pump. Therefore, nearly 60% of the membrane feed flow is pressurized with almost no energy input. Applications Seawater desalination plants have produced potable water for many years. However, until recently desalination had been used only in special circumstances because of the high energy consumption of the process. Early designs for desalination plants made use of various evaporation technologies. 
The most advanced are the multi-stage flash distillation seawater evaporation desalinators, which make use of multiple stages and have an energy consumption of over 9 kWh per cubic meter of potable water produced. For this reason large seawater desalinators were initially constructed in locations with low energy costs, such as the Middle East, or next to process plants with available waste heat. In the 1970s the seawater reverse osmosis (SWRO) process was developed which made potable water from seawater by forcing it under high pressure through a tight membrane thus filtering out salts and impurities. These salts and impurities are discharged from the SWRO device as a concentrated brine solution in a continuous stream, which contains a large amount of high-pressure energy. Most of this energy can be recovered with a suitable device. Many early SWRO plants built in the 1970s and early 1980s had an energy consumption of over 6.0 kWh per cubic meter of potable water produced, due to low membrane performance, pressure drop limitations and the absence of energy recovery devices. An example where a pressure exchange engine finds application is in the production of potable water using the reverse osmosis membrane process. In this process, a feed saline solution is pumped into a membrane array at high pressure. The input saline solution is then divided by the membrane array into super saline solution (brine) at high pressure and potable water at low pressure. While the high pressure brine is no longer useful in this process as a fluid, the pressure energy that it contains has high value. A pressure exchange engine is employed to recover the pressure energy in the brine and transfer it to feed saline solution. After transfer of the pressure energy in the brine flow, the brine is expelled at low pressure to drain. Nearly all reverse osmosis plants operated for the desalination of sea water in order to produce drinking water in industrial scale are equipped with an energy recovery system based on turbines. These are activated by the concentrate (brine) leaving the plant and transfer the energy contained in the high pressure of this concentrate usually mechanically to the high-pressure pump. In the pressure exchanger the energy contained in the brine is transferred hydraulically and with an efficiency of approximately 98% to the feed. This reduces the energy demand for the desalination process significantly and thus the operating costs. Therefrom results an economic energy recovery, amortization times for such systems varying between 2 and 4 years depending on the place of operation. Reduced energy and capital costs mean that for the first time ever it is possible to produce potable water from seawater at a cost below $1 per cubic meter in many locations worldwide. Although the cost may be a bit higher on islands with high power costs, the PE has the potential to rapidly expand the market for seawater desalination. By means of the application of a pressure exchange system, which is already used in other domains, a considerably higher efficiency of energy recovery of reverse osmosis systems may be achieved than with the use of reverse running pumps or turbines. The pressure exchange system is suited, above all, for bigger plants i.e. approx. ≥ 2000 m3/d permeate production. See also Richard Stover, pioneered the development of an energy recovery device currently in use in most seawater reverse osmosis desalination plants References Energy Recovery Device Performance Analysis by Richard L. Stover Ph. D. 
Ghalilah SWRO Plant by Richard L. Stover Ph. D. http://www.energyrecovery.com/news/documents/ERDsforSWRO.pdf http://www.energyrecovery.com/news/pdf/eri_launches_advanced_swro.doc https://archive.today/20130421173348/http://www.patentstorm.us/patents/7306437-description.html Fluid dynamics Mechanical engineering Membrane technology
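Returning to the energy figures discussed in this article, the saving offered by an energy recovery device can be estimated with a simplified back-of-the-envelope model. The sketch below is not the efficiency equation referred to earlier; the 60 bar feed pressure, 40% recovery, 80% pump efficiency and 98% transfer efficiency are assumed round numbers, and the circulation pump and low-pressure losses are ignored.

```python
def swro_specific_energy(feed_pressure_bar, recovery, pump_eff, erd_eff=None):
    """Pump energy in kWh per m^3 of permeate for a simplified SWRO train.

    Without an ERD the high-pressure pump pressurizes the whole feed stream.
    With an ERD, the brine fraction (1 - recovery) of the feed is pressurized
    by recovered brine energy, so the pump handles only the permeate fraction
    plus the ERD's transfer losses.
    """
    pressure_j_per_m3 = feed_pressure_bar * 1e5           # 1 bar = 1e5 J per m^3
    feed_per_m3_permeate = 1.0 / recovery
    if erd_eff is None:
        pumped = feed_per_m3_permeate                      # pump handles all feed
    else:
        brine = feed_per_m3_permeate - 1.0
        pumped = 1.0 + brine * (1.0 - erd_eff)             # pump tops up ERD losses
    return pressure_j_per_m3 * pumped / pump_eff / 3.6e6   # J -> kWh

print(f"no ERD : {swro_specific_energy(60, 0.40, 0.80):.1f} kWh/m^3")
print(f"with PX: {swro_specific_energy(60, 0.40, 0.80, erd_eff=0.98):.1f} kWh/m^3")
```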
Pressure exchanger
[ "Physics", "Chemistry", "Engineering" ]
1,499
[ "Applied and interdisciplinary physics", "Separation processes", "Chemical engineering", "Membrane technology", "Mechanical engineering", "Piping", "Fluid dynamics" ]
11,305,573
https://en.wikipedia.org/wiki/Chemical%20vapor%20infiltration
Chemical vapour infiltration (CVI) is a ceramic engineering process whereby matrix material is infiltrated into fibrous preforms by the use of reactive gases at elevated temperature to form fiber-reinforced composites. The earliest use of CVI was the infiltration of fibrous alumina with chromium carbide. CVI can be applied to the production of carbon-carbon composites and ceramic-matrix composites. A similar technique is chemical vapour deposition (CVD), the main difference being that the deposition of CVD is on hot bulk surfaces, while CVI deposition is on porous substrates. Process During chemical vapour infiltration, the fibrous preform is supported on a porous metallic plate through which a mixture of carrier gas along with matrix material is passed at an elevated temperature. The preforms can be made using yarns or woven fabrics or they can be filament-wound or braided three-dimensional shapes. The infiltration takes place in a reactor which is connected to an effluent-treatment plant where the gases and residual matrix material are chemically treated. Induction heating is used in a conventional isothermal and isobaric CVI. A typical demonstration of the process is shown in Figure 1. Here, the gases and matrix material enter the reactor from the feed system at the bottom of the reactor. The fibrous preform undergoes a chemical reaction at high temperature with the matrix material and thus the latter infiltrates in the fiber or preform crevices. The CVI growth mechanism is shown in Figure 2. Here, as the reaction between fibre surface and the matrix material takes place, a coating of matrix is formed on the fibre surface while the fibre diameter decreases. The unreacted reactants along with gases exit the reactor via outlet system and are transferred to an effluent treatment plant. Modified CVI The ‘hot wall’ technique – isothermal and isobaric CVI, is still widely used. However, the processing time is typically very long and the deposition rate is slow, so new routes have been invented to develop more rapid infiltration techniques: Thermal-gradient CVI with forced flow – In this process, a forced flow of gases and matrix material is used to achieve less porous and more uniformly dense material. Here, the gaseous mixture along with the matrix material is passed at a pressurised flow through the preform or fibrous material. This process is carried out at a temperature gradient from 1050 °C at water cooled zone to 1200 °C at furnace zone is achieved. The Figure 3 shows the diagrammatic representation of a typical Forced-flow CVI (FCVI). Types of ceramic matrix composites with process parameters Table 1 : Examples of Different processes of CMCs. Examples Some examples where CVI process is used in the manufacturing are: Carbon / Carbon Composites (C/C) Based on previous study, a PAN-based carbon felt is chosen as preform, while kerosene is chosen as a precursor. The infiltration of matrix in the preform is performed at 1050 °C for several hours at atmospheric pressure by the FCVI. The inner of the upper surface of preform temperature should be kept at 1050 °C, middle at 1080 °C and the outer at 1020 °C. Nitrogen gas flows through the reactor for safety. Silicon Carbide / Silicon Carbide (SiC/SiC) Matrix:CH3SiCl3 (g) → SiC(s)+ 3 HCl(g) Interphase: CH4(g) → C(s)+ 2H2(g) The SiC fibers serve as a preform which is heated up to about 1000 °C in vacuum and then CH4 gas is introduced into the preform as the interlayer between fiber and matrix. This process lasts for 70 minutes under pressure. 
Next, the methyltrichlorosilane was carried by hydrogen into the chamber. The preform is in SiC matrix for hours at 1000 °C under pressure. Advantages of CVI Residual stresses are lower due to lower infiltration temperature. Large complex shapes can be produced. The composite prepared by this method have enhanced mechanical properties, corrosion resistance and thermal-shock resistance. Various matrices and fibre combination can be used to produce different composite properties. (SiC, C, Si3N4, BN, B4C, ZrC, etc.). There is very little damage to fibres and to the geometry of the preform due to low infiltration temperature and pressures. This process gives considerable flexibility in selecting fibers and matrices. Very pure and uniform matrix can be obtained by carefully controlling the purity of gases. Disadvantages The residual porosity is about 10 to 15% which is high; the production rate is low; the capital investment, production and processing costs are high. Applications CVI is used to build a variety of high-performance components: Heat-shield systems for space vehicles. High-temperature systems like combustion chambers, turbine blades, stator vanes, and disc brakes which experience extreme thermal shock. In the case of burners, high-temperature valves and gas ducts, oxides of CMCs are used. Components of slide bearings for providing corrosion resistance and wear resistance. References External links Center for Composite Materials World Academy of Ceramics Ceramic materials Chemical vapor deposition Industrial processes Plastics industry
Chemical vapor infiltration
[ "Chemistry", "Engineering" ]
1,086
[ "Chemical vapor deposition", "Ceramic engineering", "Ceramic materials" ]
11,307,608
https://en.wikipedia.org/wiki/Nautilus%20%28video%20game%29
Nautilus is a video game for Atari 8-bit computers created by Mike Potter and published by Synapse Software in 1982. The players control a submarine, the Nautilus, or a destroyer, the Colossus, attempting to either destroy or rebuild an underwater city. The game was the first to feature a "split screen" display to allow both players to move at the same time. Gameplay Nautilus starts with player one in control of the submarine, visible in the lower pane of the split-screen display. The joystick allows the player to move left and right or rise and sink. The player can shoot their Thunderbolt torpedoes to the right or left in the direction of travel. The primary task for the player is to move into location beside the various underwater buildings and destroy them with their torpedoes in order to expose their energy core, which can be picked up by moving over it. The player wins the level by collecting all of the cores. Player two, or the computer player in a single-player game, controls the destroyer, visible in the upper pane. The ship's primary task is to ferry repair crews from the right side of the map back to the left, dropping them into an elevator that takes them to the bottom of the ocean. From there they quickly move back towards the right through a tube on the ocean floor, instantly repairing the buildings directly above them as they pass. The destroyer also drops depth charges and Barracuda missiles that attack the submarine. The missiles track the submarine and can be killed by hitting them with five torpedoes. Frogmen with limpet mines randomly appear on the sea bed and track the submarine if it passes over them. These are relatively easy to dodge in most cases, and can be killed by shooting them five times. The most dangerous enemy is normally the construction crew, who may fix one of the buildings while the Nautilus is inside, retrieving the core. Both the Nautilus and Colossus have a sonar system that indicates the direction to the other ship. When the two are aligned vertically the display turns red and a warning horn sounds. In two-player mode the actions of the destroyer are relatively limited. The delay between dropping charges and them reaching the submarine is enough to allow the sub to destroy an average building before they arrive, so the ship cannot easily directly attack the sub in order to prevent it from winning. This forces it to act as a ferry for the repair crews. Reception Nautilus was lauded at the time of its release, with Creative Computing calling it a "tour de force", and judges at the 4th annual Arkie Awards granting it a Certificate of Merit in the category of "Most Innovative Computer Game". In an article about Synapse, an InfoWorld author noted that no one was examining their highly rated relational database program, in favour of watching a game of Nautilus being played. Grant Butenhoff for Computer Gaming World gave a positive review to the game, praising its "outstanding" graphics and intense action. References Citations Bibliography 1982 video games Asymmetrical multiplayer video games Atari 8-bit computer games Atari 8-bit computer-only games Naval video games Synapse Software games Video games developed in the United States Submarines in fiction Multiplayer and single-player video games
Nautilus (video game)
[ "Physics" ]
665
[ "Asymmetrical multiplayer video games", "Symmetry", "Asymmetry" ]
11,308,417
https://en.wikipedia.org/wiki/Vertex%20%28geometry%29
In geometry, a vertex (plural: vertices or vertexes) is a point where two or more curves, lines, or edges meet or intersect. As a consequence of this definition, the point where two lines meet to form an angle and the corners of polygons and polyhedra are vertices. Definition Of an angle The vertex of an angle is the point where two rays begin or meet, where two line segments join or meet, where two lines intersect (cross), or any appropriate combination of rays, segments, and lines that result in two straight "sides" meeting at one place. Of a polytope A vertex is a corner point of a polygon, polyhedron, or other higher-dimensional polytope, formed by the intersection of edges, faces or facets of the object. In a polygon, a vertex is called "convex" if the internal angle of the polygon (i.e., the angle formed by the two edges at the vertex with the polygon inside the angle) is less than π radians (180°, two right angles); otherwise, it is called "concave" or "reflex". More generally, a vertex of a polyhedron or polytope is convex, if the intersection of the polyhedron or polytope with a sufficiently small sphere centered at the vertex is convex, and is concave otherwise. Polytope vertices are related to vertices of graphs, in that the 1-skeleton of a polytope is a graph, the vertices of which correspond to the vertices of the polytope, and in that a graph can be viewed as a 1-dimensional simplicial complex the vertices of which are the graph's vertices. However, in graph theory, vertices may have fewer than two incident edges, which is usually not allowed for geometric vertices. There is also a connection between geometric vertices and the vertices of a curve, its points of extreme curvature: in some sense the vertices of a polygon are points of infinite curvature, and if a polygon is approximated by a smooth curve, there will be a point of extreme curvature near each polygon vertex. Of a plane tiling A vertex of a plane tiling or tessellation is a point where three or more tiles meet; generally, but not always, the tiles of a tessellation are polygons and the vertices of the tessellation are also vertices of its tiles. More generally, a tessellation can be viewed as a kind of topological cell complex, as can the faces of a polyhedron or polytope; the vertices of other kinds of complexes such as simplicial complexes are its zero-dimensional faces. Principal vertex A vertex of a simple polygon P is a principal polygon vertex if the diagonal connecting its two neighboring vertices intersects the boundary of P only at those two neighbors. There are two types of principal vertices: ears and mouths. Ears A principal vertex of a simple polygon is called an ear if that diagonal lies entirely in P (see also convex polygon). According to the two ears theorem, every simple polygon has at least two ears. Mouths A principal vertex of a simple polygon is called a mouth if that diagonal lies outside the boundary of P. Number of vertices of a polyhedron Any convex polyhedron's surface satisfies the Euler characteristic relation V - E + F = 2, where V is the number of vertices, E is the number of edges, and F is the number of faces. This equation is known as Euler's polyhedron formula. Thus the number of vertices is 2 more than the excess of the number of edges over the number of faces. For example, since a cube has 12 edges and 6 faces, the formula implies that it has eight vertices.
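A brief Python sketch (an illustrative addition, not part of the original text) applies the rearranged formula V = 2 + E - F to a few familiar convex polyhedra:

# Euler's polyhedron formula V - E + F = 2, rearranged to give the vertex count.
def vertices_from_euler(edges, faces):
    return 2 + edges - faces

# (name, number of edges, number of faces, expected number of vertices)
examples = [
    ("tetrahedron", 6, 4, 4),
    ("cube", 12, 6, 8),
    ("octahedron", 12, 8, 6),
]

for name, e, f, v in examples:
    assert vertices_from_euler(e, f) == v
    print(name, vertices_from_euler(e, f))

For the cube this reproduces the eight vertices mentioned above.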
Vertices in computer graphics In computer graphics, objects are often represented as triangulated polyhedra in which the object vertices are associated not only with three spatial coordinates but also with other graphical information necessary to render the object correctly, such as colors, reflectance properties, textures, and surface normal. These properties are used in rendering by a vertex shader, part of the vertex pipeline. See also Vertex arrangement Vertex figure References External links Euclidean geometry 3D computer graphics 0 Point (geometry)
Vertex (geometry)
[ "Mathematics" ]
848
[ "Point (geometry)" ]
11,309,798
https://en.wikipedia.org/wiki/Third-brush%20dynamo
A third-brush dynamo was a type of dynamo, an electrical generator, formerly used for battery charging on motor vehicles. It was superseded, first by a two-brush dynamo equipped with an external voltage regulator, and later by an alternator. Construction As the name implies, the machine had three brushes in contact with the commutator. One was earthed to the frame of the vehicle and another was connected (through a reverse-current cut-out) to the live terminal of the vehicle's battery. The third was connected to the field winding of the dynamo. The other end of the field winding was connected to a switch which could be adjusted (by inserting or removing resistance) to give "low" or "high" charge. This switch was sometimes combined with the vehicle's light switch so that switching on the headlights simultaneously put the dynamo in high charge mode. Disadvantages The third-brush dynamo had the advantage of simplicity but, by modern standards, it gave poor voltage regulation. This led to short battery life as a result of over-charging or under-charging. See also Amplidyne Metadyne References Third Brush Generators - Yesterday's Tractors website Electrical generators Automotive charging circuits Automotive electrics
Third-brush dynamo
[ "Physics", "Technology", "Engineering" ]
247
[ "Electrical generators", "Machines", "Automotive electrics", "Physical systems", "Electrical engineering" ]
11,310,655
https://en.wikipedia.org/wiki/Marsaglia%20polar%20method
The Marsaglia polar method is a pseudo-random number sampling method for generating a pair of independent standard normal random variables. Standard normal random variables are frequently used in computer science, computational statistics, and in particular, in applications of the Monte Carlo method. The polar method works by choosing random points (x, y) in the square −1 < x < 1, −1 < y < 1 until and then returning the required pair of normal random variables as or, equivalently, where and represent the cosine and sine of the angle that the vector (x, y) makes with x axis. Theoretical basis The underlying theory may be summarized as follows: If u is uniformly distributed in the interval 0 ≤ u < 1, then the point (cos(2πu), sin(2πu)) is uniformly distributed on the unit circumference x2 + y2 = 1, and multiplying that point by an independent random variable ρ whose distribution is will produce a point whose coordinates are jointly distributed as two independent standard normal random variables. History This idea dates back to Laplace, whom Gauss credits with finding the above by taking the square root of The transformation to polar coordinates makes evident that θ is uniformly distributed (constant density) from 0 to 2π, and that the radial distance r has density (r2 has the appropriate chi square distribution.) This method of producing a pair of independent standard normal variates by radially projecting a random point on the unit circumference to a distance given by the square root of a chi-square-2 variate is called the polar method for generating a pair of normal random variables, Practical considerations A direct application of this idea, is called the Box–Muller transform, in which the chi variate is usually generated as but that transform requires logarithm, square root, sine and cosine functions. On some processors, the cosine and sine of the same argument can be calculated in parallel using a single instruction. Notably for Intel-based machines, one can use fsincos assembler instruction or the expi instruction (available e.g. in D), to calculate complex and just separate the real and imaginary parts. Note: To explicitly calculate the complex-polar form use the following substitutions in the general form, Let and Then In contrast, the polar method here removes the need to calculate a cosine and sine. Instead, by solving for a point on the unit circle, these two functions can be replaced with the x and y coordinates normalized to the radius. In particular, a random point (x, y) inside the unit circle is projected onto the unit circumference by setting and forming the point which is a faster procedure than calculating the cosine and sine. Some researchers argue that the conditional if instruction (for rejecting a point outside of the unit circle), can make programs slower on modern processors equipped with pipelining and branch prediction. Also this procedure requires about 27% more evaluations of the underlying random number generator (only of generated points lie inside of unit circle). That random point on the circumference is then radially projected the required random distance by means of using the same s because that s is independent of the random point on the circumference and is itself uniformly distributed from 0 to 1. 
Implementation Python A simple implementation in Python:

import math
import random

def marsaglia_sample():
    while True:
        U1 = random.uniform(-1, 1)
        U2 = random.uniform(-1, 1)
        if (w := U1**2 + U2**2) < 1:
            break
    Z1 = U1 * math.sqrt(-2 * math.log(w) / w)
    Z2 = U2 * math.sqrt(-2 * math.log(w) / w)
    return Z1, Z2

Java Simple implementation in Java using the mean and standard deviation:

private static double spare;
private static boolean hasSpare = false;

public static synchronized double generateGaussian(double mean, double stdDev) {
    if (hasSpare) {
        hasSpare = false;
        return spare * stdDev + mean;
    } else {
        double u, v, s;
        do {
            u = Math.random() * 2 - 1;
            v = Math.random() * 2 - 1;
            s = u * u + v * v;
        } while (s >= 1 || s == 0);
        s = Math.sqrt(-2.0 * Math.log(s) / s);
        spare = v * s;
        hasSpare = true;
        return mean + stdDev * u * s;
    }
}

C++ A non-thread safe implementation in C++ using the mean and standard deviation:

double generateGaussian(double mean, double stdDev) {
    static double spare;
    static bool hasSpare = false;

    if (hasSpare) {
        hasSpare = false;
        return spare * stdDev + mean;
    } else {
        double u, v, s;
        do {
            u = (rand() / ((double)RAND_MAX)) * 2.0 - 1.0;
            v = (rand() / ((double)RAND_MAX)) * 2.0 - 1.0;
            s = u * u + v * v;
        } while (s >= 1.0 || s == 0.0);
        s = sqrt(-2.0 * log(s) / s);
        spare = v * s;
        hasSpare = true;
        return mean + stdDev * u * s;
    }
}

C++11 GNU GCC libstdc++'s implementation of std::normal_distribution uses the Marsaglia polar method, as quoted from herein. Julia A simple Julia implementation:

"""
    marsagliasample(N)

Generate `2N` samples from the standard normal distribution using the Marsaglia method.
"""
function marsagliasample(N)
    z = Array{Float64}(undef,N,2);
    for i in axes(z,1)
        s = Inf;
        while s > 1
            z[i,:] .= 2rand(2) .- 1;
            s = sum(abs2.(z[i,:]))
        end
        z[i,:] .*= sqrt(-2log(s)/s);
    end
    vec(z)
end

"""
    marsagliasample(n,μ,σ)

Generate `n` samples from the normal distribution with mean `μ` and standard deviation `σ` using the Marsaglia method.
"""
function marsagliasample(n,μ,σ)
    μ .+ σ*marsagliasample(cld(n,2))[1:n];
end

The for loop can be parallelized by using the Threads.@threads macro. References Monte Carlo methods Pseudorandom number generators Non-uniform random numbers
Marsaglia polar method
[ "Physics" ]
1,480
[ "Monte Carlo methods", "Computational physics" ]
11,310,861
https://en.wikipedia.org/wiki/Bergmann%20azlactone%20peptide%20synthesis
The Bergmann azlactone peptide synthesis is a classic organic synthesis process for the preparation of dipeptides. In the presence of a base, peptides are formed by aminolysis of N-carboxyanhydrides of amino acids with amino acid esters. This reaction is examined in further detail by Bailey. The resulting peptide is then protected by esters of benzyl chloroformate in order to keep the amino groups intact. This mechanism serves as a source of protection for the amino group in the amino acid. The ester will block the amino group from binding with other molecules. The last step in this reaction is the cyclization of the N-haloacylamino acids with acetic anhydride. This will result in the expected azlactone. The reaction with a second amino acid allows for the ring to open, later forming an acylated unsaturated dipeptide. The reaction proceeds in a stepwise fashion which allows for the amino group to be protected and the azlactone to be produced. Catalytic hydrogenation and hydrolysis then take place in order to produce the dipeptide. References Chemical reactions Name reactions
Bergmann azlactone peptide synthesis
[ "Chemistry" ]
252
[ "Name reactions", "nan" ]
11,310,997
https://en.wikipedia.org/wiki/Bergmann%20degradation
The Bergmann degradation is a series of chemical reactions designed to remove a single amino acid from the carboxylic acid (C-terminal) end of a peptide. First demonstrated by Max Bergmann in 1934, it is a rarely used method for sequencing peptides. The later developed Edman degradation is an improvement upon the Bergmann degradation, instead cleaving the N-terminal amino acid of peptides to produce a hydantoin containing the desired amino acid. The Bergmann degradation follows the earlier work of Bergmann and his close colleague Leonidas Zervas, combining the organic azide degradation of the Curtius rearrangement with the Bergmann-Zervas carbobenzoxy method, which they designed to occur under relatively mild conditions so as to allow peptide sequencing. A single round of the Bergmann degradation yields an aldehyde containing the sought after amino acid residue and the remaining fragment of the original peptide in amide form. The acyl azide of a peptide (1) undergoes a Curtius rearrangement in the presence of benzyl alcohol and heat(2) to give a benzyl carbamate (3). The Cbz group of intermediate 3 is removed by hydrogenolysis to give an unsubstituted amide (4) and an aldehyde (5). Mechanism The Bergmann degradation begins with benzoylation at the alpha-group of a peptide and subsequent conversion to an acyl azide. As in the Curtius rearrangement, the acyl azide, in the presence of benzyl alcohol and heat, rearranges to a highly reactive isocyanate intermediate, releasing nitrogen gas in the process. The isocyanate in turn reacts with benzyl alcohol to form a benzylurethane (also referred to as carboxybenzyl), a compound possessing a carbamate amine protecting group. Subsequent removal of the carbamate protecting group is carried out by catalytic hydrogenation in the presence of hydrochloric acid followed by addition to boiling water, yielding an unstable intermediate that rapidly rearranges to release carbon dioxide, driving the reaction forward. This leads to further rearrangement and subsequent hydrolysis, ultimately resulting in the formation of an aldehyde bearing the next amino acid residue in the sequencing series and the expulsion of the residual peptide in amide form. A mechanism has been proposed which depicts catalytic hydrogenation of the benzylurethane as a concerted rearrangement that releases carbon dioxide concomitantly with formation of the amide. Preparation of azide The aforementioned conversion to acyl azide has been carried out multifariously; Bergmann utilized methyl ester and hydrazide, whereas more recent attempts have designed methods such as: nitrosylation of N-formylaminoacyl hydrazide and subsequent substitution by sodium azide, reaction of a carboxylic acid with diphenyl phosphorazidate, triethylamine, and a hydroxyl component, and reaction between TMS azide and the anhydride of an amino acid. Applications The Bergmann degradation is intended for and has been used as a method for peptide sequencing. It was also proposed for use in cleaving the 3,4-bond of the penicillin nucleus. The compound 2,2-dimethyl-6-phthalimido-3-penamyl isocyanate was arrived at through various means, including the Curtius rearrangement, and it was envisioned that it could undergo the Bergmann degradation to form the desired aldehyde as well as the urea by-product. Though the Bergmann degradation was indeed possible, it was discovered that simple dilute acid hydrolysis would suffice in forming the desired product. 
Curtius rearrangement The Bergmann degradation makes use of the azide degradation described by the Curtius rearrangement. Curtius also attempted to degrade benzoylated amino acids; however, his method involved splitting the carbamate with strongly energetic treatment with acids, which led to decomposition of the resultant aldehyde and acid amides. This convinced Bergmann that Curtius' azide degradation could be followed by treatment with benzyl alcohol (his carbobenzoxy method) to isolate the resultant amino acid aldehyde and residual peptide amide for sequencing purposes. Edman degradation The Edman degradation is an alternative method for peptide sequencing that cleaves amino acid residues from the N-terminus of a peptide. In 1950 Edman designed a reaction with phenylthiocyanate (the idea for which was borrowed from a 1927 study by Bergmann, Kann and Miekeley) to give phenylthiocarbamyl peptides followed by hydrolysis under relatively mild conditions to cleave the N-terminal amino acid as a phenylthiohydantoin. Phenylthiohydantoin is stable enough to undergo various sequencing procedures such as those which involve chromatography and mass spectrometry. This was an improvement on an earlier method proposed by Abderhalden and Brockmann in 1930 that demonstrated N-terminal amino acid conversion to a hydantoin under stronger hydrolytic conditions, where some cleavage of the residual peptide proved problematic. The primary advantage the Edman degradation has over the Bergmann degradation is the ease with which the residual peptide can re-enter the process due to retention of its structure throughout sequential cleaving. Repetition of the Bergmann degradation is presumably not as straightforward, as the remaining peptide is in amide form. See also Curtius rearrangement Edman degradation References Organic redox reactions Rearrangement reactions Degradation reactions Name reactions
Bergmann degradation
[ "Chemistry" ]
1,167
[ "Organic redox reactions", "Organic reactions", "Name reactions", "Degradation reactions", "Rearrangement reactions" ]
658,183
https://en.wikipedia.org/wiki/Industrial%20process%20control
Industrial process control (IPC) or simply process control is a system used in modern manufacturing which uses the principles of control theory and physical industrial control systems to monitor, control and optimize continuous industrial production processes using control algorithms. This ensures that the industrial machines run smoothly and safely in factories and efficiently use energy to transform raw materials into high-quality finished products with reliable consistency while reducing energy waste and economic costs, something which could not be achieved purely by human manual control. In IPC, control theory provides the theoretical framework to understand system dynamics, predict outcomes and design control strategies to ensure predetermined objectives, utilizing concepts like feedback loops, stability analysis and controller design. On the other hand, the physical apparatus of IPC, based on automation technologies, consists of several components. Firstly, a network of sensors continuously measure various process variables (such as temperature, pressure, etc.) and product quality variables. A programmable logic controller (PLC, for smaller, less complex processes) or a distributed control system (DCS, for large-scale or geographically dispersed processes) analyzes this sensor data transmitted to it, compares it to predefined setpoints using a set of instructions or a mathematical model called the control algorithm and then, in case of any deviation from these setpoints (e.g., temperature exceeding setpoint), makes quick corrective adjustments through actuators such as valves (e.g. cooling valve for temperature control), motors or heaters to guide the process back to the desired operational range. This creates a continuous closed-loop cycle of measurement, comparison, control action, and re-evaluation which guarantees that the process remains within established parameters. The HMI (Human-Machine Interface) acts as the "control panel" for the IPC system where small number of human operators can monitor the process and make informed decisions regarding adjustments. IPCs can range from controlling the temperature and level of a single process vessel (controlled environment tank for mixing, separating, reacting, or storing materials in industrial processes.) to a complete chemical processing plant with several thousand control feedback loops. IPC provides several critical benefits to manufacturing companies. By maintaining a tight control over key process variables, it helps reduce energy use, minimize waste and shorten downtime for peak efficiency and reduced costs. It ensures consistent and improved product quality with little variability, which satisfies the customers and strengthens the company's reputation. It improves safety by detecting and alerting human operators about potential issues early, thus preventing accidents, equipment failures, process disruptions and costly downtime. Analyzing trends and behaviors in the vast amounts of data collected real-time helps engineers identify areas of improvement, refine control strategies and continuously enhance production efficiency using a data-driven approach. IPC is used across a wide range of industries where precise control is important. The applications can range from controlling the temperature and level of a single process vessel, to a complete chemical processing plant with several thousand control loops. In automotive manufacturing, IPC ensures consistent quality by meticulously controlling processes like welding and painting. 
Mining operations are optimized with IPC monitoring ore crushing and adjusting conveyor belt speeds for maximum output. Dredging benefits from precise control of suction pressure, dredging depth and sediment discharge rate by IPC, ensuring efficient and sustainable practices. Pulp and paper production leverages IPC to regulate chemical processes (e.g., pH and bleach concentration) and automate paper machine operations to control paper sheet moisture content and drying temperature for consistent quality. In chemical plants, it ensures the safe and efficient production of chemicals by controlling temperature, pressure and reaction rates. Oil refineries use it to smoothly convert crude oil into gasoline and other petroleum products. In power plants, it helps maintain stable operating conditions necessary for a continuous electricity supply. In food and beverage production, it helps ensure consistent texture, safety and quality. Pharmaceutical companies relies on it to produce life-saving drugs safely and effectively. The development of large industrial process control systems has been instrumental in enabling the design of large high volume and complex processes, which could not be otherwise economically or safely operated. Historical milestones in the development of industrial process control began in ancient civilizations, where water level control devices were used to regulate water flow for irrigation and water clocks. During the Industrial Revolution in the 18th century, there was a growing need for precise control over boiler pressure in steam engines. In the 1930s, pneumatic and electronic controllers, such as PID (Proportional-Integral-Derivative) controllers, were breakthrough innovations that laid the groundwork for modern control theory. The late 20th century saw the rise of programmable logic controllers (PLCs) and distributed control systems (DCS), while the advent of microprocessors further revolutionized IPC by enabling more complex control algorithms. History Early process control breakthroughs came most frequently in the form of water control devices. Ktesibios of Alexandria is credited for inventing float valves to regulate water level of water clocks in the 3rd century BC. In the 1st century AD, Heron of Alexandria invented a water valve similar to the fill valve used in modern toilets. Later process controls inventions involved basic physics principles. In 1620, Cornelis Drebbel invented a bimetallic thermostat for controlling the temperature in a furnace. In 1681, Denis Papin discovered the pressure inside a vessel could be regulated by placing weights on top of the vessel lid. In 1745, Edmund Lee created the fantail to improve windmill efficiency; a fantail was a smaller windmill placed 90° of the larger fans to keep the face of the windmill pointed directly into the oncoming wind. With the dawn of the Industrial Revolution in the 1760s, process controls inventions were aimed to replace human operators with mechanized processes. In 1784, Oliver Evans created a water-powered flourmill which operated using buckets and screw conveyors. Henry Ford applied the same theory in 1910 when the assembly line was created to decrease human intervention in the automobile production process. For continuously variable process control it was not until 1922 that a formal control law for what we now call PID control or three-term control was first developed using theoretical analysis, by Russian American engineer Nicolas Minorsky. 
Minorsky was researching and designing automatic ship steering for the US Navy and based his analysis on observations of a helmsman. He noted the helmsman steered the ship based not only on the current course error, but also on past error, as well as the current rate of change; this was then given a mathematical treatment by Minorsky. His goal was stability, not general control, which simplified the problem significantly. While proportional control provided stability against small disturbances, it was insufficient for dealing with a steady disturbance, notably a stiff gale (due to steady-state error), which required adding the integral term. Finally, the derivative term was added to improve stability and control. Development of modern process control operations Process control of large industrial plants has evolved through many stages. Initially, control would be from panels local to the process plant. However this required a large manpower resource to attend to these dispersed panels, and there was no overall view of the process. The next logical development was the transmission of all plant measurements to a permanently-staffed central control room. Effectively this was the centralization of all the localized panels, with the advantages of lower manning levels and easier overview of the process. Often the controllers were behind the control room panels, and all automatic and manual control outputs were transmitted back to plant. However, whilst providing a central control focus, this arrangement was inflexible as each control loop had its own controller hardware, and continual operator movement within the control room was required to view different parts of the process. With the coming of electronic processors and graphic displays it became possible to replace these discrete controllers with computer-based algorithms, hosted on a network of input/output racks with their own control processors. These could be distributed around the plant, and communicate with the graphic display in the control room or rooms. The distributed control system (DCS) was born. The introduction of DCSs allowed easy interconnection and re-configuration of plant controls such as cascaded loops and interlocks, and easy interfacing with other production computer systems. It enabled sophisticated alarm handling, introduced automatic event logging, removed the need for physical records such as chart recorders, allowed the control racks to be networked and thereby located locally to plant to reduce cabling runs, and provided high level overviews of plant status and production levels. Hierarchy The accompanying diagram is a general model which shows functional manufacturing levels in a large process using processor and computer-based control. Referring to the diagram: Level 0 contains the field devices such as flow and temperature sensors (process value readings - PV), and final control elements (FCE), such as control valves; Level 1 contains the industrialized Input/Output (I/O) modules, and their associated distributed electronic processors; Level 2 contains the supervisory computers, which collate information from processor nodes on the system, and provide the operator control screens; Level 3 is the production control level, which does not directly control the process, but is concerned with monitoring production and monitoring targets; Level 4 is the production scheduling level. 
Control model To determine the fundamental model for any process, the inputs and outputs of the system are defined differently than for other chemical processes. The balance equations are defined by the control inputs and outputs rather than the material inputs. The control model is a set of equations used to predict the behavior of a system and can help determine what the response to change will be. The state variable (x) is a measurable variable that is a good indicator of the state of the system, such as temperature (energy balance), volume (mass balance) or concentration (component balance). Input variable (u) is a specified variable that commonly include flow rates. The entering and exiting flows are both considered control inputs. The control input can be classified as a manipulated, disturbance, or unmonitored variable. Parameters (p) are usually a physical limitation and something that is fixed for the system, such as the vessel volume or the viscosity of the material. Output (y) is the metric used to determine the behavior of the system. The control output can be classified as measured, unmeasured, or unmonitored. Types Processes can be characterized as batch, continuous, or hybrid. Batch applications require that specific quantities of raw materials be combined in specific ways for particular duration to produce an intermediate or end result. One example is the production of adhesives and glues, which normally require the mixing of raw materials in a heated vessel for a period of time to form a quantity of end product. Other important examples are the production of food, beverages and medicine. Batch processes are generally used to produce a relatively low to intermediate quantity of product per year (a few pounds to millions of pounds). A continuous physical system is represented through variables that are smooth and uninterrupted in time. The control of the water temperature in a heating jacket, for example, is an example of continuous process control. Some important continuous processes are the production of fuels, chemicals and plastics. Continuous processes in manufacturing are used to produce very large quantities of product per year (millions to billions of pounds). Such controls use feedback such as in the PID controller A PID Controller includes proportional, integrating, and derivative controller functions. Applications having elements of batch and continuous process control are often called hybrid applications. Control loops The fundamental building block of any industrial control system is the control loop, which controls just one process variable. An example is shown in the accompanying diagram, where the flow rate in a pipe is controlled by a PID controller, assisted by what is effectively a cascaded loop in the form of a valve servo-controller to ensure correct valve positioning. Some large systems may have several hundreds or thousands of control loops. In complex processes the loops are interactive, so that the operation of one loop may affect the operation of another. The system diagram for representing control loops is a Piping and instrumentation diagram. Commonly used control systems include programmable logic controller (PLC), Distributed Control System (DCS) or SCADA. A further example is shown. If a control valve were used to hold level in a tank, the level controller would compare the equivalent reading of a level sensor to the level setpoint and determine whether more or less valve opening was necessary to keep the level constant. 
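The following is a minimal Python sketch of such a level-control loop (the tank model, controller gains, setpoint, and flow numbers are illustrative assumptions, not values from this article); it shows the measure-compare-correct cycle of a PID controller driving a valve to hold a tank level:

# Hedged sketch: a discrete PID loop holding liquid level in a tank.
# All numerical values below are assumptions chosen for illustration.
def simulate_level_loop(setpoint=2.0, dt=1.0, steps=60):
    kp, ki, kd = 0.8, 0.1, 0.05          # assumed proportional, integral, derivative gains
    level = 1.0                           # assumed initial tank level
    integral, prev_error = 0.0, 0.0
    outflow = 0.3                         # assumed constant demand drawn from the tank
    for _ in range(steps):
        error = setpoint - level                      # compare measurement to setpoint
        integral += error * dt
        derivative = (error - prev_error) / dt
        valve = kp * error + ki * integral + kd * derivative
        valve = min(max(valve, 0.0), 1.0)             # valve opening limited to 0..100%
        inflow = valve * 0.6                          # assumed inflow at a fully open valve
        level += (inflow - outflow) * dt              # simple mass balance on the tank
        prev_error = error
    return level

print(simulate_level_loop())   # settles near the 2.0 setpoint with these assumed numbers

With these assumed numbers the simulated level converges to the setpoint; industrial implementations add refinements such as anti-windup, filtering, and valve dynamics.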
A cascaded flow controller could then calculate the change in the valve position. Economic advantages The economic nature of many products manufactured in batch and continuous processes require highly efficient operation due to thin margins. The competing factor in process control is that products must meet certain specifications in order to be satisfactory. These specifications can come in two forms: a minimum and maximum for a property of the material or product, or a range within which the property must be. All loops are susceptible to disturbances and therefore a buffer must be used on process set points to ensure disturbances do not cause the material or product to go out of specifications. This buffer comes at an economic cost (i.e. additional processing, maintaining elevated or depressed process conditions, etc.). Process efficiency can be enhanced by reducing the margins necessary to ensure product specifications are met. This can be done by improving the control of the process to minimize the effect of disturbances on the process. The efficiency is improved in a two step method of narrowing the variance and shifting the target. Margins can be narrowed through various process upgrades (i.e. equipment upgrades, enhanced control methods, etc.). Once margins are narrowed, an economic analysis can be done on the process to determine how the set point target is to be shifted. Less conservative process set points lead to increased economic efficiency. Effective process control strategies increase the competitive advantage of manufacturers who employ them. See also References Further reading External links A Complete Guide to Statistical Process Control The Michigan Chemical Engineering Process Dynamics and Controls Open Textbook PID control virtual laboratory, free video tutorials, on-line simulators, advanced process control schemes Chemical process engineering Control theory Statistical process control Process engineering
Industrial process control
[ "Chemistry", "Mathematics", "Engineering" ]
2,870
[ "Process engineering", "Statistical process control", "Applied mathematics", "Control theory", "Chemical engineering", "Mechanical engineering by discipline", "Engineering statistics", "Chemical process engineering", "Dynamical systems" ]
658,501
https://en.wikipedia.org/wiki/Savitch%27s%20theorem
In computational complexity theory, Savitch's theorem, proved by Walter Savitch in 1970, gives a relationship between deterministic and non-deterministic space complexity. It states that for any function f ∈ Ω(log n), NSPACE(f(n)) ⊆ DSPACE(f(n)²). In other words, if a nondeterministic Turing machine can solve a problem using f(n) space, a deterministic Turing machine can solve the same problem in the square of that space bound. Although it seems that nondeterminism may produce exponential gains in time (as formalized in the unproven exponential time hypothesis), Savitch's theorem shows that it has a markedly more limited effect on space requirements. Proof The proof relies on an algorithm for STCON, the problem of determining whether there is a path between two vertices in a directed graph, which runs in space O((log n)²) for a graph with n vertices. The basic idea of the algorithm is to solve recursively a somewhat more general problem, testing the existence of a path from a vertex s to another vertex t that uses at most k edges, for a parameter k given as input. STCON is a special case of this problem where k is set large enough to impose no restriction on the paths (for instance, equal to the total number of vertices in the graph, or any larger value). To test for a k-edge path from s to t, a deterministic algorithm can iterate through all vertices u, and recursively search for paths of half the length from s to u and from u to t. This algorithm can be expressed in pseudocode (in Python syntax) as follows:

def stcon(s, t) -> bool:
    """Test whether a path of any length exists from s to t"""
    return k_edge_path(s, t, n)  # n is the number of vertices

def k_edge_path(s, t, k) -> bool:
    """Test whether a path of length at most k exists from s to t"""
    if k == 0:
        return s == t
    if k == 1:
        return s == t or (s, t) in edges
    for u in vertices:
        if k_edge_path(s, u, floor(k / 2)) and k_edge_path(u, t, ceil(k / 2)):
            return True
    return False

Because each recursive call halves the parameter k, the number of levels of recursion is O(log n). Each level requires O(log n) bits of storage for its function arguments and local variables: k and the vertices s, t, and u require O(log n) bits each. The total auxiliary space complexity is thus O((log n)²). The input graph is considered to be represented in a separate read-only memory and does not contribute to this auxiliary space bound. Alternatively, it may be represented as an implicit graph. Although described above in the form of a program in a high-level language, the same algorithm may be implemented with the same asymptotic space bound on a Turing machine. This algorithm can be applied to an implicit graph whose vertices represent the configurations of a nondeterministic Turing machine and its tape, running within a given space bound f(n). The edges of this graph represent the nondeterministic transitions of the machine, s is set to the initial configuration of the machine, and t is set to a special vertex representing all accepting halting states. In this case, the algorithm returns true when the machine has a nondeterministic accepting path, and false otherwise. The number of configurations in this graph is 2^O(f(n)), from which it follows that applying the algorithm to this implicit graph uses space O(f(n)²). Thus by deciding connectivity in a graph representing nondeterministic Turing machine configurations, one can decide membership in the language recognized by that machine, in space proportional to the square of the space used by the Turing machine.
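As a usage illustration (the small graph below is an assumed example, not from the article), the two functions in the proof above can be exercised on an explicit directed graph once floor and ceil are imported and the globals they reference are defined:

from math import floor, ceil   # required by k_edge_path above

# Assumed example graph: edges 0 -> 1, 1 -> 2, 2 -> 3 and 1 -> 3.
vertices = [0, 1, 2, 3]
edges = {(0, 1), (1, 2), (2, 3), (1, 3)}
n = len(vertices)

print(stcon(0, 3))  # True: for example via the two-edge path 0 -> 1 -> 3
print(stcon(3, 0))  # False: no edge leaves vertex 3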
Corollaries Some important corollaries of the theorem include: PSPACE = NPSPACE That is, the languages that can be recognized by deterministic polynomial-space Turing machines and nondeterministic polynomial-space Turing machines are the same. This follows directly from the fact that the square of a polynomial function is still a polynomial function. It is believed that a similar relationship does not exist between the polynomial time complexity classes, P and NP, although this is still an open question. NL ⊆ L2 That is, all languages that can be solved nondeterministically in logarithmic space can be solved deterministically in the complexity class This follows from the fact that STCON is NL-complete. See also References Sources External links Lance Fortnow, Foundations of Complexity, Lesson 18: Savitch's Theorem. Accessed 09/09/09. Richard J. Lipton, Savitch’s Theorem. Gives a historical account on how the proof was discovered. Structural complexity theory Theorems in computational complexity theory
Savitch's theorem
[ "Mathematics" ]
973
[ "Theorems in computational complexity theory", "Theorems in discrete mathematics" ]
658,651
https://en.wikipedia.org/wiki/Polynomial%20hierarchy
In computational complexity theory, the polynomial hierarchy (sometimes called the polynomial-time hierarchy) is a hierarchy of complexity classes that generalize the classes NP and co-NP. Each class in the hierarchy is contained within PSPACE. The hierarchy can be defined using oracle machines or alternating Turing machines. It is a resource-bounded counterpart to the arithmetical hierarchy and analytical hierarchy from mathematical logic. The union of the classes in the hierarchy is denoted PH. Classes within the hierarchy have complete problems (with respect to polynomial-time reductions) that ask if quantified Boolean formulae hold, for formulae with restrictions on the quantifier order. It is known that equality between classes on the same level or consecutive levels in the hierarchy would imply a "collapse" of the hierarchy to that level. Definitions There are multiple equivalent definitions of the classes of the polynomial hierarchy. Oracle definition For the oracle definition of the polynomial hierarchy, define where P is the set of decision problems solvable in polynomial time. Then for i ≥ 0 define where is the set of decision problems solvable in polynomial time by a Turing machine augmented by an oracle for some complete problem in class A; the classes and are defined analogously. For example, , and is the class of problems solvable in polynomial time by a deterministic Turing machine with an oracle for some NP-complete problem. Quantified Boolean formulae definition For the existential/universal definition of the polynomial hierarchy, let be a language (i.e. a decision problem, a subset of {0,1}*), let be a polynomial, and define where is some standard encoding of the pair of binary strings x and w as a single binary string. The language L represents a set of ordered pairs of strings, where the first string x is a member of , and the second string w is a "short" () witness testifying that x is a member of . In other words, if and only if there exists a short witness w such that . Similarly, define Note that De Morgan's laws hold: and , where Lc is the complement of L. Let be a class of languages. Extend these operators to work on whole classes of languages by the definition Again, De Morgan's laws hold: and , where . The classes NP and co-NP can be defined as , and , where P is the class of all feasibly (polynomial-time) decidable languages. The polynomial hierarchy can be defined recursively as Note that , and . This definition reflects the close connection between the polynomial hierarchy and the arithmetical hierarchy, where R and RE play roles analogous to P and NP, respectively. The analytic hierarchy is also defined in a similar way to give a hierarchy of subsets of the real numbers. Alternating Turing machines definition An alternating Turing machine is a non-deterministic Turing machine with non-final states partitioned into existential and universal states. It is eventually accepting from its current configuration if: it is in an existential state and can transition into some eventually accepting configuration; or, it is in a universal state and every transition is into some eventually accepting configuration; or, it is in an accepting state. We define to be the class of languages accepted by an alternating Turing machine in polynomial time such that the initial state is an existential state and every path the machine can take swaps at most k – 1 times between existential and universal states. We define similarly, except that the initial state is a universal state. 
If we omit the requirement of at most k – 1 swaps between the existential and universal states, so that we only require that our alternating Turing machine runs in polynomial time, then we have the definition of the class AP, which is equal to PSPACE. Relations between classes in the polynomial hierarchy The union of all classes in the polynomial hierarchy is the complexity class PH. The definitions imply the relations: Unlike the arithmetic and analytic hierarchies, whose inclusions are known to be proper, it is an open question whether any of these inclusions are proper, though it is widely believed that they all are. If any , or if any , then the hierarchy collapses to level k: for all , . In particular, we have the following implications involving unsolved problems: P = NP if and only if P = PH. If NP = co-NP then NP = PH. (co-NP is .) The case in which NP = PH is also termed as a collapse of the PH to the second level. The case P = NP corresponds to a collapse of PH to P. The question of collapse to the first level is generally thought to be extremely difficult. Most researchers do not believe in a collapse, even to the second level. Relationships to other classes The polynomial hierarchy is an analogue (at much lower complexity) of the exponential hierarchy and arithmetical hierarchy. It is known that PH is contained within PSPACE, but it is not known whether the two classes are equal. One useful reformulation of this problem is that PH = PSPACE if and only if second-order logic over finite structures gains no additional power from the addition of a transitive closure operator over relations of relations (i.e., over the second-order variables). If the polynomial hierarchy has any complete problems, then it has only finitely many distinct levels. Since there are PSPACE-complete problems, we know that if PSPACE = PH, then the polynomial hierarchy must collapse, since a PSPACE-complete problem would be a -complete problem for some k. Each class in the polynomial hierarchy contains -complete problems (problems complete under polynomial-time many-one reductions). Furthermore, each class in the polynomial hierarchy is closed under -reductions: meaning that for a class in the hierarchy and a language , if , then as well. These two facts together imply that if is a complete problem for , then , and . For instance, . In other words, if a language is defined based on some oracle in , then we can assume that it is defined based on a complete problem for . Complete problems therefore act as "representatives" of the class for which they are complete. The Sipser–Lautemann theorem states that the class BPP is contained in the second level of the polynomial hierarchy. Kannan's theorem states that for any k, is not contained in SIZE(nk). Toda's theorem states that the polynomial hierarchy is contained in P#P. Problems See also EXPTIME Exponential hierarchy Arithmetic hierarchy References General references A. R. Meyer and L. J. Stockmeyer. The Equivalence Problem for Regular Expressions with Squaring Requires Exponential Space. In Proceedings of the 13th IEEE Symposium on Switching and Automata Theory, pp. 125–129, 1972. The paper that introduced the polynomial hierarchy. L. J. Stockmeyer. The polynomial-time hierarchy. Theoretical Computer Science, vol.3, pp. 1–22, 1976. C. Papadimitriou. Computational Complexity. Addison-Wesley, 1994. Chapter 17. Polynomial hierarchy, pp. 409–438. Section 7.2: The Polynomial Hierarchy, pp. 161–167. Citations Structural complexity theory Hierarchy Complexity classes
Polynomial hierarchy
[ "Mathematics" ]
1,465
[ "Mathematical logic", "Mathematical logic hierarchies" ]
658,686
https://en.wikipedia.org/wiki/Sinistral%20and%20dextral
Sinistral and dextral, in some scientific fields, are the two types of chirality ("handedness") or relative direction. The terms are derived from the Latin words for "left" (sinister) and "right" (dexter). Other disciplines use different terms (such as dextro- and laevo-rotary in chemistry, or clockwise and anticlockwise in physics) or simply use left and right (as in anatomy). Relative direction and chirality are distinct concepts. Relative direction is from the point of view of the observer; a completely symmetric object has a left side and a right side, from the observer's point of view, if the top and bottom and direction of observation are defined. Chirality, however, is observer-independent: no matter how one looks at a right-hand screw thread, it remains different from a left-hand screw thread. Therefore, a symmetric object has sinistral and dextral directions arbitrarily defined by the position of the observer, while an asymmetric object that shows chirality may have sinistral and dextral directions defined by characteristics of the object, regardless of the position of the observer. Biology Gastropods Because the coiled shells of gastropods are asymmetric, they possess a quality called chirality–the "handedness" of an asymmetric structure. Over 90% of gastropod species have shells in which the direction of the coil is dextral (right-handed). A small minority of species and genera have shells in which the coils are almost always sinistral (left-handed). Very few species show an even mixture of dextral and sinistral individuals (for example, Amphidromus perversus). Flatfish The most obvious characteristic of flatfish, other than their flatness, is their asymmetric morphology: both eyes are on the same side of the head in the adult fish. In some families of flatfish, the eyes are always on the right side of the body (dextral or right-eyed flatfish), and in others, they are always on the left (sinistral or left-eyed flatfish). Primitive spiny turbots include equal numbers of right- and left-sided individuals, and are generally more symmetric than other families. Geology In geology, the terms sinistral and dextral refer to the horizontal component of the movement of blocks on either side of a fault or the sense of movement within a shear zone. These are terms of relative direction, as the movement of the blocks is described relative to each other when viewed from above. Movement is sinistral (left-handed) if the block on the other side of the fault moves to the left, or if straddling the fault the left side moves toward the observer. Movement is dextral (right-handed) if the block on the other side of the fault moves to the right, or if straddling the fault the right side moves toward the observer. See also Dexter and sinister, as used in heraldry Helicity (disambiguation) Jeremy (snail) Laterality Left and right (disambiguation) Symmetry Terms of orientation References External links Chirality Orientation (geometry)
Sinistral and dextral
[ "Physics", "Chemistry", "Mathematics", "Biology" ]
669
[ "Pharmacology", "Origin of life", "Biochemistry", "Stereochemistry", "Chirality", "Topology", "Space", "Geometry", "Asymmetry", "Biological hypotheses", "Spacetime", "Orientation (geometry)", "Symmetry" ]
658,955
https://en.wikipedia.org/wiki/Quantum%20optics
Quantum optics is a branch of atomic, molecular, and optical physics and quantum chemistry dealing with how individual quanta of light, known as photons, interact with atoms and molecules. It includes the study of the particle-like properties of photons. Photons have been used to test many of the counter-intuitive predictions of quantum mechanics, such as entanglement and teleportation, and are a useful resource for quantum information processing. History Light propagating in a restricted volume of space has its energy and momentum quantized according to an integer number of particles known as photons. Quantum optics studies the nature and effects of light as quantized photons. The first major development leading to that understanding was the correct modeling of the blackbody radiation spectrum by Max Planck in 1899 under the hypothesis of light being emitted in discrete units of energy. The photoelectric effect was further evidence of this quantization as explained by Albert Einstein in a 1905 paper, a discovery for which he was to be awarded the Nobel Prize in 1921. Niels Bohr showed that the hypothesis of optical radiation being quantized corresponded to his theory of the quantized energy levels of atoms, and the spectrum of discharge emission from hydrogen in particular. The understanding of the interaction between light and matter following these developments was crucial for the development of quantum mechanics as a whole. However, the subfields of quantum mechanics dealing with matter-light interaction were principally regarded as research into matter rather than into light; hence one rather spoke of atom physics and quantum electronics in 1960. Laser science—i.e., research into principles, design and application of these devices—became an important field, and the quantum mechanics underlying the laser's principles was studied now with more emphasis on the properties of light, and the name quantum optics became customary. As laser science needed good theoretical foundations, and also because research into these soon proved very fruitful, interest in quantum optics rose. Following the work of Dirac in quantum field theory, John R. Klauder, George Sudarshan, Roy J. Glauber, and Leonard Mandel applied quantum theory to the electromagnetic field in the 1950s and 1960s to gain a more detailed understanding of photodetection and the statistics of light (see degree of coherence). This led to the introduction of the coherent state as a concept that addressed variations between laser light, thermal light, exotic squeezed states, etc. as it became understood that light cannot be fully described just referring to the electromagnetic fields describing the waves in the classical picture. In 1977, Kimble et al. demonstrated a single atom emitting one photon at a time, further compelling evidence that light consists of photons. Previously unknown quantum states of light with characteristics unlike classical states, such as squeezed light were subsequently discovered. Development of short and ultrashort laser pulses—created by Q switching and modelocking techniques—opened the way to the study of what became known as ultrafast processes. Applications for solid state research (e.g. Raman spectroscopy) were found, and mechanical forces of light on matter were studied. The latter led to levitating and positioning clouds of atoms or even small biological samples in an optical trap or optical tweezers by laser beam. 
This, along with Doppler cooling and Sisyphus cooling, was the crucial technology needed to achieve the celebrated Bose–Einstein condensation. Other remarkable results are the demonstration of quantum entanglement, quantum teleportation, and quantum logic gates. The latter are of much interest in quantum information theory, a subject that partly emerged from quantum optics, partly from theoretical computer science. Today's fields of interest among quantum optics researchers include parametric down-conversion, parametric oscillation, even shorter (attosecond) light pulses, use of quantum optics for quantum information, manipulation of single atoms, Bose–Einstein condensates, their application, and how to manipulate them (a sub-field often called atom optics), coherent perfect absorbers, and much more. Topics classified under the term of quantum optics, especially as applied to engineering and technological innovation, often go under the modern term photonics. Several Nobel Prizes have been awarded for work in quantum optics. These were awarded: in 2022, Alain Aspect, John Clauser and Anton Zeilinger "for experiments with entangled photons, establishing the violation of Bell inequalities and pioneering quantum information science". in 2012, Serge Haroche and David J. Wineland "for ground-breaking experimental methods that enable measuring & manipulation of individual quantum systems". in 2005, Theodor W. Hänsch, Roy J. Glauber and John L. Hall in 2001, Wolfgang Ketterle, Eric Allin Cornell and Carl Wieman in 1997, Steven Chu, Claude Cohen-Tannoudji and William Daniel Phillips for laser cooling Concepts According to quantum theory, light may be considered not only to be as an electro-magnetic wave but also as a "stream" of particles called photons, which travel with c, the speed of light in vacuum. These particles should not be considered to be classical billiard balls, but as quantum mechanical particles described by a wavefunction spread over a finite region. Each particle carries one quantum of energy, equal to hf, where h is the Planck constant and f is the frequency of the light. That energy possessed by a single photon corresponds exactly to the transition between discrete energy levels in an atom (or other system) that emitted the photon; material absorption of a photon is the reverse process. Einstein's explanation of spontaneous emission also predicted the existence of stimulated emission, the principle upon which the laser rests. However, the actual invention of the maser (and laser) many years later was dependent on a method to produce a population inversion. The use of statistical mechanics is fundamental to the concepts of quantum optics: light is described in terms of field operators for creation and annihilation of photons—i.e. in the language of quantum electrodynamics. A frequently encountered state of the light field is the coherent state, as introduced by E.C. George Sudarshan in 1960. This state, which can be used to approximately describe the output of a single-frequency laser well above the laser threshold, exhibits Poissonian photon number statistics. Via certain nonlinear interactions, a coherent state can be transformed into a squeezed coherent state, by applying a squeezing operator that can exhibit super- or sub-Poissonian photon statistics. Such light is called squeezed light. Other important quantum aspects are related to correlations of photon statistics between different beams. 
For example, spontaneous parametric down-conversion can generate so-called 'twin beams', where (ideally) each photon of one beam is associated with a photon in the other beam. Atoms are considered as quantum mechanical oscillators with a discrete energy spectrum, with the transitions between the energy eigenstates being driven by the absorption or emission of light according to Einstein's theory. For solid state matter, one uses the energy band models of solid state physics. This is important for understanding how light is detected by solid-state devices, commonly used in experiments. Quantum electronics Quantum electronics is a term that was used mainly between the 1950s and 1970s to denote the area of physics dealing with the effects of quantum mechanics on the behavior of electrons in matter, together with their interactions with photons. Today, it is rarely considered a sub-field in its own right, and it has been absorbed by other fields. Solid state physics regularly takes quantum mechanics into account, and is usually concerned with electrons. Specific applications of quantum mechanics in electronics is researched within semiconductor physics. The term also encompassed the basic processes of laser operation, which is today studied as a topic in quantum optics. Usage of the term overlapped early work on the quantum Hall effect and quantum cellular automata. See also Atomic, molecular, and optical physics Attophysics Nonclassical light Optomechanics Quantum control Optical phase space Optical physics Optics Quantization of the electromagnetic field Two-state quantum system Spinplasmonics Valleytronics Notes References The Nobel Prize in Physics 2005 Further reading L. Mandel, E. Wolf Optical Coherence and Quantum Optics (Cambridge 1995). D. F. Walls and G. J. Milburn Quantum Optics (Springer 1994). Crispin Gardiner and Peter Zoller, Quantum Noise (Springer 2004). H.M. Moya-Cessa and F. Soto-Eguibar, Introduction to Quantum Optics (Rinton Press 2011). M. O. Scully and M. S. Zubairy Quantum Optics (Cambridge 1997). W. P. Schleich Quantum Optics in Phase Space (Wiley 2001). External links An introduction to quantum optics of the light field Encyclopedia of laser physics and technology, with content on quantum optics (particularly quantum noise in lasers), by Rüdiger Paschotta. Qwiki – a quantum physics wiki devoted to providing technical resources for practicing quantum physicists. Quantiki – a free-content WWW resource in quantum information science that anyone can edit. Various Quantum Optics Reports Optics
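As a small numerical companion to the photon-statistics language in the Concepts section (Poissonian statistics for an ideal coherent state, super- or sub-Poissonian otherwise), the following Python sketch computes the Mandel Q parameter for a coherent and a thermal distribution with the same mean photon number. The truncation at 80 photons and the chosen mean are arbitrary illustration values.

```python
import numpy as np
from math import exp, factorial

def mandel_q(p):
    """Mandel Q parameter of a photon-number distribution p[n]:
    Q = (variance - mean) / mean.  Q = 0 is Poissonian (ideal coherent
    state), Q > 0 super-Poissonian (e.g. thermal light), and Q < 0
    sub-Poissonian, which has no classical counterpart."""
    p = np.asarray(p, dtype=float)
    p = p / p.sum()                      # guard against truncation error
    n = np.arange(len(p))
    mean = np.sum(n * p)
    var = np.sum(n**2 * p) - mean**2
    return (var - mean) / mean

nbar, nmax = 4.0, 80
n = np.arange(nmax)
p_coherent = np.array([exp(-nbar) * nbar**k / factorial(k) for k in range(nmax)])  # Poissonian
p_thermal = (1.0 / (1.0 + nbar)) * (nbar / (1.0 + nbar))**n                        # Bose-Einstein

print(round(mandel_q(p_coherent), 3))   # ~0.0, Poissonian
print(round(mandel_q(p_thermal), 3))    # ~nbar, strongly super-Poissonian
```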
Quantum optics
[ "Physics", "Chemistry" ]
1,864
[ "Applied and interdisciplinary physics", "Optics", "Quantum optics", "Quantum mechanics", " molecular", "Atomic", " and optical physics" ]
659,002
https://en.wikipedia.org/wiki/Anthony%20James%20Leggett
Sir Anthony James Leggett (born 26 March 1938) is a British–American theoretical physicist and professor emeritus at the University of Illinois Urbana-Champaign (UIUC). Leggett is widely recognised as a world leader in the theory of low-temperature physics, and his pioneering work on superfluidity was recognised by the 2003 Nobel Prize in Physics. He has shaped the theoretical understanding of normal and superfluid helium liquids and strongly coupled superfluids. He set directions for research in the quantum physics of macroscopic dissipative systems and use of condensed systems to test the foundations of quantum mechanics. Early life and education Leggett was born in Camberwell, South London, and raised Catholic. His father's forebears were village cobblers in a small village in Hampshire; Leggett's grandfather broke with this tradition to become a greengrocer; his father would relate how he used to ride with him to buy vegetables at the Covent Garden market in London. His mother's parents were of Irish descent; her father had moved to Britain and worked as a clerk in the naval dockyard in Chatham. His maternal grandmother, who survived into her eighties, was sent out to domestic service at the age of twelve. She eventually married his grandfather and raised a large family, then in her late sixties emigrated to Australia to join her daughter and son-in-law, and finally returned to the UK for her last years. His father and mother were each the first in their families to receive a university education; they met and became engaged while students at the Institute of Education at the University of London, but were unable to get married for some years because his father had to care for his own mother and siblings. His father worked as a secondary school teacher of physics, chemistry and mathematics. His mother also taught secondary school mathematics for a time, but had to give this up when he was born. He was eventually followed by two sisters, Clare and Judith, and two brothers, Terence and Paul, all raised in their parents' Roman Catholic faith. Leggett ceased to be a practising Catholic in his early twenties. Soon after he was born, his parents bought a house in Upper Norwood, south London. When he was 18 months old, WWII broke out and he was evacuated to Englefield Green, a small village in Surrey on the edge of the great park of Windsor Castle, where he stayed for the duration of the war. After the end of the war, he returned to the Upper Norwood house and lived there until 1950; his father taught at a school in north-east London and his mother looked after the five children full-time. He attended the local Catholic primary school, and later, following a successful performance in the 11-plus, which he took rather earlier than most, and then transferred to Wimbledon College. He later attended Beaumont College, a Jesuit school in Old Windsor. He and his two younger brothers, Terrence and Paul, attended Beaumont as a consequence of his father's appointment to teach science at the college. While there, Leggett primarily studied classics, since that was generally regarded as the most prestigious field at the time; this study led directly to his Greats degree while at Oxford. Despite Leggett's emphasis on classics at Beaumont, his father ran an evening 'science club' for his younger son and a couple of others. In his last year at Beaumont, Leggett won every single prize for the subjects that he studied that year. 
Leggett won a scholarship to Balliol College, Oxford, in December 1954 and entered the University the following year with the intention of reading the degree technically known as Literae Humaniores (classics). After completing his first degree he began a second undergraduate degree, this time in physics at Merton College, Oxford. One person who was willing to overlook Leggett's unorthodox credentials was Dirk ter Haar, then a reader in theoretical physics and a fellow of Magdalen College, Oxford; so Leggett signed up for research under ter Haar's supervision. As with all of ter Haar's students in that period, the tentatively assigned thesis topic was "Some Problems in the Theory of Many-Body Systems", which left a considerable degree of latitude. Dirk took a great interest in the personal welfare of his students and their families, and was meticulous in making sure they received adequate support; indeed, he encouraged Leggett to apply for a Prize Fellowship at Magdalen, which he held from 1963 to 1967. In the end Leggett's thesis consisted of studies of two somewhat disconnected problems in the general area of liquid helium, one on higher-order phonon interaction processes in superfluid 4He and the other on the properties of dilute solutions of 4He in normal liquid 3He (a system which unfortunately turned out to be much less experimentally accessible than the other side of the phase diagram, dilute solutions of 3He in 4He). The University of Oxford awarded Leggett an Honorary DLitt in June 2005. Career Leggett spent the period August 1964 – August 1965 as a postdoctoral research fellow at UIUC, and David Pines and his colleagues (John Bardeen, Gordon Baym, Leo Kadanoff and others) provided a fertile environment. He then spent a year in the group of Professor Takeo Matsubara at Kyoto University in Japan. After one more postdoctoral year which he spent in "roving" mode, spending time at Oxford, Harvard, and Illinois, in the autumn of 1967 he took up a lectureship at the University of Sussex, where he was to spend the majority of the next fifteen years of his career. During the mid 1970s, he spent considerable time in Japan at the University of Tokyo and also at Kwame Nkrumah University of Science and Technology in Kumasi, Ghana. In early 1982 he accepted an offer from UIUC of the MacArthur Chair with which the university had recently been endowed. As he had already committed himself to an eight-month stay as a visiting scientist at Cornell in early 1983, he finally arrived in Urbana in the early fall of that year, and has been there ever since. Leggett's own research interests shifted away from superfluid 3He since around 1980; he worked inter alia on the low-temperature properties of glasses, high-temperature superconductivity, the Bose–Einstein condensate (BEC) atomic gases and above all on the theory of experiments to test whether the formation of quantum mechanics will continue to describe the physical world as we push it up from the atomic level towards that of everyday life. From 2006 to 2016, he also held a position at the Institute for Quantum Computing in Waterloo, Canada. As of April 2023, he is chief scientist at the Institute for Condensed Matter Theory, a research institute at the UIUC. In 2013, he became the founding director of the Shanghai Center for Complex Physics. 
Research His research focuses on cuprate superconductivity, superfluidity in highly degenerate atomic gases, low temperature properties of amorphous solids, conceptual issues in the formulation of quantum mechanics and topological quantum computation. The edition of 29 December 2005 of the International Herald Tribune printed an article, "New tests of Einstein's 'spooky' reality", which referred to Leggett's Autumn 2005 debate at a conference in Berkeley, California, with fellow Nobel laureate Norman Ramsey of Harvard University. Both debated the worth of attempts to change quantum theory. Leggett thought attempts were justified, Ramsey opposed. Leggett believes quantum mechanics may be incomplete because of the quantum measurement problem. Awards and honours Leggett is a member of the National Academy of Sciences, the American Philosophical Society, the American Academy of Arts and Sciences, the Russian Academy of Sciences (foreign member), the Indian National Science Academy, and was elected a Fellow of the Royal Society (FRS) in 1980, a Fellow of the American Physical Society in 1985, and American Institute of Physics, and an Honorary Fellow of the Institute of Physics (HonFInstP) in 1998. He was awarded the 2003 Nobel Prize in Physics (with V. L. Ginzburg and A. A. Abrikosov) for pioneering contributions to the theory of superconductors and superfluids. He is an Honorary Fellow of the Institute of Physics (UK). He was appointed Knight Commander of the Order of the British Empire (KBE) in the 2004 Queen's Birthday Honours "for services to physics". He also won the 2002/2003 Wolf Foundation Prize for research on condensed forms of matter (with B. I. Halperin). He was also honoured with the Eugene Feenberg Memorial Medal (1999). He has been elected as a Foreign Fellow of the Indian National Science Academy (2011). Personal life In June 1973, he married Haruko Kinase. They met at Sussex University, in Brighton, England. In 1978, they had a daughter Asako. His wife Haruko earned a PhD in cultural anthropology from UIUC and has done research on the hospice system. Their daughter, Asako, also graduated from UIUC with a joint major in geography and chemistry. She holds dual US/UK citizenship. See also List of University of Waterloo people References External links including the Nobel Lecture Superfluid 3-He: The Early Days as Seen by a Theorist 1938 births Living people People educated at Wimbledon College 21st-century American physicists American Nobel laureates Alumni of Merton College, Oxford Academics of the University of Sussex English emigrants to the United States English physicists British Nobel laureates Fellows of Magdalen College, Oxford Knights Commander of the Order of the British Empire Nobel laureates in Physics People from Camberwell University of Illinois Urbana-Champaign faculty Wolf Prize in Physics laureates Fellows of the American Physical Society Fellows of the Royal Society Academic staff of the University of Waterloo Members of the United States National Academy of Sciences Foreign members of the Russian Academy of Sciences Foreign fellows of the Indian National Science Academy English Nobel laureates Maxwell Medal and Prize recipients Fellows of Merton College, Oxford Superfluidity
Anthony James Leggett
[ "Physics", "Chemistry", "Materials_science" ]
2,072
[ "Physical phenomena", "Phase transitions", "Phases of matter", "Superfluidity", "Condensed matter physics", "Exotic matter", "Matter", "Fluid dynamics" ]
659,068
https://en.wikipedia.org/wiki/Mass%20number
The mass number (symbol A, from the German word: Atomgewicht, "atomic weight"), also called atomic mass number or nucleon number, is the total number of protons and neutrons (together known as nucleons) in an atomic nucleus. It is approximately equal to the atomic (also known as isotopic) mass of the atom expressed in atomic mass units. Since protons and neutrons are both baryons, the mass number A is identical with the baryon number B of the nucleus (and also of the whole atom or ion). The mass number is different for each isotope of a given chemical element, and the difference between the mass number and the atomic number Z gives the number of neutrons (N) in the nucleus: N = A − Z. The mass number is written either after the element name or as a superscript to the left of an element's symbol. For example, the most common isotope of carbon is carbon-12, or 12C, which has 6 protons and 6 neutrons. The full isotope symbol would also have the atomic number (Z) as a subscript to the left of the element symbol directly below the mass number; for carbon-12 this is written with A = 12 above and Z = 6 below the symbol C. Mass number changes in radioactive decay Different types of radioactive decay are characterized by their changes in mass number as well as atomic number, according to the radioactive displacement law of Fajans and Soddy. For example, uranium-238 usually decays by alpha decay, where the nucleus loses two neutrons and two protons in the form of an alpha particle. Thus the atomic number and the number of neutrons each decrease by 2 (Z: 92 → 90, N: 146 → 144), so that the mass number decreases by 4 (A = 238 → 234); the result is an atom of thorium-234 and an alpha particle (4He): 238U → 234Th + 4He. On the other hand, carbon-14 decays by beta decay, whereby one neutron is transmuted into a proton with the emission of an electron and an antineutrino. Thus the atomic number increases by 1 (Z: 6 → 7) and the mass number remains the same (A = 14), while the number of neutrons decreases by 1 (N: 8 → 7). The resulting atom is nitrogen-14, with seven protons and seven neutrons: 14C → 14N + e− + ν̄e. Beta decay is possible because different isobars have mass differences on the order of a few electron masses. If possible, a nuclide will undergo beta decay to an adjacent isobar with lower mass. In the absence of other decay modes, a cascade of beta decays terminates at the isobar with the lowest atomic mass. Another type of radioactive decay without change in mass number is emission of a gamma ray from a nuclear isomer or metastable excited state of an atomic nucleus. Since all the protons and neutrons remain in the nucleus unchanged in this process, the mass number is also unchanged. Mass number and isotopic mass The mass number gives an estimate of the isotopic mass measured in atomic mass units (u). For 12C, the isotopic mass is exactly 12, since the atomic mass unit is defined as 1/12 of the mass of 12C. For other isotopes, the isotopic mass is usually within 0.1 u of the mass number. For example, 35Cl (17 protons and 18 neutrons) has a mass number of 35 and an isotopic mass of 34.96885. The difference of the actual isotopic mass minus the mass number of an atom is known as the mass excess, which for 35Cl is –0.03115. Mass excess should not be confused with mass defect which is the difference between the mass of an atom and its constituent particles (namely protons, neutrons and electrons). There are two reasons for mass excess: The neutron is slightly heavier than the proton.
This increases the mass of nuclei with more neutrons than protons relative to the atomic mass unit scale based on 12C with equal numbers of protons and neutrons. Nuclear binding energy varies between nuclei. A nucleus with greater binding energy has a lower total energy, and therefore a lower mass according to Einstein's mass–energy equivalence relation E = mc2. For 35Cl, the isotopic mass is less than 35, so this must be the dominant factor. Relative atomic mass of an element The mass number should also not be confused with the standard atomic weight (also called atomic weight) of an element, which is the ratio of the average atomic mass of the different isotopes of that element (weighted by abundance) to the atomic mass constant. The atomic weight is a mass ratio, while the mass number is a counted number (and so an integer). This weighted average can be quite different from the near-integer values for individual isotopic masses. For instance, there are two main isotopes of chlorine: chlorine-35 and chlorine-37. In any given sample of chlorine that has not been subjected to mass separation there will be roughly 75% of chlorine atoms which are chlorine-35 and only 25% of chlorine atoms which are chlorine-37. This gives chlorine a relative atomic mass of 35.5 (actually 35.4527 g/mol). Moreover, the weighted average mass can be near-integer, but at the same time not corresponding to the mass of any natural isotope. For example, bromine has only two stable isotopes, 79Br and 81Br, naturally present in approximately equal fractions, which leads to the standard atomic mass of bromine close to 80 (79.904 g/mol), even though the isotope 80Br with such mass is unstable. References Further reading Nuclear chemistry Chemical quantities
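The bookkeeping in this article, N = A − Z, the mass excess, and the abundance-weighted standard atomic weight, can be mirrored in a few lines of code. A minimal Python sketch follows; the 37Cl mass and the finer abundance figures are standard reference values inserted for illustration rather than quoted from the text above.

```python
def neutron_number(mass_number, atomic_number):
    """N = A - Z: the neutron count is the nucleon count minus the proton count."""
    return mass_number - atomic_number

def standard_atomic_weight(isotopes):
    """Abundance-weighted mean of isotopic masses.
    `isotopes` is a list of (isotopic mass in u, fractional abundance) pairs."""
    return sum(mass * abundance for mass, abundance in isotopes)

print(neutron_number(35, 17))        # 18 neutrons in 35Cl
print(34.96885 - 35)                 # mass excess of 35Cl, about -0.03115 u

# Chlorine from its two stable isotopes; the 37Cl mass and the finer
# abundance figures are standard reference values added for illustration.
print(standard_atomic_weight([(34.96885, 0.7576), (36.96590, 0.2424)]))   # ~35.45
```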
Mass number
[ "Physics", "Chemistry", "Mathematics" ]
1,258
[ "Nuclear chemistry", "Quantity", "Chemical quantities", "nan", "Nuclear physics" ]
659,414
https://en.wikipedia.org/wiki/Information-theoretic%20death
Information-theoretic death is a term of art used in cryonics to define death in a way that is permanent and independent of any future medical advances, no matter how distant or improbable that may be. Because detailed reading or restoration of information-storing brain structures is beyond current technology, the term lacks practical importance in contemporary medicine. See also Lost media References Cryonics Death Futures studies Transhumanism
Information-theoretic death
[ "Technology", "Engineering", "Biology" ]
88
[ "Genetic engineering", "Transhumanism", "Ethics of science and technology" ]
659,899
https://en.wikipedia.org/wiki/Gravimetric%20analysis
Gravimetric analysis describes a set of methods used in analytical chemistry for the quantitative determination of an analyte (the ion being analyzed) based on its mass. The principle of this type of analysis is that once an ion's mass has been determined as a unique compound, that known measurement can then be used to determine the same analyte's mass in a mixture, as long as the relative quantities of the other constituents are known. The four main types of this method of analysis are precipitation, volatilization, electro-analytical and miscellaneous physical method. The methods involve changing the phase of the analyte to separate it in its pure form from the original mixture and are quantitative measurements. Precipitation method The precipitation method is the one used for the determination of the amount of calcium in water. Using this method, an excess of oxalic acid, H2C2O4, is added to a measured, known volume of water. By adding a reagent, here ammonium oxalate, the calcium will precipitate as calcium oxalate. The proper reagent, when added to aqueous solution, will produce highly insoluble precipitates from the positive and negative ions that would otherwise be soluble with their counterparts (equation 1). The reaction is: Formation of calcium oxalate: Ca2+(aq) + C2O42- → CaC2O4 The precipitate is collected, dried and ignited to high (red) heat which converts it entirely to calcium oxide. The reaction is pure calcium oxide formed CaC2O4 → CaO(s) + CO(g)+ CO2(g) The pure precipitate is cooled, then measured by weighing, and the difference in weights before and after reveals the mass of analyte lost, in this case calcium oxide. That number can then be used to calculate the amount, or the percent concentration, of it in the original mix. Volatilization methods Volatilization methods can be either direct or indirect. Water eliminated in a quantitative manner from many inorganic substances by ignition is an example of a direct determination. It is collected on a solid desiccant and its mass determined by the gain in mass of the desiccant. Another direct volatilization method involves carbonates which generally decompose to release carbon dioxide when acids are used. Because carbon dioxide is easily evolved when heat is applied, its mass is directly established by the measured increase in the mass of the absorbent solid used. Determination of the amount of water by measuring the loss in mass of the sample during heating is an example of an indirect method. It is well known that changes in mass occur due to decomposition of many substances when heat is applied, regardless of the presence or absence of water. Because one must make the assumption that water was the only component lost, this method is less satisfactory than direct methods. This often faulty and misleading assumption has proven to be wrong on more than a few occasions. There are many substances other than water loss that can lead to loss of mass with the addition of heat, as well as a number of other factors that may contribute to it. The widened margin of error created by this all-too-often false assumption is not one to be lightly disregarded as the consequences could be far-reaching. Nevertheless, the indirect method, although less reliable than direct, is still widely used in commerce. For example, it's used to measure the moisture content of cereals, where a number of imprecise and inaccurate instruments are available for this purpose. 
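The calcium determination described at the start of this article ends in a simple stoichiometric back-calculation: every mole of CaO weighed corresponds to one mole of Ca2+ originally present in the water. A minimal Python sketch of that arithmetic follows; the sample mass and volume are invented illustration values and the molar masses are rounded standard ones.

```python
M_CA, M_O = 40.078, 15.999          # rounded molar masses, g/mol

def calcium_from_cao(mass_cao_g, sample_volume_l):
    """Back-calculate the calcium content of a water sample from the mass of
    CaO left after igniting the calcium oxalate precipitate: each mole of
    CaO weighed corresponds to one mole of Ca2+ originally in solution."""
    moles_cao = mass_cao_g / (M_CA + M_O)
    mass_ca_g = moles_cao * M_CA
    return mass_ca_g, mass_ca_g / sample_volume_l

mass_ca, grams_per_litre = calcium_from_cao(mass_cao_g=0.0561, sample_volume_l=0.250)
print(round(mass_ca, 4), "g Ca in the sample")               # ~0.0401 g
print(round(1000 * grams_per_litre, 1), "mg Ca per litre")   # ~160.4 mg/L
```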
Types of volatilization methods In volatilization methods, removal of the analyte involves separation by heating or chemically decomposing a volatile sample at a suitable temperature. In other words, thermal or chemical energy is used to precipitate a volatile species. For example, the water content of a compound can be determined by vaporizing the water using thermal energy (heat). Heat can also be used, if oxygen is present, for combustion to isolate the suspect species and obtain the desired results. The two most common gravimetric methods using volatilization are those for water and carbon dioxide. An example of this method is the isolation of sodium hydrogen bicarbonate (the main ingredient in most antacid tablets) from a mixture of carbonate and bicarbonate. The total amount of this analyte, in whatever form, is obtained by addition of an excess of dilute sulfuric acid to the analyte in solution. In this reaction, nitrogen gas is introduced through a tube into the flask which contains the solution. As it passes through, it gently bubbles. The gas then exits, first passing a drying agent (here CaSO4, the common desiccant Drierite). It then passes a mixture of the drying agent and sodium hydroxide which lies on asbestos or Ascarite II, a non-fibrous silicate containing sodium hydroxide. The mass of the carbon dioxide is obtained by measuring the increase in mass of this absorbent. This is performed by measuring the difference in weight of the tube in which the ascarite contained before and after the procedure. The calcium sulfate (CaSO4) in the tube retains carbon dioxide selectively as it's heated, and thereby, removed from the solution. The drying agent absorbs any aerosolized water and/or water vapor (reaction 3.). The mix of the drying agent and NaOH absorbs the CO2 and any water that may have been produced as a result of the absorption of the NaOH (reaction 4.). The reactions are: Reaction 3 - absorption of water NaHCO3(aq) + H2SO4(aq) → CO2(g) + H2O(l) + NaHSO4(aq). Reaction 4. Absorption of CO2 and residual water CO2(g) + 2 NaOH(s) → Na2CO3(s) + H2O(l). Procedure The sample is dissolved, if it is not already in solution. The solution may be treated to adjust the pH (so that the proper precipitate is formed, or to suppress the formation of other precipitates). If it is known that species are present which interfere (by also forming precipitates under the same conditions as the analyte), the sample might require treatment with a different reagent to remove these interferents. The precipitating reagent is added at a concentration that favors the formation of a "good" precipitate (see below). This may require low concentration, extensive heating (often described as "digestion"), or careful control of the pH. Digestion can help reduce the amount of coprecipitation. After the precipitate has formed and been allowed to "digest", the solution is carefully filtered. The filter is used to collect the precipitate; smaller particles are more difficult to filter. Depending on the procedure followed, the filter might be a piece of ashless filter paper in a fluted funnel, or a filter crucible. Filter paper is convenient because it does not typically require cleaning before use; however, filter paper can be chemically attacked by some solutions (such as concentrated acid or base), and may tear during the filtration of large volumes of solution. The alternative is a crucible whose bottom is made of some porous material, such as sintered glass, porcelain or sometimes metal. 
These are chemically inert and mechanically stable, even at elevated temperatures. However, they must be carefully cleaned to minimize contamination or carryover(cross-contamination). Crucibles are often used with a mat of glass or asbestos fibers to trap small particles. After the solution has been filtered, it should be tested to make sure that the analyte has been completely precipitated. This is easily done by adding a few drops of the precipitating reagent; if a precipitate is observed, the precipitation is incomplete. After filtration, the precipitate – including the filter paper or crucible – is heated, or charred. This accomplishes the following: The remaining moisture is removed (drying). Secondly, the precipitate is converted to a more chemically stable form. For instance, calcium ion might be precipitated using oxalate ion, to produce calcium oxalate (CaC2O4); it might then be heated to convert it into the oxide (CaO). It is vital that the empirical formula of the weighed precipitate be known, and that the precipitate be pure; if two forms are present, the results will be inaccurate. The precipitate cannot be weighed with the necessary accuracy in place on the filter paper; nor can the precipitate be completely removed from the filter paper to weigh it. The precipitate can be carefully heated in a crucible until the filter paper has burned away; this leaves only the precipitate. (As the name suggests, "ashless" paper is used so that the precipitate is not contaminated with ash.) After the precipitate is allowed to cool (preferably in a desiccator to keep it from absorbing moisture), it is weighed (in the crucible). To calculate the final mass of the analyte, the starting mass of the empty crucible is subtracted from the final mass of the crucible containing the sample. Since the composition of the precipitate is known, it is simple to calculate the mass of analyte in the original sample. Example A chunk of ore is to be analyzed for sulfur content. It is treated with concentrated nitric acid and potassium chlorate to convert all of the sulfur to sulfate (SO). The nitrate and chlorate are removed by treating the solution with concentrated HCl. The sulfate is precipitated with barium (Ba2+) and weighed as BaSO4. Advantages Gravimetric analysis, if methods are followed carefully, provides for exceedingly precise analysis. In fact, gravimetric analysis was used to determine the atomic masses of many elements in the periodic table to six figure accuracy. Gravimetry provides very little room for instrumental error and does not require a series of standards for calculation of an unknown. Also, methods often do not require expensive equipment. Gravimetric analysis, due to its high degree of accuracy, when performed correctly, can also be used to calibrate other instruments in lieu of reference standards. Gravimetric analysis is currently used to allow undergraduate chemistry/Biochemistry students to experience a grad level laboratory and it is a highly effective teaching tool to those who want to attend medical school or any research graduate school. Disadvantages Gravimetric analysis usually only provides for the analysis of a single element, or a limited group of elements, at a time. Comparing modern dynamic flash combustion coupled with gas chromatography with traditional combustion analysis will show that the former is both faster and allows for simultaneous determination of multiple elements while traditional determination allowed only for the determination of carbon and hydrogen. 
Methods are often convoluted and a slight mis-step in a procedure can often mean disaster for the analysis (colloid formation in precipitation gravimetry, for example). Compare this with hardy methods such as spectrophotometry and one will find that analysis by these methods is much more efficient. Steps in a gravimetric analysis After appropriate dissolution of the sample the following steps should be followed for successful gravimetric procedure: 1. Preparation of the Solution: This may involve several steps including adjustment of the pH of the solution in order for the precipitate to occur quantitatively and get a precipitate of desired properties, removing interferences, adjusting the volume of the sample to suit the amount of precipitating agent to be added. 2. Precipitation: This requires addition of a precipitating agent solution to the sample solution. Upon addition of the first drops of the precipitating agent, supersaturation occurs, then nucleation starts to occur where every few molecules of precipitate aggregate together forming a nucleus. At this point, addition of extra precipitating agent will either form new nuclei or will build up on existing nuclei to give a precipitate. This can be predicted by Von Weimarn ratio where, according to this relation the particle size is inversely proportional to a quantity called the relative supersaturation where Relative supersaturation = (Q – S)/S The Q is the concentration of reactants before precipitation, S is the solubility of precipitate in the medium from which it is being precipitated. Therefore, to get particle growth instead of further nucleation we must make the relative supersaturation ratio as small as possible. The optimum conditions for precipitation which make the supersaturation low are: a. Precipitation using dilute solutions to decrease Q b. Slow addition of precipitating agent to keep Q as low as possible c. Stirring the solution during addition of precipitating agent to avoid concentration sites and keep Q low d. Increase solubility by precipitation from hot solution e. Adjust the pH to increase S, but not too much increase np as we do not want to lose precipitate by dissolution f. Usually add a little excess of the precipitating agent for quantitative precipitation and check for completeness of the precipitation 3. Digestion of the precipitate: The precipitate is left hot (below boiling) for 30 min to one hour for the particles to be digested. Digestion involves dissolution of small particles and reprecipitation on larger ones resulting in particle growth and better precipitate characteristics. This process is called Ostwald ripening. An important advantage of digestion is observed for colloidal precipitates where large amounts of adsorbed ions cover the huge area of the precipitate. Digestion forces the small colloidal particles to agglomerate which decreases their surface area and thus adsorption. You should know that adsorption is a major problem in gravimetry in case of colloidal precipitate since a precipitate tends to adsorb its own ions present in excess, Therefore, forming what is called a primary ion layer which attracts ions from solution forming a secondary or counter ion layer. Individual particles repel each other keeping the colloidal properties of the precipitate. Particle coagulation can be forced by either digestion or addition of a high concentration of a diverse ions strong electrolytic solution in order to shield the charges on colloidal particles and force agglomeration. 
Usually, coagulated particles return to the colloidal state if washed with water, a process called peptization. 4. Washing and Filtering the Precipitate: It is crucial to wash the precipitate thoroughly to remove all adsorbed species that would add to the weight of the precipitate. One should be careful nor to use too much water since part of the precipitate may be lost. Also, in case of colloidal precipitates we should not use water as a washing solution since peptization would occur. In such situations dilute nitric acid, ammonium nitrate, or dilute acetic acid may be used. Usually, it is a good practice to check for the presence of precipitating agent in the filtrate of the final washing solution. The presence of precipitating agent means that extra washing is required. Filtration should be done in appropriate sized Gooch or ignition filter paper. 5. Drying and Ignition: The purpose of drying (heating at about 120-150 oC in an oven) or ignition in a muffle furnace at temperatures ranging from 600 to 1200 oC is to get a material with exactly known chemical structure so that the amount of analyte can be accurately determined. 6. Precipitation from Homogeneous Solution: To make Q minimum we can, in some situations, generate the precipitating agent in the precipitation medium rather than adding it. For example, to precipitate iron as the hydroxide, we dissolve urea in the sample. Heating of the solution generates hydroxide ions from the hydrolysis of urea. Hydroxide ions are generated at all points in solution and thus there are no sites of concentration. We can also adjust the rate of urea hydrolysis and thus control the hydroxide generation rate. This type of procedure can be very advantageous in case of colloidal precipitates. Solubility in the presence of diverse ions As expected from previous information, diverse ions have a screening effect on dissociated ions which leads to extra dissociation. Solubility will show a clear increase in presence of diverse ions as the solubility product will increase. Look at the following example: Find the solubility of AgCl (Ksp = 1.0 x 10−10) in 0.1 M NaNO3. The activity coefficients for silver and chloride are 0.75 and 0.76, respectively. AgCl(s) = Ag+ + Cl− We can no longer use the thermodynamic equilibrium constant (i.e. in absence of diverse ions) and we have to consider the concentration equilibrium constant or use activities instead of concentration if we use Kth: Ksp = aAg+ aCl− Ksp = [Ag+] fAg+ [Cl−] fCl− 1.0 x 10−10 = s x 0.75 x s x 0.76 s = 1.3 x 10−5 M We have calculated the solubility of AgCl in pure water to be 1.0 x 10−5 M, if we compare this value to that obtained in presence of diverse ions we see % increase in solubility = {(1.3 x 10−5 – 1.0 x 10−5) / 1.0 x 10−5} x 100 = 30% Therefore, once again we have an evidence for an increase in dissociation or a shift of equilibrium to right in presence of diverse ions. References External links Gravimetric Quimociac Technique Analytical chemistry Scientific techniques
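The diverse-ion example just worked through reduces to two lines of arithmetic. The Python sketch below repeats it; the function name is an invention of this edit, and the numbers are the ones from the AgCl in 0.1 M NaNO3 example above (the article's 30% figure comes from rounding s to 1.3 x 10−5 M before taking the ratio).

```python
import math

def solubility_with_activities(ksp, gamma_cation, gamma_anion):
    """Molar solubility s of a 1:1 salt when an inert (diverse-ion)
    electrolyte lowers the activity coefficients:
    Ksp = (gamma+ * s) * (gamma- * s)  =>  s = sqrt(Ksp / (gamma+ * gamma-))."""
    return math.sqrt(ksp / (gamma_cation * gamma_anion))

s_pure = solubility_with_activities(1.0e-10, 1.0, 1.0)      # activities ~ concentrations
s_salt = solubility_with_activities(1.0e-10, 0.75, 0.76)    # AgCl in 0.1 M NaNO3, as above
print(s_pure, s_salt)                                       # 1.0e-5 M vs ~1.32e-5 M
print(round(100 * (s_salt - s_pure) / s_pure, 1), "% increase")  # ~32 %, i.e. the ~30 % quoted above
```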
Gravimetric analysis
[ "Chemistry" ]
3,767
[ "nan" ]
659,902
https://en.wikipedia.org/wiki/Acid%E2%80%93base%20titration
An acid–base titration is a method of quantitative analysis for determining the concentration of Brønsted-Lowry acid or base (titrate) by neutralizing it using a solution of known concentration (titrant). A pH indicator is used to monitor the progress of the acid–base reaction and a titration curve can be constructed. This differs from other modern modes of titrations, such as oxidation-reduction titrations, precipitation titrations, & complexometric titrations. Although these types of titrations are also used to determine unknown amounts of substances, these substances vary from ions to metals. Acid–base titration finds extensive applications in various scientific fields, such as pharmaceuticals, environmental monitoring, and quality control in industries. This method's precision and simplicity makes it an important tool in quantitative chemical analysis, contributing significantly to the general understanding of solution chemistry. History The history of acid-base titration dates back to the late 19th century when advancements in analytical chemistry fostered the development of systematic techniques for quantitative analysis. The origins of titration methods can be linked to the work of chemists such as Karl Friedrich Mohr in the mid-1800s. His contributions laid the groundwork for understanding titrations involving acids and bases. Theoretical progress came with the research of Swedish chemist Svante Arrhenius, who in the late 19th century, introduced the Arrhenius theory, providing a theoretical framework for acid-base reactions. This theoretical foundation, along with ongoing experimental refinements, contributed to the evolution of acid-base titration as a precise and widely applicable analytical method. Over time, the method has undergone further refinements and adaptations, establishing itself as an essential tool in laboratories across various scientific disciplines. Alkalimetry and acidimetry Alkalimetry and acidimetry are types of volumetric analyses in which the fundamental reaction is a neutralization reaction. They involve the controlled addition of either an acid or a base (titrant) of known concentration to the solution of the unknown concentration (titrate) until the reaction reaches its stoichiometric equivalence point. At this point, the moles of acid and base are equal, resulting in a neutral solution: acid + base → salt + water For example: HCl + NaOH → NaCl + H2O Acidimetry is the specialized analytical use of acid-base titration to determine the concentration of a basic (alkaline) substance using standard acid. This can be used for weak bases and strong bases. An example of an acidimetric titration involving a strong base is as follows: Ba(OH)2 + 2 H+ → Ba2+ + 2 H2O In this case, the strong base (Ba(OH)2) is neutralized by the acid until all of the base has reacted. This allows the viewer to calculate the concentration of the base from the volume of the standard acid that is used. Alkalimetry follows uses same concept of specialized analytic acid-base titration, but to determine the concentration of an acidic substance using standard base. An example of an alkalimetric titration involving a strong acid is as follows: H2SO4 + 2 OH− → SO42- + 2 H2O In this case, the strong acid (H2SO4) is neutralized by the base until all of the acid has reacted. This allows the viewer to calculate the concentration of the acid from the volume of the standard base that is used. 
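The acidimetry and alkalimetry calculations described above reduce to one stoichiometric formula: moles of titrant delivered at the equivalence point, scaled by the mole ratio of the balanced equation, divided by the analyte volume. A minimal Python sketch follows; the volumes and concentrations in the example are invented illustration values.

```python
def analyte_concentration(c_titrant, v_titrant, v_analyte, mole_ratio=1.0):
    """Concentration of the unknown from the titrant volume needed to reach
    the equivalence point.  `mole_ratio` is moles of analyte per mole of
    titrant from the balanced equation (0.5 for H2SO4 against OH-, since one
    H2SO4 consumes two OH-).  Any consistent volume unit cancels out."""
    return c_titrant * v_titrant * mole_ratio / v_analyte

# 25.0 mL of sulfuric acid neutralised by 31.2 mL of 0.100 M NaOH (illustrative numbers)
print(analyte_concentration(c_titrant=0.100, v_titrant=31.2, v_analyte=25.0, mole_ratio=0.5))
# ~0.0624 M H2SO4
```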
The standard solution (titrant) is stored in the burette, while the solution of unknown concentration (analyte/titrate) is placed in the Erlenmeyer flask below it with an indicator. Indicator choice A suitable pH indicator must be chosen in order to detect the end point of the titration. The colour change or other effect should occur close to the equivalence point of the reaction so that the experimenter can accurately determine when that point is reached. The pH of the equivalence point can be estimated using the following rules: A strong acid will react with a strong base to form a neutral (pH = 7) solution. A strong acid will react with a weak base to form an acidic (pH < 7) solution. A weak acid will react with a strong base to form a basic (pH > 7) solution. These indicators are essential tools in chemistry and biology, aiding in the determination of a solution's acidity or alkalinity through the observation of colour transitions. The table below serves as a reference guide for these indicator choices, offering insights into the pH ranges and colour transformations associated with specific indicators: Phenolphthalein is widely recognized as one of the most commonly used acid-base indicators in chemistry. Its popularity is because of its effectiveness in a broad pH range and its distinct colour transitions. Its sharp and easily detectable colour changes makes phenolphthalein a valuable tool for determining the endpoint of acid-base titrations, as a precise pH change signifies the completion of the reaction. When a weak acid reacts with a weak base, the equivalence point solution will be basic if the base is stronger and acidic if the acid is stronger. If both are of equal strength, then the equivalence pH will be neutral. However, weak acids are not often titrated against weak bases because the colour change shown with the indicator is often quick, and therefore very difficult for the observer to see the change of colour. The point at which the indicator changes colour is called the endpoint. A suitable indicator should be chosen, preferably one that will experience a change in colour (an endpoint) close to the equivalence point of the reaction. In addition to the wide variety of indicator solutions, pH papers, crafted from paper or plastic infused with combinations of these indicators, serve as a practical alternative. The pH of a solution can be estimated by immersing a strip of pH paper into it and matching the observed colour to the reference standards provided on the container. Overshot titration Overshot titrations are a common phenomenon, and refer to a situation where the volume of titrant added during a chemical titration exceeds the amount required to reach the equivalence point. This excess titrant leads to an outcome where the solution becomes slightly more alkaline or over-acidified. Overshooting the equivalence point can occur due to various factors, such as errors in burette readings, imperfect reaction stoichiometry, or issues with endpoint detection. The consequences of overshot titrations can affect the accuracy of the analytical results, particularly in quantitative analysis. Researchers and analysts often employ corrective measures, such as back-titration and using more precise titration techniques, to mitigate the impact of overshooting and obtain reliable and precise measurements. Understanding the causes, consequences, and solutions related to overshot titrations is crucial in achieving accurate and reproducible results in the field of chemistry. 
Mathematical analysis: titration of weak acid For calculating concentrations, an ICE table can be used. ICE stands for initial, change, and equilibrium. The pH of a weak acid solution being titrated with a strong base solution can be found at different points along the way. These points fall into one of four categories: initial pH, pH before the equivalence point, pH at the equivalence point, and pH after the equivalence point. 1. The initial pH is approximated for a weak acid solution in water using the equation pH = −log10[H3O+]0, where [H3O+]0 is the initial concentration of the hydronium ion. 2. The pH before the equivalence point depends on the amount of weak acid remaining and the amount of conjugate base formed. The pH can be calculated approximately by the Henderson–Hasselbalch equation: pH = pKa + log10([A−]/[HA]), where Ka is the acid dissociation constant (pKa = −log10 Ka), [HA] is the concentration of the remaining weak acid, and [A−] is the concentration of its conjugate base. 3. The pH at the equivalence point depends on how much the weak acid is consumed to be converted into its conjugate base. Note that when an acid neutralizes a base, the pH may or may not be neutral (pH = 7). The pH depends on the strengths of the acid and base. In the case of a weak acid and strong base titration, the pH is greater than 7 at the equivalence point. Thus pH can be calculated using the following formula: pH = pKw + log10[OH−], where [OH−] is the concentration of the hydroxide ion and pKw = 14 at 25 °C. The concentration of the hydroxide ion is calculated from the hydrolysis of the conjugate base using the relationship [OH−] ≈ √(Kb[A−]), with Kb = Kw/Ka, where Kb is the base dissociation constant and Kw is the water dissociation constant. 4. The pH after the equivalence point depends on the concentration of the conjugate base of the weak acid and the strong base of the titrant. However, the base of the titrant is stronger than the conjugate base of the acid. Therefore, the pH in this region is controlled by the strong base. As such the pH can be found using the following: pH = pKw + log10[(CbVb − CaVa)/(Va + Vb)], where Cb is the concentration of the strong base that is added, Vb is the volume of base added up to that point, Ca is the concentration of the weak acid being titrated, and Va is the initial volume of the acid. Single formula More accurately, a single formula that describes the titration of a weak acid with a strong base from start to finish is given below: φ = CbVb/(CaVa) = (αA− − ([H+] − [OH−])/Ca)/(1 + ([H+] − [OH−])/Cb), where φ = fraction of completion of the titration (φ < 1 is before the equivalence point, φ = 1 is the equivalence point, and φ > 1 is after the equivalence point), αA− = Ka/(Ka + [H+]) is the fraction of the acid present as its conjugate base, Ca and Cb = the concentrations of the acid and base respectively, and Va and Vb = the volumes of the acid and base respectively. Graphical methods Identifying the pH associated with any stage in the titration process is relatively simple for monoprotic acids and bases. A monoprotic acid is an acid that donates one proton. A monoprotic base is a base that accepts one proton. A monoprotic acid or base only has one equivalence point on a titration curve. A diprotic acid donates two protons and a diprotic base accepts two protons. The titration curve for a diprotic solution has two equivalence points. A polyprotic substance has multiple equivalence points. All titration reactions contain small buffer regions that appear horizontal on the graph. These regions contain comparable concentrations of acid and base, preventing sudden changes in pH when additional acid or base is added. Pharmaceutical applications In the pharmaceutical industry, acid-base titration serves as a fundamental analytical technique with diverse applications.
One primary use involves the determination of the concentration of Active Pharmaceutical Ingredients (APIs) in drug formulations, ensuring product quality and compliance with regulatory standards. Acid–base titration is particularly valuable in quantifying acidic or basic functional groups in pharmaceutical compounds. Additionally, the method is employed for the analysis of additives or ingredients, making it easier to adjust and control how a product is made. Quality control laboratories utilize acid-base titration to assess the purity of raw materials and to monitor various stages of drug manufacturing processes. The technique's reliability and simplicity make it an integral tool in pharmaceutical research and development, contributing to the production of safe and effective medications. Environmental monitoring applications Acid–base titration plays a crucial role in environmental monitoring by providing a quantitative analytical method for assessing the acidity or alkalinity of water samples. The measurement of parameters such as pH, total alkalinity, and acidity is essential in evaluating the environmental impact of industrial discharges, agricultural runoff, and other sources of water contamination. Acid–base titration allows for the determination of the buffering capacity of natural water systems, aiding in the assessment of their ability to resist changes in pH. Monitoring pH levels is important for preserving aquatic ecosystems and ensuring compliance with environmental regulations. Acid–base titration is also utilized in the analysis of acid rain effects on soil and water bodies, contributing to the overall understanding and management of environmental quality. The method's precision and reliability make it a valuable tool in safeguarding ecosystems and assessing the impact of human activities on natural water resources. See also Henderson–Hasselbalch equation pH indicator References External links Graphical method to solve acid-base problems, including titrations Graphic and numerical solver for general acid-base problems - Software Program for phone and tablets Titration
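A small calculator for the four titration-curve regions worked through in the mathematical-analysis section is sketched below in Python. It is an approximation in the same spirit as those formulas, ignoring activity effects and the exact charge balance, and the concentrations, volumes and pKa in the example are illustrative values chosen here rather than taken from the article.

```python
import math

def weak_acid_titration_ph(ca, va, cb, vb, pka, pkw=14.0):
    """Approximate pH while titrating a weak acid (concentration ca, initial
    volume va) with a strong base (concentration cb, added volume vb),
    following the four regions of the mathematical-analysis section:
    initial point, buffer region, equivalence point, excess strong base."""
    ka = 10.0 ** -pka
    moles_acid, moles_base = ca * va, cb * vb
    if moles_base == 0:                                    # 1. initial pH
        return -math.log10(math.sqrt(ka * ca))
    if math.isclose(moles_base, moles_acid):               # 3. equivalence point
        kb = (10.0 ** -pkw) / ka
        conjugate_base = moles_acid / (va + vb)
        return pkw + math.log10(math.sqrt(kb * conjugate_base))
    if moles_base < moles_acid:                            # 2. Henderson-Hasselbalch region
        return pka + math.log10(moles_base / (moles_acid - moles_base))
    excess_oh = (moles_base - moles_acid) / (va + vb)      # 4. excess strong base
    return pkw + math.log10(excess_oh)

# 50.0 mL of a 0.10 M weak acid with pKa 4.76 titrated with 0.10 M strong base
for vb in (0.0, 25.0, 50.0, 60.0):
    print(vb, "mL ->", round(weak_acid_titration_ph(0.10, 50.0, 0.10, vb, 4.76), 2))
```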
Acid–base titration
[ "Chemistry" ]
2,517
[ "Instrumental analysis", "Titration" ]
660,008
https://en.wikipedia.org/wiki/Scanning%20laser%20ophthalmoscopy
Scanning laser ophthalmoscopy (SLO) is a method of examination of the eye. It uses the technique of confocal laser scanning microscopy for diagnostic imaging of the retina or cornea of the human eye. As a method used to image the retina with a high degree of spatial sensitivity, it is helpful in the diagnosis of glaucoma, macular degeneration, and other retinal disorders. It has further been combined with adaptive optics technology to provide sharper images of the retina. Scanning laser ophthalmoscopy SLO utilizes horizontal and vertical scanning mirrors to scan a specific region of the retina and create raster images viewable on a television monitor. While it is able to image the retina in real time, it has issues with reflections from eye astigmatism and the cornea. Eye movements additionally can confound the data from SLO. Adaptive optics scanning laser ophthalmoscopy Adaptive optics scanning laser ophthalmoscopy (AOSLO) is a technique used to measure living retinal cells. It utilizes adaptive optics to remove optical aberrations from images obtained from scanning laser ophthalmoscopy of the retina. History Scanning laser ophthalmoscopy developed as a method to view a distinct layer of the living eye at the microscopic level. The use of confocal methods to diminish extra light by focusing detected light through a small pinhole made possible the imaging of individual layers of the retina with greater distinction than ever before. However, using SLO for monitoring of individual retinal cells proved problematic because of optical aberrations created from the tissues of the anterior eye (specifically the cornea and lens). These aberrations (caused additionally by astigmatism and other factors affecting eye position) diminished lateral resolution and proved difficult to remove. Adaptive optics was first attempted for SLO in the 1980s. This first attempt did not use wavefront-detecting technology with its deformable mirror and estimated aberrations through pre-measured factors such as astigmatism. However, this did not diffuse the small monochromatic aberrations resulting from light traveling through the anterior eye both into and out of the pupil during scanning. The invention and adaptation of the Shack–Hartmann wavefront sensor for the apparatus produced images of the retina with much higher lateral resolution. The addition of microelectricalmechanical (MEMs) mirrors instead of larger, more expensive mirror deformable mirror systems to the apparatus made AOSLO further usable for a wider range of studies and for use in patients. Procedure The subject is placed in a dental impression mount fixed in a way to make it possible to manipulate the head in three dimensions. The subject's pupils are dilated using a dilating agent to minimize fluctuations from accommodation. After the eyes are sufficiently dilated, the subject is told to fixate on a target while in the mount. Once the subject is properly placed, wavefront correction and imaging takes place. A laser is collimated and then reflected off of a beam-splitting mirror. As in confocal SLO, light must pass through both a horizontal and a vertical scanning mirror before and after the eye is scanned to align the moving beam for eventual retinal faster images of the retina. Additionally, the light is reflected off of a deformable mirror before and after exposure to the eye to diffuse optical aberrations. The laser enters the eye through the pupil to illuminate the region it has been focused onto and light reflected back leaves the same way. 
Light returning from the mirrors passes through the first beam splitter onto another beam splitter where it is directed simultaneously toward a photomultiplier tube (PMT) and toward a Shack–Hartmann wavefront sensor array. The light going toward the photomultiplier is focused through a confocal microscopy pinhole to remove light not reflecting off of the plane of interest and then recorded in the PMT. Light directed to the wavefront sensor array is split up by the lenslets in the array and then recorded onto a charge-coupled device (CCD) camera for detection of optical aberrations. These aberrations are then subtracted from the images recorded in the PMT to vastly increase lateral resolution. Applications A major use of this increased lateral resolution from AOSLO has been the ability to determine the spatial distribution of cone cells around the fovea. Not only can the spatial density of these cells be found for a variety of regions within the retina, but the anisotropy of these cells can also be calculated to determine the axial orientation of retinal cells in the living subject. This represents a major benefit over typical histological examination of small numbers of donated human eyes. AOSLO has also revealed significant decreases in foveal cone packing density for myopic eyes in comparison to emmetriopic eyes. This difference has been hypothesized to originate from a natural decrease in cone density with the increase in eye axial length associated with myopia. Abnormalities in photoreceptor structure in regions damaged by macular dystrophy have additionally been imaged by AOSLO. In these subjects, a dark area has been visualized within the macular lesion and morphologically abnormal photoreceptors have been visible on the lesion perimeter. Furthermore, scanning of subjects with cone dystrophy and retinitis pigmentosa (RP) has shown significant changes in cone packing density for these subjects compared to those with normal retinas. This presents a possible future use of AOSLO in phenotype tracking and confirmation for subjects with diseased genotypes. The imaging of retinal pigment epithelium (RPE) cells in patients with and without retinal disease has also proved possible with the use of AOSLO. With the loss of photoreceptor cells, background scattered light decreases and the light focused on the RPE can be analyzed more clearly. As the loss of RPE cells represents the primary pathology of macular degeneration, this provides a possible future avenue for tracking RPE degradation in vivo. This has been further proved with the analysis of lipofuscin granule autofluoresence in normal human and rhesus macaque retinas by AOSLO. Comparison of this fluorescence in normal and diseased retinas with simultaneous imaging of cone structure and cone/retinal pigment cell ratio analysis has been shown to be possible and in the future may allow for the tracking of retinal damage from retinal dystrophies. AOSLO has already been used in rhesus macaques to track light damage to the macula from particular wavelengths. Additionally, AOSLO provides a greater degree of accuracy for eye tracking than possible before with other techniques. Because of the short scan time involved in AOSLO, eye motion itself represents an obstacle to taking images of the retina. Computational adjustments and modeling have been able to correct for aberrations caused by eye motion between frames. 
However, by tracking these aberrations based on changes to the retina between pictures, the effect of light on the individual orientation of the cone can be tracked. Research utilizing a visual stimulus and AOSLO eye tracking have yielded data on how the retina tracks movement at the microscopic level. The high degree of specificity and the ability to focus the laser on different levels of the eyes with AOSLO has additionally allowed for real time tracking of blood flow in the eye. By injecting fluorescin into macaques before scanning, fluorescence adaptive optics scanning laser ophthalmoscopy (FAOSLO) can be utilized to image individual capillaries in the nerve fiber layer and determine the thickness of the nerve fiber layer itself. Vessel pattern and diameter for these capillaries have been measured throughout the regions scanned by FAOSLO. This has future applications for monitoring glaucoma patients who either have changes in nerve fiber layer thickness or alterations in vasculature from damage to the retina. Comparison to retinal dissection and other imaging techniques AOSLO represents an advantageous alternative to retinal dissection for a variety of reasons. Analysis of cone packing density before AOSLO was only possible on mounted eyes from eye donor banks. As this method could not measure changes to cones in living eyes, it could not be used to track retinal changes over time or eye movements. With the use of living subjects, AOSLO allows for these measurements as well as easier control of age and other confounding factors while maintaining similar anatomical results for cone packing density. Future clinical implications for AOSLO are also possible. AOSLO compares favorably with other retinal imaging techniques as well. Fluorescein angiography uses injection of a fluorescein dye to image the back of the retina. It is a commonly used technique but it has a large number of side effects, including nausea in one fifth of patients and in some cases death from anaphylaxis. Optical coherence tomography (OCT) represents a powerful clinical tool for monitoring retinal physiology in patients. OCT uses low coherence interferometry to differentiate tissues within the eye and create a cross section of a living patients’ retina non-invasively. It actually has greater axial resolution than AOSLO. However, AOSLO represents a method with much greater translational resolution than OCT and can thus be used to track minor lateral physical changes such as the effects of eye movements on the retina. A combination of AOSLO and OCT has recently been attempted in one apparatus to produce the first three-dimensional images of individual cone cells and illustrate the overall cone mosaic near the fovea at high speeds. See also Fundus photography Ophthalmoscopy Notes External links Optos website Diagnostic ophthalmology Medical equipment
Scanning laser ophthalmoscopy
[ "Biology" ]
2,051
[ "Medical equipment", "Medical technology" ]
660,026
https://en.wikipedia.org/wiki/Decibel%20watt
The decibel watt (dBW) is a unit for the measurement of the strength of a signal expressed in decibels relative to one watt. It is used because of its capability to express both very large and very small values of power in a short range of numbers; e.g., 1 milliwatt = −30 dBW, 1 watt = 0 dBW, 10 watts = 10 dBW, 100 watts = 20 dBW, and 1,000,000 W = 60 dBW. Compare dBW to dBm, which is referenced to one milliwatt (0.001 W). A given dBW value expressed in dBm is always 30 more because 1 watt is 1,000 milliwatts, and a ratio of 1,000 (in power) is 30 dB; e.g., 10 dBm (10 mW) is equal to −20 dBW (0.01 W). In the SI system the non-SI modifier decibel (dB) is not permitted for use directly alongside SI units, so the dBW is not directly permitted; however, 10 dBW may be written as 10 dB (1 W). See also dBm References Sound Units of power Logarithmic scales of measurement
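The logarithmic relationship described above lends itself to a short worked example. The sketch below is a minimal illustration in plain Python; the function names are illustrative, not from any standard library, and the values are the ones quoted in the article.

```python
import math

def watts_to_dbw(power_watts: float) -> float:
    """Convert an absolute power in watts to decibel watts (dBW)."""
    return 10 * math.log10(power_watts)

def dbw_to_dbm(power_dbw: float) -> float:
    """dBm is referenced to 1 mW, so every dBW value is 30 dB higher in dBm."""
    return power_dbw + 30

print(watts_to_dbw(0.001))   # -30.0  (1 milliwatt)
print(watts_to_dbw(1.0))     #   0.0  (1 watt)
print(watts_to_dbw(100.0))   #  20.0  (100 watts)
print(dbw_to_dbm(-20.0))     #  10.0  (0.01 W = -20 dBW = 10 dBm)
```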
Decibel watt
[ "Physics", "Mathematics" ]
263
[ "Physical quantities", "Quantity", "Power (physics)", "Logarithmic scales of measurement", "Units of power", "Units of measurement" ]
660,542
https://en.wikipedia.org/wiki/Serviceability%20failure
In engineering, a serviceability failure occurs when a structure does not collapse, but rather fails to meet the required specifications. For example, severe wind may cause an excess of vibration at a pedestrian bridge making it impossible to cross it safely or comfortably. Similar excessive vibrations can be caused by pedestrians due to their walking, running, or jumping. Similarly, storm conditions may cause water to spill over a coastal structure, so that boats are not safe behind the structure. Examples of serviceability failures include deformations, vibration, cracking, and leakages. Engineering failures
Serviceability failure
[ "Technology", "Engineering" ]
112
[ "Systems engineering", "Reliability engineering", "Technological failures", "Engineering failures", "Civil engineering" ]
660,850
https://en.wikipedia.org/wiki/Visualization%20%28graphics%29
Visualization (or visualisation (see spelling differences)), also known as Graphics Visualization, is any technique for creating images, diagrams, or animations to communicate a message. Visualization through visual imagery has been an effective way to communicate both abstract and concrete ideas since the dawn of humanity. Examples from history include cave paintings, Egyptian hieroglyphs, Greek geometry, and Leonardo da Vinci's revolutionary methods of technical drawing for engineering purposes that actively involve scientific requirements. Visualization today has ever-expanding applications in science, education, engineering (e.g., product visualization), interactive multimedia, medicine, etc. Typical of a visualization application is the field of computer graphics. The invention of computer graphics (and 3D computer graphics) may be the most important development in visualization since the invention of central perspective in the Renaissance period. The development of animation also helped advance visualization. Overview The use of visualization to present information is not a new phenomenon. It has been used in maps, scientific drawings, and data plots for over a thousand years. Examples from cartography include Ptolemy's Geographia (2nd century AD), a map of China (1137 AD), and Minard's map (1861) of Napoleon's invasion of Russia a century and a half ago. Most of the concepts learned in devising these images carry over in a straightforward manner to computer visualization. Edward Tufte has written three critically acclaimed books that explain many of these principles. Computer graphics has from its beginning been used to study scientific problems. However, in its early days the lack of graphics power often limited its usefulness. The recent emphasis on visualization started in 1987 with the publication of Visualization in Scientific Computing, a special issue of Computer Graphics. Since then, there have been several conferences and workshops, co-sponsored by the IEEE Computer Society and ACM SIGGRAPH, devoted to the general topic, and special areas in the field, for example volume visualization. Most people are familiar with the digital animations produced to present meteorological data during weather reports on television, though few can distinguish between those models of reality and the satellite photos that are also shown on such programs. TV also offers scientific visualizations when it shows computer drawn and animated reconstructions of road or airplane accidents. Some of the most popular examples of scientific visualizations are computer-generated images that show real spacecraft in action, out in the void far beyond Earth, or on other planets. Dynamic forms of visualization, such as educational animation or timelines, have the potential to enhance learning about systems that change over time. Apart from the distinction between interactive visualizations and animation, the most useful categorization is probably between abstract and model-based scientific visualizations. The abstract visualizations show completely conceptual constructs in 2D or 3D. These generated shapes are completely arbitrary. The model-based visualizations either place overlays of data on real or digitally constructed images of reality or make a digital construction of a real object directly from the scientific data. Scientific visualization is usually done with specialized software, though there are a few exceptions, noted below.
Some of these specialized programs have been released as open source software, having very often its origins in universities, within an academic environment where sharing software tools and giving access to the source code is common. There are also many proprietary software packages of scientific visualization tools. Models and frameworks for building visualizations include the data flow models popularized by systems such as AVS, IRIS Explorer, and VTK toolkit, and data state models in spreadsheet systems such as the Spreadsheet for Visualization and Spreadsheet for Images. Applications Scientific visualization As a subject in computer science, scientific visualization is the use of interactive, sensory representations, typically visual, of abstract data to reinforce cognition, hypothesis building, and reasoning. Scientific visualization is the transformation, selection, or representation of data from simulations or experiments, with an implicit or explicit geometric structure, to allow the exploration, analysis, and understanding of the data. Scientific visualization focuses and emphasizes the representation of higher order data using primarily graphics and animation techniques. It is a very important part of visualization and maybe the first one, as the visualization of experiments and phenomena is as old as science itself. Traditional areas of scientific visualization are flow visualization, medical visualization, astrophysical visualization, and chemical visualization. There are several different techniques to visualize scientific data, with isosurface reconstruction and direct volume rendering being the more common. Data and information visualization Data visualization is a related subcategory of visualization dealing with statistical graphics and geospatial data (as in thematic cartography) that is abstracted in schematic form. Information visualization concentrates on the use of computer-supported tools to explore large amount of abstract data. The term "information visualization" was originally coined by the User Interface Research Group at Xerox PARC and included Jock Mackinlay. Practical application of information visualization in computer programs involves selecting, transforming, and representing abstract data in a form that facilitates human interaction for exploration and understanding. Important aspects of information visualization are dynamics of visual representation and the interactivity. Strong techniques enable the user to modify the visualization in real-time, thus affording unparalleled perception of patterns and structural relations in the abstract data in question. Educational visualization Educational visualization is using a simulation to create an image of something so it can be taught about. This is very useful when teaching about a topic that is difficult to otherwise see, for example, atomic structure, because atoms are far too small to be studied easily without expensive and difficult to use scientific equipment. Knowledge visualization The use of visual representations to transfer knowledge between at least two persons aims to improve the transfer of knowledge by using computer and non-computer-based visualization methods complementarily. Thus properly designed visualization is an important part of not only data analysis but knowledge transfer process, too. Knowledge transfer may be significantly improved using hybrid designs as it enhances information density but may decrease clarity as well. 
For example, visualization of a 3D scalar field may be implemented using iso-surfaces for field distribution and textures for the gradient of the field. Examples of such visual formats are sketches, diagrams, images, objects, interactive visualizations, information visualization applications, and imaginary visualizations as in stories. While information visualization concentrates on the use of computer-supported tools to derive new insights, knowledge visualization focuses on transferring insights and creating new knowledge in groups. Beyond the mere transfer of facts, knowledge visualization aims to further transfer insights, experiences, attitudes, values, expectations, perspectives, opinions, and predictions by using various complementary visualizations. See also: picture dictionary, visual dictionary Product visualization Product visualization involves visualization software technology for the viewing and manipulation of 3D models, technical drawing and other related documentation of manufactured components and large assemblies of products. It is a key part of product lifecycle management. Product visualization software typically provides high levels of photorealism so that a product can be viewed before it is actually manufactured. This supports functions ranging from design and styling to sales and marketing. Technical visualization is an important aspect of product development. Originally technical drawings were made by hand, but with the rise of advanced computer graphics the drawing board has been replaced by computer-aided design (CAD). CAD-drawings and models have several advantages over hand-made drawings such as the possibility of 3-D modeling, rapid prototyping, and simulation. 3D product visualization promises more interactive experiences for online shoppers, but also challenges retailers to overcome hurdles in the production of 3D content, as large-scale 3D content production can be extremely costly and time-consuming. Visual communication Visual communication is the communication of ideas through the visual display of information. Primarily associated with two dimensional images, it includes: alphanumerics, art, signs, and electronic resources. Recent research in the field has focused on web design and graphically oriented usability. Visual analytics Visual analytics focuses on human interaction with visualization systems as part of a larger process of data analysis. Visual analytics has been defined as "the science of analytical reasoning supported by the interactive visual interface". Its focus is on human information discourse (interaction) within massive, dynamically changing information spaces. Visual analytics research concentrates on support for perceptual and cognitive operations that enable users to detect the expected and discover the unexpected in complex information spaces. Technologies resulting from visual analytics find their application in almost all fields, but are being driven by critical needs (and funding) in biology and national security. Interactivity Interactive visualization or interactive visualisation is a branch of graphic visualization in computer science that involves studying how humans interact with computers to create graphic illustrations of information and how this process can be made more efficient. 
For a visualization to be considered interactive it must satisfy two criteria: Human input: control of some aspect of the visual representation of information, or of the information being represented, must be available to a human, and Response time: changes made by the human must be incorporated into the visualization in a timely manner. In general, interactive visualization is considered a soft real-time task. One particular type of interactive visualization is virtual reality (VR), where the visual representation of information is presented using an immersive display device such as a stereo projector (see stereoscopy). VR is also characterized by the use of a spatial metaphor, where some aspect of the information is represented in three dimensions so that humans can explore the information as if it were present (where instead it was remote), sized appropriately (where instead it was on a much smaller or larger scale than humans can sense directly), or had shape (where instead it might be completely abstract). Another type of interactive visualization is collaborative visualization, in which multiple people interact with the same computer visualization to communicate their ideas to each other or to explore information cooperatively. Frequently, collaborative visualization is used when people are physically separated. Using several networked computers, the same visualization can be presented to each person simultaneously. The people then make annotations to the visualization as well as communicate via audio (i.e., telephone), video (i.e., a video-conference), or text (i.e., IRC) messages. Human control of visualization The Programmer's Hierarchical Interactive Graphics System (PHIGS) was one of the first programmatic efforts at interactive visualization and provided an enumeration of the types of input humans provide. People can: Pick some part of an existing visual representation; Locate a point of interest (which may not have an existing representation); Stroke a path; Choose an option from a list of options; Valuate by inputting a number; and Write by inputting text. All of these actions require a physical device. Input devices range from the common – keyboards, mice, graphics tablets, trackballs, and touchpads – to the esoteric – wired gloves, boom arms, and even omnidirectional treadmills. These input actions can be used to control both the unique information being represented or the way that the information is presented. When the information being presented is altered, the visualization is usually part of a feedback loop. For example, consider an aircraft avionics system where the pilot inputs roll, pitch, and yaw and the visualization system provides a rendering of the aircraft's new attitude. Another example would be a scientist who changes a simulation while it is running in response to a visualization of its current progress. This is called computational steering. More frequently, the representation of the information is changed rather than the information itself. Rapid response to human input Experiments have shown that a delay of more than 20 ms between when input is provided and a visual representation is updated is noticeable by most people . Thus it is desirable for an interactive visualization to provide a rendering based on human input within this time frame. However, when large amounts of data must be processed to create a visualization, this becomes hard or even impossible with current technology. 
Thus the term "interactive visualization" is usually applied to systems that provide feedback to users within several seconds of input. The term interactive framerate is often used to measure how interactive a visualization is. Framerates measure the frequency with which an image (a frame) can be generated by a visualization system. A framerate of 50 frames per second (frame/s) is considered good while 0.1 frame/s would be considered poor. The use of framerates to characterize interactivity is slightly misleading however, since framerate is a measure of bandwidth while humans are more sensitive to latency. Specifically, it is possible to achieve a good framerate of 50 frame/s but if the images generated refer to changes to the visualization that a person made more than 1 second ago, it will not feel interactive to a person. The rapid response time required for interactive visualization is a difficult constraint to meet and there are several approaches that have been explored to provide people with rapid visual feedback based on their input. Some include Parallel rendering – where more than one computer or video card is used simultaneously to render an image. Multiple frames can be rendered at the same time by different computers and the results transferred over the network for display on a single monitor. This requires each computer to hold a copy of all the information to be rendered and increases bandwidth, but also increases latency. Also, each computer can render a different region of a single frame and send the results over a network for display. This again requires each computer to hold all of the data and can lead to a load imbalance when one computer is responsible for rendering a region of the screen with more information than other computers. Finally, each computer can render an entire frame containing a subset of the information. The resulting images plus the associated depth buffer can then be sent across the network and merged with the images from other computers. The result is a single frame containing all the information to be rendered, even though no single computer's memory held all of the information. This is called parallel depth compositing and is used when large amounts of information must be rendered interactively. Progressive rendering – where a framerate is guaranteed by rendering some subset of the information to be presented and providing incremental (progressive) improvements to the rendering once the visualization is no longer changing. Level-of-detail (LOD) rendering – where simplified representations of information are rendered to achieve a desired framerate while a person is providing input and then the full representation is used to generate a still image once the person is through manipulating the visualization. One common variant of LOD rendering is subsampling. When the information being represented is stored in a topologically rectangular array (as is common with digital photos, MRI scans, and finite difference simulations), a lower resolution version can easily be generated by skipping n points for each 1 point rendered. Subsampling can also be used to accelerate rendering techniques such as volume visualization that require more than twice the computations for an image twice the size. By rendering a smaller image and then scaling the image to fill the requested screen space, much less time is required to render the same data. 
Frameless rendering – where the visualization is no longer presented as a time series of images, but as a single image where different regions are updated over time. See also Graphical perception Spatial visualization ability References Further reading Bederson, Benjamin B., and Ben Shneiderman. The Craft of Information Visualization: Readings and Reflections, Morgan Kaufmann, 2003, . Cleveland, William S. (1993). Visualizing Data. Cleveland, William S. (1994). The Elements of Graphing Data. Charles D. Hansen, Chris Johnson. The Visualization Handbook, Academic Press (June 2004). Kravetz, Stephen A. and David Womble. ed. Introduction to Bioinformatics. Totowa, N.J. Humana Press, 2003. Will Schroeder, Ken Martin, Bill Lorensen. The Visualization Toolkit, by August 2004. Spence, Robert Information Visualization: Design for Interaction (2nd Edition), Prentice Hall, 2007, . Edward R. Tufte (1992). The Visual Display of Quantitative Information Edward R. Tufte (1990). Envisioning Information. Edward R. Tufte (1997). Visual Explanations: Images and Quantities, Evidence and Narrative. Matthew Ward, Georges Grinstein, Daniel Keim. Interactive Data Visualization: Foundations, Techniques, and Applications. (May 2010). Wilkinson, Leland. The Grammar of Graphics, Springer External links National Institute of Standards and Technology Scientific Visualization Tutorials, Georgia Tech Scientific Visualization Studio (NASA) Visual-literacy.org, (e.g. Periodic Table of Visualization Methods) Conferences Many conferences occur where interactive visualization academic papers are presented and published. Amer. Soc. of Information Science and Technology (ASIS&T SIGVIS) Special Interest Group in Visualization Information and Sound ACM SIGCHI ACM SIGGRAPH ACM VRST Eurographics IEEE Visualization ACM Transactions on Graphics IEEE Transactions on Visualization and Computer Graphics Infographics Computational science Computer graphics Data modeling
Visualization (graphics)
[ "Mathematics", "Engineering" ]
3,536
[ "Computational science", "Applied mathematics", "Data engineering", "Data modeling" ]
661,039
https://en.wikipedia.org/wiki/Kerr%20effect
The Kerr effect, also called the quadratic electro-optic (QEO) effect, is a change in the refractive index of a material in response to an applied electric field. The Kerr effect is distinct from the Pockels effect in that the induced index change for the Kerr effect is directly proportional to the square of the electric field instead of varying linearly with it. All materials show a Kerr effect, but certain liquids display it more strongly than others. The Kerr effect was discovered in 1875 by Scottish physicist John Kerr. Two special cases of the Kerr effect are normally considered, these being the Kerr electro-optic effect, or DC Kerr effect, and the optical Kerr effect, or AC Kerr effect. Kerr electro-optic effect The Kerr electro-optic effect, or DC Kerr effect, is the special case in which a slowly varying external electric field is applied by, for instance, a voltage on electrodes across the sample material. Under this influence, the sample becomes birefringent, with different indices of refraction for light polarized parallel to or perpendicular to the applied field. The difference in index of refraction, Δn, is given by Δn = λKE², where λ is the wavelength of the light, K is the Kerr constant, and E is the strength of the electric field. This difference in index of refraction causes the material to act like a waveplate when light is incident on it in a direction perpendicular to the electric field. If the material is placed between two "crossed" (perpendicular) linear polarizers, no light will be transmitted when the electric field is turned off, while nearly all of the light will be transmitted for some optimum value of the electric field. Higher values of the Kerr constant allow complete transmission to be achieved with a smaller applied electric field. Some polar liquids, such as nitrotoluene (C7H7NO2) and nitrobenzene (C6H5NO2) exhibit very large Kerr constants. A glass cell filled with one of these liquids is called a Kerr cell. These are frequently used to modulate light, since the Kerr effect responds very quickly to changes in electric field. Light can be modulated with these devices at frequencies as high as 10 GHz. Because the Kerr effect is relatively weak, a typical Kerr cell may require voltages as high as 30 kV to achieve complete transparency. This is in contrast to Pockels cells, which can operate at much lower voltages. Another disadvantage of Kerr cells is that the best available material, nitrobenzene, is poisonous. Some transparent crystals have also been used for Kerr modulation, although they have smaller Kerr constants. In media that lack inversion symmetry, the Kerr effect is generally masked by the much stronger Pockels effect. The Kerr effect is still present, however, and in many cases can be detected independently of Pockels effect contributions. Optical Kerr effect The optical Kerr effect, or AC Kerr effect is the case in which the electric field is due to the light itself. This causes a variation in index of refraction which is proportional to the local irradiance of the light. This refractive index variation is responsible for the nonlinear optical effects of self-focusing, self-phase modulation and modulational instability, and is the basis for Kerr-lens modelocking. This effect only becomes significant with very intense beams such as those from lasers.
The optical Kerr effect has also been observed to dynamically alter the mode-coupling properties in multimode fiber, a technique that has potential applications for all-optical switching mechanisms, nanophotonic systems and low-dimensional photo-sensors devices. Magneto-optic Kerr effect The magneto-optic Kerr effect (MOKE) is the phenomenon that the light reflected from a magnetized material has a slightly rotated plane of polarization. It is similar to the Faraday effect where the plane of polarization of the transmitted light is rotated. Theory DC Kerr effect For a nonlinear material, the electric polarization P will depend on the electric field E: P = ε0(χ(1)E + χ(2)E² + χ(3)E³ + ⋯), where ε0 is the vacuum permittivity and χ(n) is the n-th order component of the electric susceptibility of the medium. We can write that relationship explicitly; the i-th component for the vector P can be expressed as: Pi = ε0(Σj χ(1)ij Ej + Σjk χ(2)ijk Ej Ek + Σjkl χ(3)ijkl Ej Ek El + ⋯), where i = 1, 2, 3. It is often assumed that P1 = Px, i.e., the component parallel to x of the polarization field; E2 = Ey and so on. For a linear medium, only the first term of this equation is significant and the polarization varies linearly with the electric field. For materials exhibiting a non-negligible Kerr effect, the third, χ(3) term is significant, with the even-order terms typically dropping out due to inversion symmetry of the Kerr medium. Consider the net electric field E produced by a light wave of frequency ω together with an external electric field E0: E = E0 + Eω cos(ωt), where Eω is the vector amplitude of the wave. Combining these two equations produces a complex expression for P. For the DC Kerr effect, we can neglect all except the linear terms and those in χ(3)|E0|²Eω: P ≃ ε0(χ(1) + 3χ(3)|E0|²)Eω cos(ωt), which is similar to the linear relationship between polarization and an electric field of a wave, with an additional non-linear susceptibility term proportional to the square of the amplitude of the external field. For non-symmetric media (e.g. liquids), this induced change of susceptibility produces a change in refractive index in the direction of the electric field: Δn = λ0K|E0|², where λ0 is the vacuum wavelength and K is the Kerr constant for the medium. The applied field induces birefringence in the medium in the direction of the field. A Kerr cell with a transverse field can thus act as a switchable wave plate, rotating the plane of polarization of a wave travelling through it. In combination with polarizers, it can be used as a shutter or modulator. The values of K depend on the medium and are about 9.4×10−14 m·V−2 for water, and 4.4×10−12 m·V−2 for nitrobenzene. For crystals, the susceptibility of the medium will in general be a tensor, and the Kerr effect produces a modification of this tensor. AC Kerr effect In the optical or AC Kerr effect, an intense beam of light in a medium can itself provide the modulating electric field, without the need for an external field to be applied. In this case, the electric field is given by: E = Eω cos(ωt), where Eω is the amplitude of the wave as before. Combining this with the equation for the polarization, and taking only linear terms and those in χ(3)|Eω|³: P ≃ ε0(χ(1) + (3/4)χ(3)|Eω|²)Eω cos(ωt). As before, this looks like a linear susceptibility with an additional non-linear term: χ = χLIN + χNL, with χNL = (3/4)χ(3)|Eω|², and since: n = (1 + χ)1/2 = (1 + χLIN + χNL)1/2, where n0 = (1 + χLIN)1/2 is the linear refractive index. Using a Taylor expansion since χNL ≪ n0², this gives an intensity dependent refractive index (IDRI) of: n ≈ n0 + χNL/(2n0) = n0 + n2I, where n2 is the second-order nonlinear refractive index, and I is the intensity of the wave. The refractive index change is thus proportional to the intensity of the light travelling through the medium.
The values of n2 are relatively small for most materials, on the order of 10−20 m2 W−1 for typical glasses. Therefore, beam intensities (irradiances) on the order of 1 GW cm−2 (such as those produced by lasers) are necessary to produce significant variations in refractive index via the AC Kerr effect. The optical Kerr effect manifests itself temporally as self-phase modulation, a self-induced phase- and frequency-shift of a pulse of light as it travels through a medium. This process, along with dispersion, can produce optical solitons. Spatially, an intense beam of light in a medium will produce a change in the medium's refractive index that mimics the transverse intensity pattern of the beam. For example, a Gaussian beam results in a Gaussian refractive index profile, similar to that of a gradient-index lens. This causes the beam to focus itself, a phenomenon known as self-focusing. As the beam self-focuses, the peak intensity increases which, in turn, causes more self-focusing to occur. The beam is prevented from self-focusing indefinitely by nonlinear effects such as multiphoton ionization, which become important when the intensity becomes very high. As the intensity of the self-focused spot increases beyond a certain value, the medium is ionized by the high local optical field. This lowers the refractive index, defocusing the propagating light beam. Propagation then proceeds in a series of repeated focusing and defocusing steps. See also Jeffree cell, an early acousto-optic modulator Filament propagation Rapatronic camera, which used a Kerr cell to take sub-millisecond photographs of nuclear explosions Optical heterodyne detection Zeeman effect References External links Kerr cells in early television (Scroll down the page for several early articles on Kerr cells.) Nonlinear optics Polarization (waves)
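As a rough numerical illustration of the two relations above, the minimal Python sketch below plugs in the Kerr constant of nitrobenzene and the typical glass value of n2 quoted in this article; the chosen wavelength, field strength, and irradiance are assumed example values, not measurements.

```python
def dc_kerr_birefringence(wavelength_m, kerr_constant, field_v_per_m):
    """DC Kerr effect: delta_n = lambda * K * E^2."""
    return wavelength_m * kerr_constant * field_v_per_m ** 2

def ac_kerr_index_change(n2_m2_per_w, irradiance_w_per_m2):
    """Optical (AC) Kerr effect: delta_n = n2 * I."""
    return n2_m2_per_w * irradiance_w_per_m2

# Nitrobenzene (K ~ 4.4e-12 m V^-2), visible light, 1 MV/m applied field:
print(dc_kerr_birefringence(589e-9, 4.4e-12, 1e6))   # ~2.6e-6

# Typical glass (n2 ~ 1e-20 m^2/W) at 1 GW/cm^2 = 1e13 W/m^2:
print(ac_kerr_index_change(1e-20, 1e13))             # ~1e-7
```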
Kerr effect
[ "Physics" ]
1,856
[ "Polarization (waves)", "Astrophysics" ]
661,254
https://en.wikipedia.org/wiki/Brocken%20spectre
A Brocken spectre (British English; American spelling: Brocken specter), also called Brocken bow, mountain spectre, or spectre of the Brocken, is the magnified (and apparently enormous) shadow of an observer cast in mid air upon any type of cloud opposite a strong light source. The figure's head can be surrounded by a bright area called Heiligenschein, or halo-like rings of rainbow-coloured light forming a glory, which appear opposite the Sun's direction when uniformly sized water droplets in clouds refract and backscatter sunlight. The phenomenon can appear on any misty mountainside, cloud bank, or be seen from an aircraft, but the frequent fogs and low-altitude accessibility of the Brocken, the highest peak of the Harz Mountains in Germany, have created a local legend from which the phenomenon draws its name. The Brocken spectre was observed and described by Johann Silberschlag in 1780, and has often been recorded in literature about the region. Occurrence The "spectre" appears when the sun shines from behind the observer, who is looking down from a ridge or peak into mist or fog. The light projects the observer's shadow through the mist, often in a triangular shape due to perspective. The apparent magnification of size of the shadow is an optical illusion that occurs when the observer judges their shadow on relatively nearby clouds to be at the same distance as faraway land objects seen through gaps in the clouds, or when there are no reference points by which to judge its size. The shadow also falls on water droplets of varying distances from the eye, confusing depth perception. The ghost can appear to move (sometimes suddenly) because of the movement of the cloud layer and variations in density within the cloud. Ulloa's halo Before the first reports of the phenomenon in Europe, two members of the French Geodesic Mission to the Equator, Antonio de Ulloa and Pierre Bouguer, reported that while walking near the summit of the Pambamarca mountain, in the Ecuadorian Andes, they saw their shadows projected on a lower-lying cloud, with a circular "halo or glory" around the shadow of the observer's head. This was then called "Ulloa's halo" or "Bouguer's halo". Ulloa reported that the glories were surrounded by a larger ring of white light, which would today be called a fog bow. On other occasions, he observed arches of white light formed by reflected moonlight, whose explanation is unknown but which may have been related to ice-crystal halos. References in popular culture and the arts Samuel Taylor Coleridge's poem "Constancy to an Ideal Object" concludes with an image of the Brocken spectre: And art thou nothing? Such thou art, as when The woodman winding westward up the glen At wintry dawn, where o'er the sheep-track's maze The viewless snow-mist weaves a glist'ning haze, Sees full before him, gliding without tread, An image with a glory round its head; The enamoured rustic worships its fair hues, Nor knows he makes the shadow he pursues! Lewis Carroll's "Phantasmagoria" includes a line about a Spectre who "...tried the Brocken business first/but caught a sort of chill/so came to England to be nursed/and here it took the form of thirst/which he complains of still." Stanisław Lem's Fiasco (1986) has a reference to the "Brocken Specter": "He was alone. He had been chasing himself. Not a common phenomenon, but known even on Earth. The Brocken Specter in the Alps, for example." The situation of pursuing one's self via a natural illusion is a repeated theme in Lem.
A scene of significance in his book The Investigation (1975) depicts a detective who, within the confines of a snowy, dead-end alley, confronts a man who turns out to be the detective's own reflection, "The stranger... was himself. He was standing in front of a huge mirrored wall marking the end of the arcade." In The Radiant Warrior (1989), part of Leo Frankowski's Conrad Stargard series, the protagonist uses the Brocken Spectre to instill confidence in his recruits. The Brocken spectre is a key trope in Paul Beatty's The White Boy Shuffle (1996), in which a character, Nicholas Scoby, declares that his dream (he specifically calls it a "Dream and a half, really") is to see his glory through a Brocken spectre (69). In James Hogg's novel The Private Memoirs and Confessions of a Justified Sinner (1824) the Brocken spectre is used to suggest psychological horror. Carl Jung in Memories, Dreams, Reflections wrote: ... I had a dream which both frightened and encouraged me. It was night in some unknown place, and I was making slow and painful headway against a mighty wind. Dense fog was flying along everywhere. I had my hands cupped around a tiny light which threatened to go out at any moment... Suddenly I had the feeling that something was coming up behind me. I looked back, and saw a gigantic black figure following me... When I awoke I realized at once that the figure was a "specter of the Brocken," my own shadow on the swirling mists, brought into being by the little light I was carrying. In Gravity's Rainbow, Geli Tripping and Slothrop make "god-shadows" from a Harz precipice, as Walpurgisnacht wanes to dawn. Additionally, the French–Canadian quadruple agent Rémy Marathe muses episodically about the possibility of witnessing the fabled spectre on the mountains of Tucson in David Foster Wallace's novel Infinite Jest. The explorer Eric Shipton saw a Brocken Spectre during his first ascent of Nelion on Mount Kenya with Percy Wyn-Harris and Gustav Sommerfelt in 1929. He wrote: Then the towering buttresses of Batian and Nelion appeared; the rays of the setting sun broke through and, in the east, sharply defined, a great circle of rainbow colours framed our own silhouettes. It was the only perfect Brocken Spectre I have ever seen. The progressive metal band Fates Warning makes numerous references to the Brocken Spectre in both their debut album title Night on Bröcken and in lyrics on a subsequent song called "The Sorceress" from the album Awaken the Guardian that read "Through the Brocken Spectre rose a luring Angel." The design of Kriemhild Gretchen, a Witch in the anime series Puella Magi Madoka Magica, may have been inspired by the Brocken spectre. In Charles Dickens's Little Dorrit, Book II Chapter 23, Flora Finching, in the course of one of her typically free-associative babbles to Mr Clennam, says " ... ere yet Mr F appeared a misty shadow on the horizon paying attentions like the well-known spectre of some place in Germany beginning with a B ... " "Brocken Spectre" is the title of a track on David Tipper's 2010 album Broken Soul Jamboree. In the tokusatsu series Tensou Sentai Goseiger, Semattarei of the Brocken Spectre is one of the enemies that Gosei Angels must face. In the manga One Piece, Brocken spectres make an appearance in the Skypiea story arc. In the anime Detective Conan, Brocken spectres are mentioned in episode 348 and episode 546 as well. In "The Problem of Pain" by C.S. Lewis the Brocken spectre is mentioned in the chapter "Heaven". In chapter 12 of Whose Body? 
(Lord Peter Wimsey) by Dorothy L. Sayers, the Brocken spectre is also mentioned. In episode 9 of "Innocence, Fight Against False Charges", a 2019 Japanese drama, lawyers prove that a Brocken spectre explains the circumstances of a case. In October 2024 the BBC News website reported that a wildlife photographer from East Yorkshire in the United Kingdom captured a photo of a Brocken Spectre whilst out bird watching in Bridlington. See also Diffraction Earth's shadow, the shadow that the Earth itself casts on its atmosphere Am Fear Liath Mòr, "Big Grey Man" in Scottish Gaelic, a supposed supernatural being found on Scotland's second-highest peak, Ben Macdhui Dark Watchers, supposed supernatural beings seen along the crest of the Santa Lucia Mountains, in California Heiligenschein, an optical phenomenon that creates a bright spot around the shadow of the viewer's head Opposition surge, the brightening of a rough surface, or an object with many particles, when illuminated from directly behind the observer References Further reading External links "What are Brocken Spectres and How Do They Form?", an article on the Online Fellwalking Club page (dead link, 2012 archived version) A Cairngorm example, from the Universities Space Research Association See a picture and a YouTube videoclip taken by Michael Elcock here Snowdon walker captures rare Brocken spectre, BBC News Online, 3 January 2020 Brocken Spectre panorama "Time-lapse video of Brocken specter cast by Mt. Tamalpais fire lookout in Marin County California." Optical illusions Atmospheric optical phenomena
Brocken spectre
[ "Physics" ]
1,969
[ "Physical phenomena", "Earth phenomena", "Optical illusions", "Optical phenomena", "Atmospheric optical phenomena" ]
661,808
https://en.wikipedia.org/wiki/Bravais%20lattice
In geometry and crystallography, a Bravais lattice, named after Auguste Bravais, is an infinite array of discrete points generated by a set of discrete translation operations described in three-dimensional space by R = n1a1 + n2a2 + n3a3, where the ni are any integers, and ai are primitive translation vectors, or primitive vectors, which lie in different directions (not necessarily mutually perpendicular) and span the lattice. The choice of primitive vectors for a given Bravais lattice is not unique. A fundamental aspect of any Bravais lattice is that, for any choice of direction, the lattice appears exactly the same from each of the discrete lattice points when looking in that chosen direction. The Bravais lattice concept is used to formally define a crystalline arrangement and its (finite) frontiers. A crystal is made up of one or more atoms, called the basis or motif, at each lattice point. The basis may consist of atoms, molecules, or polymer strings of solid matter, and the lattice provides the locations of the basis. Two Bravais lattices are often considered equivalent if they have isomorphic symmetry groups. In this sense, there are 5 possible Bravais lattices in 2-dimensional space and 14 possible Bravais lattices in 3-dimensional space. The 14 possible symmetry groups of Bravais lattices are 14 of the 230 space groups. In the context of the space group classification, the Bravais lattices are also called Bravais classes, Bravais arithmetic classes, or Bravais flocks. Unit cell In crystallography, there is the concept of a unit cell which comprises the space between adjacent lattice points as well as any atoms in that space. A unit cell is defined as a space that, when translated through a subset of the vectors R described above, fills the lattice space without overlapping or voids. (I.e., a lattice space is a multiple of a unit cell.) There are mainly two types of unit cells: primitive unit cells and conventional unit cells. A primitive cell is the very smallest component of a lattice (or crystal) which, when stacked together with lattice translation operations, reproduces the whole lattice (or crystal). Note that the translations must be lattice translation operations that cause the lattice to appear unchanged after the translation. If arbitrary translations were allowed, one could make a primitive cell half the size of the true one, and translate twice as often, as an example. Another way of defining the size of a primitive cell that avoids invoking lattice translation operations, is to say that the primitive cell is the smallest possible component of a lattice (or crystal) that can be repeated to reproduce the whole lattice (or crystal), and that contains exactly one lattice point. In either definition, the primitive cell is characterized by its small size. There are clearly many choices of cell that can reproduce the whole lattice when stacked (two lattice halves, for instance), and the minimum size requirement distinguishes the primitive cell from all these other valid repeating units. If the lattice or crystal is 2-dimensional, the primitive cell has a minimum area; likewise in 3 dimensions the primitive cell has a minimum volume. Despite this rigid minimum-size requirement, there is not one unique choice of primitive unit cell. In fact, all cells whose borders are primitive translation vectors will be primitive unit cells. The fact that there is not a unique choice of primitive translation vectors for a given lattice leads to the multiplicity of possible primitive unit cells.
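The defining translation R = n1a1 + n2a2 + n3a3 given above is easy to make concrete in code. The minimal NumPy sketch below generates a small block of lattice points from three assumed primitive vectors (a simple cubic choice); the vectors and the range of the ni are placeholder example values.

```python
import numpy as np
from itertools import product

# Assumed primitive vectors (simple cubic, lattice constant 1.0):
a1 = np.array([1.0, 0.0, 0.0])
a2 = np.array([0.0, 1.0, 0.0])
a3 = np.array([0.0, 0.0, 1.0])

# All lattice points R = n1*a1 + n2*a2 + n3*a3 for ni in {-1, 0, 1}:
points = [n1 * a1 + n2 * a2 + n3 * a3
          for n1, n2, n3 in product(range(-1, 2), repeat=3)]
print(len(points))   # 27 points of the (truncated) infinite lattice
```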
Conventional unit cells, on the other hand, are not necessarily minimum-size cells. They are chosen purely for convenience and are often used for illustration purposes. They are loosely defined. Primitive unit cell Primitive unit cells are defined as unit cells with the smallest volume for a given crystal. (A crystal is a lattice and a basis at every lattice point.) To have the smallest cell volume, a primitive unit cell must contain (1) only one lattice point and (2) the minimum amount of basis constituents (e.g., the minimum number of atoms in a basis). For the former requirement, counting the number of lattice points in a unit cell is such that, if a lattice point is shared by m adjacent unit cells around that lattice point, then the point is counted as 1/m. The latter requirement is necessary since there are crystals that can be described by more than one combination of a lattice and a basis. For example, a crystal, viewed as a lattice with a single kind of atom located at every lattice point (the simplest basis form), may also be viewed as a lattice with a basis of two atoms. In this case, a primitive unit cell is a unit cell having only one lattice point in the first way of describing the crystal in order to ensure the smallest unit cell volume. There can be more than one way to choose a primitive cell for a given crystal and each choice will have a different primitive cell shape, but the primitive cell volume is the same for every choice and each choice will have the property that a one-to-one correspondence can be established between primitive unit cells and discrete lattice points over the associated lattice. All primitive unit cells with different shapes for a given crystal have the same volume by definition; for a given crystal, if n is the density of lattice points in a lattice ensuring the minimum amount of basis constituents and v is the volume of a chosen primitive cell, then nv = 1 resulting in v = 1/n, so every primitive cell has the same volume of 1/n. Among all possible primitive cells for a given crystal, an obvious primitive cell may be the parallelepiped formed by a chosen set of primitive translation vectors. (Again, these vectors must make a lattice with the minimum amount of basis constituents.) That is, the set of all points r = x1a1 + x2a2 + x3a3, where 0 ≤ xi < 1 and ai is the chosen primitive vector. This primitive cell does not always show the clear symmetry of a given crystal. In this case, a conventional unit cell easily displaying the crystal symmetry is often used. The conventional unit cell volume will be an integer-multiple of the primitive unit cell volume. Origin of concept In two dimensions, any lattice can be specified by the length of its two primitive translation vectors and the angle between them. There are an infinite number of possible lattices one can describe in this way. Some way to categorize different types of lattices is desired. One way to do so is to recognize that some lattices have inherent symmetry. One can impose conditions on the length of the primitive translation vectors and on the angle between them to produce various symmetric lattices. These symmetries themselves are categorized into different types, such as point groups (which include mirror symmetries, inversion symmetries and rotation symmetries) and translational symmetries. Thus, lattices can be categorized based on what point group or translational symmetry applies to them. In two dimensions, the most basic point group corresponds to rotational invariance under 2π and π, or 1- and 2-fold rotational symmetry.
This actually applies automatically to all 2D lattices, and is the most general point group. Lattices contained in this group (technically all lattices, but conventionally all lattices that don't fall into any of the other point groups) are called oblique lattices. From there, there are 4 further combinations of point groups with translational elements (or equivalently, 4 types of restriction on the lengths/angles of the primitive translation vectors) that correspond to the 4 remaining lattice categories: square, hexagonal, rectangular, and centered rectangular. Thus altogether there are 5 Bravais lattices in 2 dimensions. Likewise, in 3 dimensions, there are 14 Bravais lattices: 1 general "wastebasket" category (triclinic) and 13 more categories. These 14 lattice types are classified by their point groups into 7 lattice systems (triclinic, monoclinic, orthorhombic, tetragonal, cubic, rhombohedral, and hexagonal). In 2 dimensions In two-dimensional space there are 5 Bravais lattices, grouped into four lattice systems, shown in the table below. Below each diagram is the Pearson symbol for that Bravais lattice. Note: In the unit cell diagrams in the following table the lattice points are depicted using black circles and the unit cells are depicted using parallelograms (which may be squares or rectangles) outlined in black. Although each of the four corners of each parallelogram connects to a lattice point, only one of the four lattice points technically belongs to a given unit cell and each of the other three lattice points belongs to one of the adjacent unit cells. This can be seen by imagining moving the unit cell parallelogram slightly left and slightly down while leaving all the black circles of the lattice points fixed. The unit cells are specified according to the relative lengths of the cell edges (a and b) and the angle between them (θ). The area of the unit cell can be calculated by evaluating the norm ‖a × b‖, where a and b are the lattice vectors. The properties of the lattice systems are given below: In 3 dimensions In three-dimensional space there are 14 Bravais lattices. These are obtained by combining one of the seven lattice systems with one of the centering types. The centering types identify the locations of the lattice points in the unit cell as follows: Primitive (P): lattice points on the cell corners only (sometimes called simple) Base-centered (S: A, B, or C): lattice points on the cell corners with one additional point at the center of each face of one pair of parallel faces of the cell (sometimes called end-centered) Body-centered (I): lattice points on the cell corners, with one additional point at the center of the cell Face-centered (F): lattice points on the cell corners, with one additional point at the center of each of the faces of the cell Not all combinations of lattice systems and centering types are needed to describe all of the possible lattices, as it can be shown that several of these are in fact equivalent to each other. For example, the monoclinic I lattice can be described by a monoclinic C lattice by different choice of crystal axes. Similarly, all A- or B-centred lattices can be described either by a C- or P-centering. This reduces the number of combinations to 14 conventional Bravais lattices, shown in the table below. Below each diagram is the Pearson symbol for that Bravais lattice.
Note: In the unit cell diagrams in the following table all the lattice points on the cell boundary (corners and faces) are shown; however, not all of these lattice points technically belong to the given unit cell. This can be seen by imagining moving the unit cell slightly in the negative direction of each axis while keeping the lattice points fixed. Roughly speaking, this can be thought of as moving the unit cell slightly left, slightly down, and slightly out of the screen. This shows that only one of the eight corner lattice points (specifically the front, left, bottom one) belongs to the given unit cell (the other seven lattice points belong to adjacent unit cells). In addition, only one of the two lattice points shown on the top and bottom face in the Base-centered column belongs to the given unit cell. Finally, only three of the six lattice points on the faces in the Face-centered column belong to the given unit cell. The unit cells are specified according to six lattice parameters which are the relative lengths of the cell edges (a, b, c) and the angles between them (α, β, γ), where α is the angle between b and c, β is the angle between a and c, and γ is the angle between a and b. The volume of the unit cell can be calculated by evaluating the triple product a · (b × c), where a, b, and c are the lattice vectors. The properties of the lattice systems are given below: Some basic information for the lattice systems and Bravais lattices in three dimensions is summarized in the diagram at the beginning of this page. The seven sided polygon (heptagon) and the number 7 at the centre indicate the seven lattice systems. The inner heptagons indicate the lattice angles, lattice parameters, Bravais lattices and Schoenflies notations for the respective lattice systems. In 4 dimensions In four dimensions, there are 64 Bravais lattices. Of these, 23 are primitive and 41 are centered. Ten Bravais lattices split into enantiomorphic pairs. See also Crystal habit Crystal system einstein problem Miller index Reciprocal lattice Translation operator (quantum mechanics) Translational symmetry Zone axis References Further reading (English: Memoir 1, Crystallographic Society of America, 1949). External links Catalogue of Lattices (by Nebe and Sloane) Crystallography Tessellation Lattice points
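The cell area and volume formulas quoted above are straightforward to evaluate numerically. The sketch below (NumPy; the hexagonal lattice parameters are assumed example values) computes the 2D cell area ‖a × b‖ and the 3D cell volume a · (b × c).

```python
import numpy as np

def cell_area_2d(a, b):
    """Area of a 2D unit cell: |a1*b2 - a2*b1|, the norm of the cross product."""
    return abs(a[0] * b[1] - a[1] * b[0])

def cell_volume_3d(a, b, c):
    """Volume of a 3D unit cell: the scalar triple product |a . (b x c)|."""
    return abs(np.dot(a, np.cross(b, c)))

# Example: hexagonal lattice with a = b = 3.0, c = 5.0, gamma = 120 degrees.
a = np.array([3.0, 0.0, 0.0])
b = 3.0 * np.array([np.cos(np.radians(120)), np.sin(np.radians(120)), 0.0])
c = np.array([0.0, 0.0, 5.0])

print(cell_area_2d(a[:2], b[:2]))    # ~7.79 (area of the basal plane cell)
print(cell_volume_3d(a, b, c))       # ~38.97 (cell volume)
```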
Bravais lattice
[ "Physics", "Chemistry", "Materials_science", "Mathematics", "Engineering" ]
2,624
[ "Tessellation", "Lattice points", "Euclidean plane geometry", "Materials science", "Number theory", "Crystallography", "Condensed matter physics", "Planes (geometry)", "Symmetry" ]
13,860,656
https://en.wikipedia.org/wiki/Extensional%20viscosity
Extensional viscosity (also known as elongational viscosity) is a viscosity coefficient when the applied stress is extensional stress. It is often used for characterizing polymer solutions. Extensional viscosity can be measured using rheometers that apply extensional stress. An acoustic rheometer is one example of such a device. Extensional viscosity is defined as the ratio of the normal stress difference to the rate of strain. For uniaxial extension along direction z: η_e = (σ_zz − σ_xx) / (dε/dt), where η_e is the extensional viscosity or elongational viscosity, σ_nn is the normal stress along direction n, and dε/dt is the rate of strain. The ratio between the extensional viscosity η_e and the dynamic viscosity η is known as Trouton's ratio, Tr = η_e / η. For a Newtonian fluid, the Trouton ratio equals three. See also Rheology References Fluid dynamics Viscosity
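As a quick numerical illustration of the definition and of Trouton's ratio, here is a short Python sketch; the viscosity and strain-rate values are made-up example numbers, not measured data.

```python
# Illustrative only: made-up values for a Newtonian fluid.
shear_viscosity = 0.5    # Pa·s, dynamic (shear) viscosity
strain_rate = 2.0        # 1/s, uniaxial extension rate d(epsilon)/dt

# For a Newtonian fluid in uniaxial extension, sigma_zz - sigma_xx = 3 * eta * strain_rate.
normal_stress_difference = 3 * shear_viscosity * strain_rate     # Pa

extensional_viscosity = normal_stress_difference / strain_rate   # Pa·s
trouton_ratio = extensional_viscosity / shear_viscosity

print(f"extensional viscosity: {extensional_viscosity} Pa·s")
print(f"Trouton ratio: {trouton_ratio}")   # 3.0, as expected for a Newtonian fluid
```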
Extensional viscosity
[ "Physics", "Chemistry", "Engineering" ]
183
[ "Physical phenomena", "Physical quantities", "Chemical engineering", "Piping", "Wikipedia categories named after physical quantities", "Viscosity", "Physical properties", "Fluid dynamics stubs", "Fluid dynamics" ]
13,871,016
https://en.wikipedia.org/wiki/Greying%20of%20hair
Greying of hair, also known as greying, canities, or achromotrichia, is the progressive loss of pigmentation in the hair, eventually turning the hair grey or white which typically occurs naturally as people age. Terminology Greying of hair is the partial or complete process of a hair becoming grey or white. It is also known as canities or achromotrichia. The word "canities" is derived from the Latin word cānitiēs for "gray hair, old age". Overview Changes in hair colour typically occur naturally as people age, eventually turning the hair grey and then white. This normally begins in the early to mid-twenties in men and late twenties in women. More than 60 percent of Americans have some grey hair by age 40. The age at which greying begins seems almost entirely due to genetics. Sometimes people are born with grey hair because they inherit the trait. The order in which greying happens is usually: nose hair, hair on the head, beard, body hair, eyebrows. Greying is a gradual process; according to a study by L'Oreal, overall, of those between 45 and 65 years old, 74% had some grey hair, covering an average of 27% of their head, and approximately 1 in 10 people had no grey hairs even after the age of 60. Causes Grey or white hair is not caused by a true grey or white pigment, but is due to a lack of pigmentation and melanin. The clear hairs appear as grey or white because of the way light is reflected from the hairs. The change in hair colour occurs when melanin ceases to be produced in the hair root and new hairs grow in without pigment. The stem cells at the base of hair follicles produce melanocytes, the cells that produce and store pigment in hair and skin. The death of the melanocyte stem cells causes the onset of greying. It remains unclear why the stem cells of one hair follicle may fail to activate well over a decade before those in adjacent follicles less than a millimeter apart. In non-balding individuals, hair may grow faster once it turns grey. Unlike in the skin where pigment production is continuous, melanogenesis in the hair is closely associated with stages of the hair cycle. Hair is actively pigmented in the anagen phase and is "turned off" during the catagen phase, and absent during telogen. Thus, a single hair cannot be white on the root side, and colored on the terminal side. Several genes appear to be responsible for the process of greying. Bcl2 and Bcl-w were the first two discovered, then in 2016, the IRF4 (interferon regulatory factor 4) gene was announced after a study of 6,000 people living in five Latin American countries. However, it found that environmental factors controlled about 70% of cases of hair greying. In some cases, grey hair may be caused by thyroid deficiencies, Waardenburg syndrome or a vitamin B12 deficiency. At some point in the human life cycle, cells that are located in the base of the hair's follicles slow, and eventually stop producing pigment. Piebaldism is a rare autosomal dominant disorder of melanocyte development, which may cause a congenital white forelock. Greying of hair may be triggered by the accumulation of hydrogen peroxide and abnormally low levels of the enzyme catalase, which breaks down hydrogen peroxide and relieves oxidative stress in patients with vitiligo. Since vitiligo can cause eyelashes to turn white, the same process is believed to be involved in hair on the head (and elsewhere) due to aging. 
Stress Anecdotes report that stress, both chronic and acute, may induce achromotrichia in individuals earlier than it would otherwise have occurred. There is some evidence for chronic stress causing premature achromotrichia, but no definite link has been established. It is known that the stress hormone cortisol accumulates in human hair over time, but whether this has any effect on hair color has not yet been resolved. A 2020 paper, published in the journal Nature, reported that stress can cause hair to lose its pigment. An overactive immune response can destroy melanocytes and melanocyte stem cells in black-haired rats. When the animals were intentionally subjected to panic, their coats bleached. The next time the rats' coat grew, there were no melanocyte stem cells in these damaged follicles, so white hairs sprouted, and the color loss was permanent. UV damage Excessive exposure to the sun is the most common cause of structural damage of the hair shaft. Photochemical hair damage encompasses hair protein degradation and loss, as well as hair pigment deterioration. Photobleaching is common among people with European ancestry. In a recent 23andMe study, around 72 percent of participating customers with European ancestry reported that the sun lightens their hair. The company has also identified 48 genetic markers that may influence hair photobleaching. Medical conditions Albinism is a genetic abnormality in which little or no pigment is found in human hair, eyes, and skin. The hair is often white or pale blond. However, it can be red, darker blond, light brown, or rarely, even dark brown. Vitiligo is a patchy loss of hair and skin color that may occur as the result of an auto-immune disease. In a preliminary 2013 study, researchers treated the buildup of hydrogen peroxide, which causes this, with a light-activated pseudo-catalase. This produced significant media coverage suggesting that further investigation may someday lead to a general non-dye treatment for grey hair. Malnutrition is also known to cause hair to become lighter, thinner, and more brittle. Dark hair may turn reddish or blondish due to the decreased production of melanin. The condition is reversible with proper nutrition. Werner syndrome and pernicious anemia can also cause premature greying. A 2005 uncontrolled study demonstrated that people 50–70 years of age with dark eyebrows but grey hair are significantly more likely to have type II diabetes than those with both grey eyebrows and hair. Artificial factors A 1996 British Medical Journal study found that tobacco smoking was correlated with premature greying. Smokers were found to be four times more likely to begin greying prematurely, compared to nonsmokers. Grey hair may temporarily darken after inflammatory processes, after electron-beam-induced alopecia, and after some chemotherapy regimens. Much remains to be learned about the physiology of human greying. There are no special diets, nutritional supplements, vitamins, or proteins that have been proven to slow, stop, or in any way affect the greying process, although many have been marketed over the years. However, French scientists treating leukemia patients with imatinib, a drug used in treating cancer, noted an unexpected side effect: some of the patients' hair was restored to its pre-grey color. Changes after death The hair color of buried bodies can change. Hair contains a mixture of black-brown eumelanin and red-yellow pheomelanin.
Eumelanin is less chemically stable than pheomelanin and breaks down faster when oxidized. The color of hair changes faster under extreme conditions. It changes more slowly under dry oxidizing conditions (such as in burials in sand or in ice) than under wet reducing conditions (such as burials in wood or plaster coffins). Management The anti-cancer drug imatinib has recently been shown to reverse the greying process. However, it is expensive and has potentially severe and deadly side effects, so it is not practical for altering a person's hair color. Nevertheless, if the mechanism of action of imatinib on melanocyte stem cells can be discovered, it is possible that a safer and less expensive substitute drug might someday be developed. It is not yet known whether imatinib has an effect on catalase, or if its reversal of the greying process is due to something else. See also Premature greying of hair Canities subita References Human hair color Biological anthropology External signs of ageing
Greying of hair
[ "Biology" ]
1,693
[ "Senescence", "External signs of ageing" ]
14,958,673
https://en.wikipedia.org/wiki/Optogenetics
Optogenetics is a biological technique to control the activity of neurons or other cell types with light. This is achieved by expression of light-sensitive ion channels, pumps or enzymes specifically in the target cells. On the level of individual cells, light-activated enzymes and transcription factors allow precise control of biochemical signaling pathways. In systems neuroscience, the ability to control the activity of a genetically defined set of neurons has been used to understand their contribution to decision making, learning, fear memory, mating, addiction, feeding, and locomotion. In a first medical application of optogenetic technology, vision was partially restored in a blind patient with Retinitis pigmentosa. Optogenetic techniques have also been introduced to map the functional connectivity of the brain. By altering the activity of genetically labelled neurons with light and by using imaging and electrophysiology techniques to record the activity of other cells, researchers can identify the statistical dependencies between cells and brain regions. In a broader sense, the field of optogenetics also includes methods to record cellular activity with genetically encoded indicators. In 2010, optogenetics was chosen as the "Method of the Year" across all fields of science and engineering by the interdisciplinary research journal Nature Methods. In the same year an article on "Breakthroughs of the Decade" in the academic research journal Science highlighted optogenetics. History In 1979, Francis Crick suggested that controlling all cells of one type in the brain, while leaving the others more or less unaltered, is a real challenge for neuroscience. Crick speculated that a technology using light might be useful to control neuronal activity with temporal and spatial precision but at the time there was no technique to make neurons responsive to light. By the early 1990s LC Katz and E Callaway had shown that light could uncage glutamate. Heberle and Büldt in 1994 had already shown functional heterologous expression of a bacteriorhodopsin for light-activated ion flow in yeast. In 1995, Georg Nagel et al. and Ernst Bamberg tried the heterologous expression of microbial rhodopsins (also bacteriorhodopsin and also in a non-neural system, Xenopus oocytes) (Georg Nagel et al., 1995, FEBS Lett.) and showed light-induced current. The earliest genetically targeted method that used light to control rhodopsin-sensitized neurons was reported in January 2002, by Boris Zemelman and Gero Miesenböck, who employed Drosophila rhodopsin cultured mammalian neurons. In 2003, Zemelman and Miesenböck developed a second method for light-dependent activation of neurons in which single ionotropic channels TRPV1, TRPM8 and P2X2 were gated by photocaged ligands in response to light. Beginning in 2004, the Kramer and Isacoff groups developed organic photoswitches or "reversibly caged" compounds in collaboration with the Trauner group that could interact with genetically introduced ion channels. TRPV1 methodology, albeit without the illumination trigger, was subsequently used by several laboratories to alter feeding, locomotion and behavioral resilience in laboratory animals. However, light-based approaches for altering neuronal activity were not applied outside the original laboratories, likely because the easier to employ channelrhodopsin was cloned soon thereafter. 
Peter Hegemann, studying the light response of green algae at the University of Regensburg, had discovered photocurrents that were too fast to be explained by the classic g-protein-coupled animal rhodopsins. Teaming up with the electrophysiologist Georg Nagel at the Max Planck Institute in Frankfurt, they could demonstrate that a single gene from the alga Chlamydomonas produced large photocurrents when expressed in the oocyte of a frog. To identify expressing cells, they replaced the cytoplasmic tail of the algal protein with a fluorescent protein YFP, generating the first generally applicable optogenetic tool. They stated in the 2003 paper that "expression of ChR2 in oocytes or mammalian cells may be used as a powerful tool to increase cytoplasmic Ca2+ concentration or to depolarize the cell membrane, simply by illumination". Karl Deisseroth in the Bioengineering Department at Stanford published the notebook pages from early July 2004 of his initial experiment showing light activation of neurons expressing a channelrhodopsin. In August 2005, his laboratory staff, including graduate students Ed Boyden and Feng Zhang, in collaboration with Georg Nagel, published the first demonstration of a single-component optogenetic system, in neurons using the channelrhodopsin-2(H134R)-eYFP mutant from Georg Nagel, which is the first mutant of channelrhodopsin-2 since its functional characterization by Georg Nagel and Hegemann. Zhuo-Hua Pan of Wayne State University, researching how to restore sight to the blind, tried channelrhodopsin out in ganglion cells—the neurons in human eyes that connect directly to the brain. Pan's first observation of optical activation of retinal neurons with channelrhodopsin was in February 2004 according to Pan, five months before Deisseroth's initial observation in July 2004. Indeed, the transfected neurons became electrically active in response to light, and in 2005 Zhuo-Hua Pan reported successful in-vivo transfection of channelrhodopsin in retinal ganglion cells of mice, and electrical responses to photostimulation in retinal slice culture. This approach was eventually realized in a human patient by Botond Roska and coworkers in 2021. In April 2005, Susana Lima and Miesenböck reported the first use of genetically targeted P2X2 photostimulation to control the behaviour of an animal. They showed that photostimulation of genetically circumscribed groups of neurons, such as those of the dopaminergic system, elicited characteristic behavioural changes in fruit flies. In October 2005, Lynn Landmesser and Stefan Herlitze also published the use of channelrhodopsin-2 to control neuronal activity in cultured hippocampal neurons and chicken spinal cord circuits in intact developing embryos. In addition, they introduced for the first time vertebrate rhodopsin, a light-activated G protein coupled receptor, as a tool to inhibit neuronal activity via the recruitment of intracellular signaling pathways also in hippocampal neurons and the intact developing chicken embryo. The groups of Alexander Gottschalk and Georg Nagel made the first ChR2 mutant (H134R) and were first to use channelrhodopsin-2 for controlling neuronal activity in an intact animal, showing that motor patterns in the roundworm C. elegans could be evoked by light stimulation of genetically selected neural circuits (published in December 2005). In mice, controlled expression of optogenetic tools is often achieved with cell-type-specific Cre/loxP methods developed for neuroscience by Joe Z.
Tsien back in the 1990s to activate or inhibit specific brain regions and cell-types in vivo. In 2007, the labs of Boyden and Deisseroth (together with the groups of Gottschalk and Georg Nagel) simultaneously reported successful optogenetic inhibition of activity in neurons. In 2007, Georg Nagel and Hegemann's groups started the optogenetic manipulation of cAMP. In 2014, Avelar et al. reported the first rhodopsin-guanylyl cyclase gene from fungus. In 2015, Scheib et al. and Gao et al. characterized the activity of the rhodopsin-guanylyl cyclase gene. And Shiqiang Gao et al. and Georg Nagel, Alexander Gottschalk identified it as the first 8 TM rhodopsin. Description Optogenetics provides millisecond-scale temporal precision which allows the experimenter to keep pace with fast biological information processing (for example, in probing the causal role of specific action potential patterns in defined neurons). Indeed, to probe the neural code, optogenetics by definition must operate on the millisecond timescale to allow addition or deletion of precise activity patterns within specific cells in the brains of intact animals, including mammals (see Figure 1). By comparison, the temporal precision of traditional genetic manipulations (employed to probe the causal role of specific genes within cells, via "loss-of-function" or "gain of function" changes in these genes) is rather slow, from hours or days to months. It is important to also have fast readouts in optogenetics that can keep pace with the optical control. This can be done with electrical recordings ("optrodes") or with reporter proteins that are biosensors, where scientists have fused fluorescent proteins to detector proteins. Additionally, beyond its scientific impact optogenetics represents an important case study in the value of both ecological conservation (as many of the key tools of optogenetics arise from microbial organisms occupying specialized environmental niches), and in the importance of pure basic science as these opsins were studied over decades for their own sake by biophysicists and microbiologists, without involving consideration of their potential value in delivering insights into neuroscience and neuropsychiatric disease. Light-activated proteins: channels, pumps and enzymes The hallmark of optogenetics therefore is introduction of fast light-activated channels, pumps, and enzymes that allow temporally precise manipulation of electrical and biochemical events while maintaining cell-type resolution through the use of specific targeting mechanisms. Among the microbial opsins which can be used to investigate the function of neural systems are the channelrhodopsins (ChR2, ChR1, VChR1, and SFOs) to excite neurons and anion-conducting channelrhodopsins for light-induced inhibition. Indirectly light-controlled potassium channels have recently been engineered to prevent action potential generation in neurons during blue light illumination. Light-driven ion pumps are also used to inhibit neuronal activity, e.g. halorhodopsin (NpHR), enhanced halorhodopsins (eNpHR2.0 and eNpHR3.0, see Figure 2), archaerhodopsin (Arch), fungal opsins (Mac) and enhanced bacteriorhodopsin (eBR). Optogenetic control of well-defined biochemical events within behaving mammals is now also possible. 
Building on prior work fusing vertebrate opsins to specific G-protein coupled receptors a family of chimeric single-component optogenetic tools was created that allowed researchers to manipulate within behaving mammals the concentration of defined intracellular messengers such as cAMP and IP3 in targeted cells. Other biochemical approaches to optogenetics (crucially, with tools that displayed low activity in the dark) followed soon thereafter, when optical control over small GTPases and adenylyl cyclase was achieved in cultured cells using novel strategies from several different laboratories. Photoactivated adenylyl cyclases have been discovered in fungi and successfully used to control cAMP levels in mammalian neurons. This emerging repertoire of optogenetic actuators now allows cell-type-specific and temporally precise control of multiple axes of cellular function within intact animals. Hardware for light application Another necessary factor is hardware (e.g. integrated fiberoptic and solid-state light sources) to allow specific cell types, even deep within the brain, to be controlled in freely behaving animals. Most commonly, the latter is now achieved using the fiberoptic-coupled diode technology introduced in 2007, though to avoid use of implanted electrodes, researchers have engineered ways to inscribe a "window" made of zirconia that has been modified to be transparent and implanted in mice skulls, to allow optical waves to penetrate more deeply to stimulate or inhibit individual neurons. To stimulate superficial brain areas such as the cerebral cortex, optical fibers or LEDs can be directly mounted to the skull of the animal. More deeply implanted optical fibers have been used to deliver light to deeper brain areas. Complementary to fiber-tethered approaches, completely wireless techniques have been developed utilizing wirelessly delivered power to headborne LEDs for unhindered study of complex behaviors in freely behaving organisms. Expression of optogenetic actuators Optogenetics also necessarily includes the development of genetic targeting strategies such as cell-specific promoters or other customized conditionally-active viruses, to deliver the light-sensitive probes to specific populations of neurons in the brain of living animals (e.g. worms, fruit flies, mice, rats, and monkeys). In invertebrates such as worms and fruit flies some amount of all-trans-retinal (ATR) is supplemented with food. A key advantage of microbial opsins as noted above is that they are fully functional without the addition of exogenous co-factors in vertebrates. Technique The technique of using optogenetics is flexible and adaptable to the experimenter's needs. Cation-selective channelrhodopsins (e.g. ChR2) are used to excite neurons, anion-conducting channelrhodopsins (e.g. GtACR2) inhibit neuronal activity. Combining these tools into a single construct (e.g. BiPOLES) allows for both inhibition and excitation, depending on the wavelength of illumination. Introducing the microbial opsin into a specific subset of cells is challenging. A popular approach is to introduce an engineered viral vector that contains the optogenetic actuator gene attached to a specific promoter such as CAMKIIα, which is active in excitatory neurons. This allows for some level of specificity, preventing e.g. expression in glia cells. A more specific approach is based on transgenic "driver" mice which express Cre recombinase, an enzyme that catalyzes recombination between two lox-P sites, in a specific subset of cells, e.g. 
parvalbumin-expressing interneurons. By introducing an engineered viral vector containing the optogenetic actuator gene in between two lox-P sites, only the cells producing Cre recombinase will express the microbial opsin. This technique has allowed for multiple modified optogenetic actuators to be used without the need to create a whole line of transgenic animals every time a new microbial opsin is needed. After the introduction and expression of the microbial opsin, a computer-controlled light source has to be optically coupled to the brain region in question. Light-emitting diodes (LEDs) or fiber-coupled diode-pumped solid-state lasers (DPSS) are frequently used. Recent advances include the advent of wireless head-mounted devices that apply LEDs to the targeted areas and as a result, give the animals more freedom to move. Fiber-based approaches can also be used to combine optical stimulation and calcium imaging. This enables researchers to visualize and manipulate the activity of single neurons in awake behaving animals. It is also possible to record from multiple deep brain regions at the same using GRIN lenses connected via optical fiber to an externally positioned photodetector and photostimulator. Technical challenges Selective expression One of the main problems of optogenetics is that not all the cells in question may express the microbial opsin gene at the same level. Thus, even illumination with a defined light intensity will have variable effects on individual cells. Optogenetic stimulation of neurons in the brain is even less controlled as the light intensity drops exponentially from the light source (e.g. implanted optical fiber). It remains difficult to target opsin to defined subcellular compartments, e.g. the plasma membrane, synaptic vesicles, or mitochondria. Restricting the opsin to specific regions of the plasma membrane such as dendrites, somata or axon terminals provides a more robust understanding of neuronal circuitry. Mathematical modelling shows that selective expression of opsin in specific cell types can dramatically alter the dynamical behavior of the neural circuitry. In particular, optogenetic stimulation that preferentially targets inhibitory cells can transform the excitability of the neural tissue, affecting non-transfected neurons as well. Kinetics and synchronization The original channelrhodopsin-2 was slower closing than typical cation channels of cortical neurons, leading to prolonged depolarization and calcium influx. Many channelrhodopsin variants with more favorable kinetics have since been engineered.[55] [56] A difference between natural spike patterns and optogenetic activation is that pulsed light stimulation produces synchronous activation of expressing neurons, which removes the possibility of sequential activity in the stimulated population. Therefore, it is difficult to understand how the cells in the population affected communicate with one another or how their phasic properties of activation relate to circuit function. Optogenetic activation has been combined with functional magnetic resonance imaging (ofMRI) to elucidate the connectome, a thorough map of the brain's neural connections. Precisely timed optogenetic activation is used to calibrate the delayed hemodynamic signal (BOLD) fMRI is based on. Light absorption spectrum The opsin proteins currently in use have absorption peaks across the visual spectrum, but remain considerably sensitive to blue light. 
This spectral overlap makes it very difficult to combine opsin activation with genetically encoded indicators (GEVIs, GECIs, GluSnFR, synapto-pHluorin), most of which need blue light excitation. Opsins with infrared activation would, at a standard irradiance value, increase light penetration and augment resolution through reduction of light scattering. Spatial response Due to scattering, a narrow light beam to stimulate neurons in a patch of neural tissue can evoke a response profile that is much broader than the stimulation beam. In this case, neurons may be activated (or inhibited) unintentionally. Computational simulation tools are used to estimate the volume of stimulated tissue for different wavelengths of light. Applications The field of optogenetics has furthered the fundamental scientific understanding of how specific cell types contribute to the function of biological tissues such as neural circuits in vivo. On the clinical side, optogenetics-driven research has led to insights into restoring with light, Parkinson's disease and other neurological and psychiatric disorders such as autism, Schizophrenia, drug abuse, anxiety, and depression. An experimental treatment for blindness involves a channel rhodopsin expressed in ganglion cells, stimulated with light patterns from engineered goggles. Identification of particular neurons and networks Amygdala Optogenetic approaches have been used to map neural circuits in the amygdala that contribute to fear conditioning. One such example of a neural circuit is the connection made from the basolateral amygdala to the dorsal-medial prefrontal cortex where neuronal oscillations of 4 Hz have been observed in correlation to fear induced freezing behaviors in mice. Transgenic mice were introduced with channelrhodoposin-2 attached with a parvalbumin-Cre promoter that selectively infected interneurons located both in the basolateral amygdala and the dorsal-medial prefrontal cortex responsible for the 4 Hz oscillations. The interneurons were optically stimulated generating a freezing behavior and as a result provided evidence that these 4 Hz oscillations may be responsible for the basic fear response produced by the neuronal populations along the dorsal-medial prefrontal cortex and basolateral amygdala. Olfactory bulb Optogenetic activation of olfactory sensory neurons was critical for demonstrating timing in odor processing and for mechanism of neuromodulatory mediated olfactory guided behaviors (e.g. aggression, mating) In addition, with the aid of optogenetics, evidence has been reproduced to show that the "afterimage" of odors is concentrated more centrally around the olfactory bulb rather than on the periphery where the olfactory receptor neurons would be located. Transgenic mice infected with channel-rhodopsin Thy1-ChR2, were stimulated with a 473 nm laser transcranially positioned over the dorsal section of the olfactory bulb. Longer photostimulation of mitral cells in the olfactory bulb led to observations of longer lasting neuronal activity in the region after the photostimulation had ceased, meaning the olfactory sensory system is able to undergo long term changes and recognize differences between old and new odors. Nucleus accumbens Optogenetics, freely moving mammalian behavior, in vivo electrophysiology, and slice physiology have been integrated to probe the cholinergic interneurons of the nucleus accumbens by direct excitation or inhibition. 
Despite representing less than 1% of the total population of accumbal neurons, these cholinergic cells are able to control the activity of the dopaminergic terminals that innervate medium spiny neurons (MSNs) in the nucleus accumbens. These accumbal MSNs are known to be involved in the neural pathway through which cocaine exerts its effects, because decreasing cocaine-induced changes in the activity of these neurons has been shown to inhibit cocaine conditioning. The few cholinergic neurons present in the nucleus accumbens may prove viable targets for pharmacotherapy in the treatment of cocaine dependence Prefrontal cortex In vivo and in vitro recordings from the University of Colorado, Boulder Optophysiology Laboratory of Donald C. Cooper Ph.D. showing individual CAMKII AAV-ChR2 expressing pyramidal neurons within the prefrontal cortex that demonstrated high fidelity action potential output with short pulses of blue light at 20 Hz (Figure 1). Motor cortex In vivo repeated optogenetic stimulation in healthy animals was able to eventually induce seizures. This model has been termed optokindling. Piriform cortex In vivo repeated optogenetic stimulation of pyramidal cells of the piriform cortex in healthy animals was able to eventually induce seizures. In vitro studies have revealed a loss of feedback inhibition in the piriform circuit due to impaired GABA synthesis. Heart Optogenetics was applied on atrial cardiomyocytes to end spiral wave arrhythmias, found to occur in atrial fibrillation, with light. This method is still in the development stage. A recent study explored the possibilities of optogenetics as a method to correct for arrhythmias and resynchronize cardiac pacing. The study introduced channelrhodopsin-2 into cardiomyocytes in ventricular areas of hearts of transgenic mice and performed in vitro studies of photostimulation on both open-cavity and closed-cavity mice. Photostimulation led to increased activation of cells and thus increased ventricular contractions resulting in increasing heart rates. In addition, this approach has been applied in cardiac resynchronization therapy (CRT) as a new biological pacemaker as a substitute for electrode based-CRT. Lately, optogenetics has been used in the heart to defibrillate ventricular arrhythmias with local epicardial illumination, a generalized whole heart illumination or with customized stimulation patterns based on arrhythmogenic mechanisms in order to lower defibrillation energy. Spiral ganglion Optogenetic stimulation of the spiral ganglion in deaf mice restored auditory activity. Optogenetic application onto the cochlear region allows for the stimulation or inhibition of the spiral ganglion cells (SGN). In addition, due to the characteristics of the resting potentials of SGN's, different variants of the protein channelrhodopsin-2 have been employed such as Chronos, CatCh and f-Chrimson. Chronos and CatCh variants are particularly useful in that they have less time spent in their deactivated states, which allow for more activity with less bursts of blue light emitted. Additionally, using engineered red-shifted channels as f-Chrimson allow for stimulation using longer wavelengths, which decreases the potential risks of phototoxicity in the long term without compromising gating speed. The result being that the LED producing the light would require less energy and the idea of cochlear prosthetics in association with photo-stimulation, would be more feasible. 
Brainstem Optogenetic stimulation of a modified red-light excitable channelrhodopsin (ReaChR) expressed in the facial motor nucleus enabled minimally invasive activation of motoneurons effective in driving whisker movements in mice. One novel study employed optogenetics on the Dorsal Raphe Nucleus to both activate and inhibit dopaminergic release onto the ventral tegmental area. To produce activation transgenic mice were infected with channelrhodopsin-2 with a TH-Cre promoter and to produce inhibition the hyperpolarizing opsin NpHR was added onto the TH-Cre promoter. Results showed that optically activating dopaminergic neurons led to an increase in social interactions, and their inhibition decreased the need to socialize only after a period of isolation. Visual system Studying the visual system using optogenetics can be challenging. Indeed, the light used for optogenetic control may lead to the activation of photoreceptors, as a result of the proximity between primary visual circuits and these photoreceptors. In this case, spatial selectivity is difficult to achieve (particularly in the case of the fly optic lobe). Thus, the study of the visual system requires spectral separation, using channels that are activated by different wavelengths of light than rhodopsins within the photoreceptors (peak activation at 480 nm for Rhodopsin 1 in Drosophila). Red-shifted CsChrimson or bistable Channelrhodopsin are used for optogenetic activation of neurons (i.e. depolarization), as both allow spectral separation. In order to achieve neuronal silencing (i.e. hyperpolarization), an anion channelrhodopsin discovered in the cryptophyte algae species Guillardia theta (named GtACR1). can be used. GtACR1 is more light sensitive than other inhibitory channels such as the Halorhodopsin class of chlorid pumps and imparts a strong conductance. As its activation peak (515 nm) is close to that of Rhodopsin 1, it is necessary to carefully calibrate the optogenetic illumination as well as the visual stimulus. The factors to take into account are the wavelength of the optogenetic illumination (possibly higher than the activation peak of GtACR1), the size of the stimulus (in order to avoid the activation of the channels by the stimulus light) and the intensity of the optogenetic illumination. It has been shown that GtACR1 can be a useful inhibitory tool in optogenetic study of Drosophila's visual system by silencing T4/T5 neurons expression. These studies can also be led on intact behaving animals, for instance to probe optomotor response. Sensorimotor system Optogenetically inhibiting or activating neurons tests their necessity and sufficiency, respectively, in generating a behavior. Using this approach, researchers can dissect the neural circuitry controlling motor output. By perturbing neurons at various places in the sensorimotor system, researchers have learned about the role of descending neurons in eliciting stereotyped behaviors, how localized tactile sensory input and activity of interneurons alters locomotion, and the role of Purkinje cells in generating and modulating movement. This is a powerful technique for understanding the neural underpinnings of animal locomotion and movement more broadly. Precise temporal control of interventions The currently available optogenetic actuators allow for the accurate temporal control of the required intervention (i.e. inhibition or excitation of the target neurons) with precision routinely going down to the millisecond level. 
The temporal precision varies, however, across optogenetic actuators, and depends on the frequency and intensity of the stimulation. Experiments can now be devised where the light used for the intervention is triggered by a particular element of behavior (to inhibit the behavior), a particular unconditioned stimulus (to associate something to that stimulus) or a particular oscillatory event in the brain (to inhibit the event). This kind of approach has already been used in several brain regions: Hippocampus Sharp waves and ripple complexes (SWRs) are distinct high frequency oscillatory events in the hippocampus thought to play a role in memory formation and consolidation. These events can be readily detected by following the oscillatory cycles of the on-line recorded local field potential. In this way the onset of the event can be used as a trigger signal for a light flash that is guided back into the hippocampus to inhibit neurons specifically during the SWRs and also to optogenetically inhibit the oscillation itself. These kinds of "closed-loop" experiments are useful to study SWR complexes and their role in memory. Cellular biology/cell signaling pathways Analogously to how natural light-gated ion channels such as channelrhodopsin-2 allows optical control of ion flux, which is especially useful in neuroscience, natural light-controlled signal transduction proteins also allow optical control of biochemical pathways, including both second-messenger generation and protein-protein interactions, which is especially useful in studying cell and developmental biology. In 2002, the first example of using photoproteins from another organism for controlling a biochemical pathway was demonstrated using the light-induced interaction between plant phytochrome and phytochrome-interacting factor (PIF) to control gene transcription in yeast. By fusing phytochrome to a DNA-binding domain and PIF to a transcriptional activation domain, transcriptional activation of genes recognized by the DNA-binding domain could be induced by light. This study anticipated aspects of the later development of optogenetics in the brain, for example, by suggesting that "Directed light delivery by fiber optics has the potential to target selected cells or tissues, even within larger, more-opaque organisms." The literature has been inconsistent as to whether control of cellular biochemistry with photoproteins should be subsumed within the definition of optogenetics, as optogenetics in common usage refers specifically to the control of neuronal firing with opsins, and as control of neuronal firing with opsins postdates and uses distinct mechanisms from control of cellular biochemistry with photoproteins. Photosensitive proteins used in various cell signaling pathways In addition to phytochromes, which are found in plants and cyanobacteria, LOV domains(Light-oxygen-voltage-sensing domain) from plants and yeast and cryptochrome domains from plants are other natural photosensory domains that have been used for optical control of biochemical pathways in cells. In addition, a synthetic photosensory domain has been engineered from the fluorescent protein Dronpa for optical control of biochemical pathways. 
In photosensory domains, light absorption is either coupled to a change in protein-protein interactions (in the case of phytochromes, some LOV domains, cryptochromes, and Dronpa mutants) or a conformational change that exposes a linked protein segment or alters the activity of a linked protein domain (in the case of phytochromes and some LOV domains). Light-regulated protein-protein interactions can then be used to recruit proteins to DNA, for example to induce gene transcription or DNA modifications, or to the plasma membrane, for example to activate resident signaling proteins. CRY2 also clusters when active, so has been fused with signaling domains and subsequently photoactivated to allow for clustering-based activation. The LOV2 domain of Avena sativa(common oat) has been used to expose short peptides or an active protein domain in a light-dependent manner. Introduction of this LOV domain into another protein can regulate function through light induced peptide disorder. The asLOV2 protein, which optogenetically exposes a peptide, has also been used as a scaffold for several synthetic light induced dimerization and light induced dissociation systems (iLID and LOVTRAP, respectively). The systems can be used to control proteins through a protein splitting strategy. Photodissociable Dronpa domains have also been used to cage a protein active site in the dark, uncage it after cyan light illumination, and recage it after violet light illumination. Temporal control of signal transduction with light The ability to optically control signals for various time durations is being explored to elucidate how cell signaling pathways convert signal duration and response to different outputs. Natural signaling cascades are capable of responding with different outputs to differences in stimulus timing duration and dynamics. For example, treating PC12 cells with epidermal growth factor (EGF, inducing a transient profile of ERK activity) leads to cellular proliferation whereas introduction of nerve growth factor (NGF, inducing a sustained profile of ERK activity) leads to differentiation into neuron-like cells. This behavior was initially characterized using EGF and NGF application, but the finding has been partially replicated with optical inputs. In addition, a rapid negative feedback loop in the RAF-MEK-ERK pathway was discovered using pulsatile activation of a photoswitchable RAF engineered with photodissociable Dronpa domains. Optogenetic noise-photostimulation Professor Elias Manjarrez's research group introduced the Optogenetic noise-photostimulation. This is a technique that uses random noisy light to activate neurons expressing ChR2. An optimal level of optogenetic-noise photostimulation on the brain can increase the somatosensory evoked field potentials, the firing frequency response of pyramidal neurons to somatosensory stimulation, and the sodium current amplitude. Awards The powerful impact of optogenetic technology on brain research has been recognized by numerous awards to key players in the field. In 2010, Georg Nagel, Peter Hegemann and Ernst Bamberg were awarded the Wiley Prize in Biomedical Sciences and they were also among those awarded the Karl Heinz Beckurts Prize in 2010. In the same year, Karl Deisseroth was awarded the inaugural HFSP Nakasone Award for "his pioneering work on the development of optogenetic methods for studying the function of neuronal networks underlying behavior". 
In 2012, Bamberg, Deisseroth, Hegemann and Georg Nagel were awarded the Zülch Prize by the Max Planck Society, and Miesenböck was awarded the Baillet Latour Health Prize for "having pioneered optogenetic approaches to manipulate neuronal activity and to control animal behaviour." In 2013, Georg Nagel and Hegemann were among those awarded the Louis-Jeantet Prize for Medicine. Also that year, Bamberg, Boyden, Deisseroth, Hegemann, Miesenböck and Georg Nagel were jointly awarded The Brain Prize for "their invention and refinement of optogenetics." In 2017, Deisseroth was awarded the Else Kröner Fresenius Research Prize for "his discoveries in optogenetics and hydrogel-tissue chemistry, as well as his research into the neural circuit basis of depression." In 2018, the Inamori Foundation presented Deisseroth with the Kyoto Prize for "spearheading optogenetics" and "revolutionizing systems neuroscience research." In 2019, Bamberg, Boyden, Deisseroth, Hegemann, Miesenböck and Georg Nagel were awarded the Rumford Prize by the American Academy of Arts and Sciences in recognition of "their extraordinary contributions related to the invention and refinement of optogenetics." In 2020, Deisseroth was awarded the Heineken Prize for Medicine from the Royal Netherlands Academy of Arts and Sciences, for developing optogenetics and hydrogel-tissue chemistry. In 2020, Miesenböck, Hegemann and Georg Nagel jointly received the Shaw Prize in Life Science and Medicine. In 2021, Hegemann, Deisseroth and Dieter Oesterhelt received the Albert Lasker Award for Basic Medical Research. References Further reading External links Neuroscience Biological techniques and tools Cybernetics Control theory Brain–computer interface Neuroprosthetics Neural engineering Articles containing video clips Optics
Optogenetics
[ "Physics", "Chemistry", "Mathematics", "Biology" ]
7,650
[ "Neuroscience", "Applied and interdisciplinary physics", "Optics", "Dynamical systems", "Applied mathematics", "Control theory", " molecular", "nan", "Atomic", " and optical physics" ]
14,961,545
https://en.wikipedia.org/wiki/Iwasawa%20manifold
In mathematics, in the field of differential geometry, an Iwasawa manifold is a compact quotient of a 3-dimensional complex Heisenberg group by a cocompact, discrete subgroup. An Iwasawa manifold is a nilmanifold of real dimension 6. Iwasawa manifolds give examples where the first two terms E1 and E2 of the Frölicher spectral sequence are not isomorphic. As a complex manifold, such an Iwasawa manifold is an important example of a compact complex manifold which does not admit any Kähler metric. References Differential geometry Lie groups Homogeneous spaces Complex manifolds
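For concreteness, the definition can be written out with the usual explicit model; the Gaussian-integer lattice below is the standard example, and other cocompact discrete subgroups give other Iwasawa manifolds.

```latex
% The complex Heisenberg group G and the cocompact discrete subgroup \Gamma of
% matrices with Gaussian-integer entries; the Iwasawa manifold is the quotient M.
G = \left\{ \begin{pmatrix} 1 & z_1 & z_3 \\ 0 & 1 & z_2 \\ 0 & 0 & 1 \end{pmatrix}
      : z_1, z_2, z_3 \in \mathbb{C} \right\},
\qquad
\Gamma = \left\{ \text{the same matrices with } z_1, z_2, z_3 \in \mathbb{Z}[i] \right\},
\qquad
M = \Gamma \backslash G .
```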
Iwasawa manifold
[ "Physics", "Mathematics" ]
128
[ "Lie groups", "Mathematical structures", "Group actions", "Homogeneous spaces", "Space (mathematics)", "Topological spaces", "Algebraic structures", "Geometry", "Symmetry" ]
14,966,101
https://en.wikipedia.org/wiki/Coh-Metrix
Coh-Metrix is a computational tool that produces indices of the linguistic and discourse representations of a text. Developed by Arthur C. Graesser and Danielle S. McNamara, Coh-Metrix analyzes texts on many different features. Measurements Coh-Metrix can be used in many different ways to investigate the cohesion of the explicit text and the coherence of the mental representation of the text. "Our definition of cohesion consists of characteristics of the explicit text that play some role in helping the reader mentally connect ideas in the text" (Graesser, McNamara, & Louwerse, 2003). The definition of coherence is the subject of much debate. Theoretically, the coherence of a text is defined by the interaction between linguistic representations and knowledge representations. While coherence can be defined as characteristics of the text (i.e., aspects of cohesion) that are likely to contribute to the coherence of the mental representation, Coh-Metrix measurements provide indices of these cohesion characteristics. According to an empirical study, the Coh-Metrix L2 Reading Index performs significantly better than traditional readability formulas. See also L2 Syntactic Complexity Analyzer References External links An exposition of the report. Memphis.edu. Includes many detailed concepts under "discourse coherence" and "linguistic cohesion". Reading (process) Computational linguistics Applied linguistics Second language writing
Coh-Metrix
[ "Technology" ]
293
[ "Natural language and computing", "Computational linguistics" ]
2,341,503
https://en.wikipedia.org/wiki/Barometric%20light
Barometric light is a name for the light that is emitted by a mercury-filled barometer tube when the tube is shaken. The discovery of this phenomenon in 1675 revealed the possibility of electric lighting. The phenomenon and its explanation The earliest barometers were simply glass tubes that were closed at one end and filled with mercury. The tube was then inverted and its open end was submerged in a cup of mercury. The mercury then drained out of the tube until the pressure of the mercury in the tube—as measured at the surface of the mercury in the cup—equaled the atmosphere's pressure on the same surface. In order to produce barometric light, the glass tube must be very clean and the mercury must be pure. If the barometer is then shaken, a band of light will appear on the glass at the meniscus of the mercury whenever the mercury moves downward. When mercury contacts glass, the mercury transfers electrons to the glass. Whenever the mercury pulls free of the glass, these electrons are released from the glass into the surroundings, where they collide with gas molecules, causing the gas to glow—just as the collision of electrons and neon atoms causes a neon lamp to glow. History Barometric light was first observed in 1675 by the French astronomer Jean Picard: "Towards the year 1676, Monsieur Picard was transporting his barometer from the Observatory to Port Saint Michel during the night, [when] he noticed a light in a part of the tube where the mercury was moving; this phenomenon having surprised him, he immediately reported it to the sçavans, ... " The Swiss mathematician Johann Bernoulli studied the phenomenon while teaching at Groningen, the Netherlands, and in 1700 he demonstrated the phenomenon to the French Academy. After learning of the phenomenon from Bernoulli, the Englishman Francis Hauksbee investigated the subject extensively. Hauksbee showed that a complete vacuum was not essential to the phenomenon, for the same glow was apparent when mercury was shaken with air only partially rarefied, and that even without using the barometric tube, bulbs containing low-pressure gases could be made to glow via externally applied static electricity. The phenomenon was also studied by contemporaries of Hauksbee, including the Frenchman Pierre Polinière and a French mathematician, Gabriel-Philippe de la Hire, and subsequently by many others. References External links History of the development of the concept of the electric charge Lighting Luminescence Meteorological instrumentation and equipment Mercury (element) Electrostatics
Barometric light
[ "Chemistry", "Technology", "Engineering" ]
504
[ "Meteorological instrumentation and equipment", "Luminescence", "Molecular physics", "Measuring instruments" ]
2,342,451
https://en.wikipedia.org/wiki/Arithmetical%20set
In mathematical logic, an arithmetical set (or arithmetic set) is a set of natural numbers that can be defined by a formula of first-order Peano arithmetic. The arithmetical sets are classified by the arithmetical hierarchy. The definition can be extended to an arbitrary countable set A (e.g. the set of n-tuples of integers, the set of rational numbers, the set of formulas in some formal language, etc.) by using Gödel numbers to represent elements of the set and declaring a subset of A to be arithmetical if the set of corresponding Gödel numbers is arithmetical. A function f is called arithmetically definable if the graph of f is an arithmetical set. A real number is called arithmetical if the set of all smaller rational numbers is arithmetical. A complex number is called arithmetical if its real and imaginary parts are both arithmetical. Formal definition A set X of natural numbers is arithmetical or arithmetically definable if there is a first-order formula φ(n) in the language of Peano arithmetic such that each number n is in X if and only if φ(n) holds in the standard model of arithmetic. Similarly, a k-ary relation R(n₁, ..., n_k) is arithmetical if there is a formula φ(n₁, ..., n_k) such that R(n₁, ..., n_k) ↔ φ(n₁, ..., n_k) holds for all k-tuples (n₁, ..., n_k) of natural numbers. A function f : ℕ^k → ℕ is called arithmetical if its graph is an arithmetical (k+1)-ary relation. A set A is said to be arithmetical in a set B if A is definable by an arithmetical formula that has B as a set parameter. Examples The set of all prime numbers is arithmetical. Every recursively enumerable set is arithmetical. Every computable function is arithmetically definable. The set encoding the halting problem is arithmetical. Chaitin's constant Ω is an arithmetical real number. Tarski's indefinability theorem shows that the (Gödel numbers of the) set of true formulas of first-order arithmetic is not arithmetically definable. Properties The complement of an arithmetical set is an arithmetical set. The Turing jump of an arithmetical set is an arithmetical set. The collection of arithmetical sets is countable, but the sequence of arithmetical sets is not arithmetically definable. Thus, there is no arithmetical formula φ(n,m) that is true if and only if m is a member of the nth arithmetical predicate. In fact, such a formula would describe a decision problem for all finite Turing jumps, and hence belongs to 0^(ω), which cannot be formalized in first-order arithmetic, as it does not belong to the first-order arithmetical hierarchy. The set of real arithmetical numbers is countable, dense and order-isomorphic to the set of rational numbers. Implicitly arithmetical sets Each arithmetical set has an arithmetical formula that says whether particular numbers are in the set. An alternative notion of definability allows for a formula that does not say whether particular numbers are in the set but says whether the set itself satisfies some arithmetical property. A set Y of natural numbers is implicitly arithmetical or implicitly arithmetically definable if it is definable with an arithmetical formula that is able to use Y as a parameter. That is, if there is a formula θ(Z) in the language of Peano arithmetic with no free number variables and a new set parameter Z and set membership relation ∈ such that Y is the unique set Z such that θ(Z) holds. Every arithmetical set is implicitly arithmetical; if X is arithmetically defined by φ(n) then it is implicitly defined by the formula ∀n (n ∈ Z ↔ φ(n)). Not every implicitly arithmetical set is arithmetical, however.
In particular, the truth set of first-order arithmetic is implicitly arithmetical but not arithmetical. See also Arithmetical hierarchy Computable set Computable number Further reading Hartley Rogers Jr. (1967). Theory of recursive functions and effective computability. McGraw-Hill. Effective descriptive set theory Mathematical logic hierarchies Computability theory
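To make the formal definition concrete, here is one explicit first-order formula witnessing the first example above (that the set of prime numbers is arithmetical); this particular formula is just one of many that work.

```latex
% n is prime iff n > 1 and every factorization of n is trivial:
\varphi(n) \;\equiv\; (1 < n) \,\wedge\, \forall a \,\forall b \,\bigl( a \cdot b = n \rightarrow (a = 1 \vee b = 1) \bigr)
```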
Arithmetical set
[ "Mathematics" ]
844
[ "Computability theory", "Mathematical logic", "Mathematical logic hierarchies" ]
2,342,852
https://en.wikipedia.org/wiki/Aza-Diels%E2%80%93Alder%20reaction
The Aza-Diels–Alder reaction is a modification of the Diels–Alder reaction wherein a nitrogen replaces sp2 carbon. The nitrogen atom can be part of the diene or the dienophile. Mechanism The aza Diels-Alder (IDA) reaction may occur either by a concerted or stepwise process. The lowest-energy transition state for the concerted process places the imine lone pair (or coordinated Lewis acid) in an exo position. Thus, (E) imines, in which the lone pair and larger imine carbon substituent are cis, tend to give exo products. When the imine nitrogen is protonated or coordinated to a strong Lewis acid, the mechanism shifts to a stepwise, Mannich-Michael pathway. Attaching an electron-withdrawing group to the imine nitrogen increases the rate. The exo isomer usually predominates (particularly when cyclic dienes are used), although selectivities vary. Scope and limitations Stereoselective variants In many cases, cyclic dienes give higher diastereoselectivities than acyclic dienes. Use of amino-acid-based chiral auxiliaries, for instance, leads to good diastereoselectivities in reactions of cyclopentadiene, but not in reactions of acyclic dienes. Asymmetric variants Chiral auxiliaries have been employed on either the imino nitrogen or imino carbon to effect diastereoselection. In the enantioselective Diels–Alder reaction of an aniline, formaldehyde and a cyclohexenone catalyzed by (S)-proline even the diene is masked. In situ generated imines The imine is often generated in situ from an amine and formaldehyde. An example is the reaction of cyclopentadiene with benzylamine to an aza norbornene. The catalytic cycle starts with the reactions of the aromatic amine with formaldehyde to the imine and the reaction of the ketone with proline to the diene. The second step, an endo trig cyclisation, is driven to one of the two possible enantiomers (99% ee) because the imine nitrogen atom forms a hydrogen bond with the carboxylic acid group of proline on the Si face. Hydrolysis of the final complex releases the product and regenerates the catalyst. Tosylimines may be generated in situ from tosylisocyanate and aldehydes. Cycloadditions of these intermediates with dienes give single constitutional isomers, but proceed with moderate stereoselectivity. Lewis-acid catalyzed reactions of sulfonyl imines also exhibit moderate stereoselectivity. Simple unactivated imines react with hydrocarbon dienes only with the help of a Lewis acid; however, both electron-rich and electron-poor dienes react with unactivated imines when heated. Vinylketenes, for instance, afford dihydropyridones upon [4+2] cycloaddition with imines. Regio- and stereoselectivity are unusually high in reactions of this class of dienes. Vinylallenes react similarly in the presence of a Lewis acid, often with high diastereoselectivity. Acyliminium substrates Acyliminium ions also participate in cycloadditions. These cations are generated by removal of chloride from chloromethylated amides: The resulting acyl iminium cations serve as heterodienes as well as dienophile. Use in natural products synthesis The aza-Diels–Alder reaction has been applied to the synthesis of a number of alkaloid natural products. Danishefsky's diene is used to form a six-membered ring en route to phyllanthine. See also Oxo-Diels–Alder reaction References Cycloadditions Nitrogen heterocycle forming reactions Name reactions
Aza-Diels–Alder reaction
[ "Chemistry" ]
844
[ "Name reactions" ]
2,343,250
https://en.wikipedia.org/wiki/Dendrite%20%28metal%29
A dendrite in metallurgy is a characteristic tree-like structure of crystals growing as molten metal solidifies, the shape produced by faster growth along energetically favourable crystallographic directions. This dendritic growth has significant consequences for material properties. Formation Dendrites form in unary (one-component) systems as well as multi-component systems. The requirement is that the liquid (the molten material) be undercooled, also known as supercooled, below the freezing point of the solid. Initially, a spherical solid nucleus grows in the undercooled melt. As the sphere grows, the spherical morphology becomes unstable and its shape becomes perturbed. The solid shape begins to express the preferred growth directions of the crystal. This growth direction may be due to anisotropy in the surface energy of the solid–liquid interface, or to the ease of attachment of atoms to the interface on different crystallographic planes, or both (for an example of the latter, see hopper crystal). In metallic systems, interface attachment kinetics is usually negligible (for non-negligible cases, see dendrite (crystal)). The solid then attempts to minimize the area of those surfaces with the highest surface energy. The dendrite thus exhibits a sharper and sharper tip as it grows. If the anisotropy is large enough, the dendrite may present a faceted morphology. The microstructural length scale is determined by the interplay or balance between the surface energy and the temperature gradient (which drives the heat/solute diffusion) in the liquid at the interface. As solidification proceeds, an increasing number of atoms lose their kinetic energy, making the process exothermic. For a pure material, latent heat is released at the solid–liquid interface so that the temperature remains constant until the melt has completely solidified. The growth rate of the resultant crystalline substance will depend on how fast this latent heat can be conducted away. A dendrite growing in an undercooled melt can be approximated as a parabolic needle-like crystal that grows in a shape-preserving manner at constant velocity. Nucleation and growth determine the grain size in equiaxed solidification while the competition between adjacent dendrites decides the primary spacing in columnar growth. Generally, if the melt is cooled slowly, fewer new crystals nucleate than at large undercoolings. The dendritic growth will result in dendrites of a large size. Conversely, a rapid cooling cycle with a large undercooling will increase the number of nuclei and thus reduce the size of the resulting dendrites (and often lead to small grains). Smaller dendrites generally lead to higher ductility of the product. One application where dendritic growth and resulting material properties can be seen is the process of welding. Dendrites are also common in cast products, where they may become visible by etching of a polished specimen. As dendrites grow further into the liquid metal, the latent heat they release warms them and the surrounding melt; if the temperature rises enough, the dendrite arms can partially remelt. This reheating of the solidifying metal is called recalescence. Dendrites usually form under non-equilibrium conditions.
Computational modeling The first computational model of dendritic solidification was published by Kobayashi, who used a phase-field model to solve two coupled partial differential equations describing the evolution of the phase field $p(\mathbf{x},t)$ (with $p=0$ in the liquid phase and $p=1$ in the solid phase), and the temperature field, $T(\mathbf{x},t)$, for a pure material in two dimensions: $$\tau \frac{\partial p}{\partial t} = -\frac{\partial}{\partial x}\left(\epsilon \frac{\partial \epsilon}{\partial \theta}\frac{\partial p}{\partial y}\right) + \frac{\partial}{\partial y}\left(\epsilon \frac{\partial \epsilon}{\partial \theta}\frac{\partial p}{\partial x}\right) + \nabla\cdot\left(\epsilon^{2}\nabla p\right) + p(1-p)\left(p-\tfrac{1}{2}+m\right),$$ which is an Allen-Cahn equation with an anisotropic gradient energy coefficient: $$\epsilon = \bar{\epsilon}\left[1+\delta\cos\left(j\theta\right)\right],$$ where $\bar{\epsilon}$ is an average value of $\epsilon$, $\theta$ is the angle between the interface normal and the x-axis, and $\delta$ and $j$ are constants representing the strength and mode of anisotropy, respectively. The parameter $m$ describes the thermodynamic driving force for solidification, which Kobayashi defines for a supercooled melt as: $$m(T) = \frac{\alpha}{\pi}\arctan\left[\gamma\left(T_{e}-T\right)\right],$$ where $\alpha$ is a constant between 0 and 1, $\gamma$ is a positive constant, and $T_{e}$ is the dimensionless equilibrium temperature. The temperature has been non-dimensionalized such that the equilibrium temperature is $T_{e}=1$ and the initial temperature of the undercooled melt is $T=0$. The evolution equation for the temperature field is given by $$\frac{\partial T}{\partial t} = \nabla^{2}T + K\frac{\partial p}{\partial t}$$ and is simply the heat equation with a source term due to the evolution of latent heat upon solidification, where $K$ is a constant representing the latent heat normalized by the strength of the cooling. When this system is numerically evolved, random noise representing thermal fluctuations is introduced to the interface via the term $a\,\chi\,p(1-p)$, where $a$ is the magnitude of the noise and $\chi$ is a random number distributed uniformly on $[-\tfrac{1}{2},\tfrac{1}{2}]$. Application An application of dendritic growth in directional solidification is gas turbine engine blades which are used at high temperatures and must handle high stresses along the major axes. At high temperatures, grain boundaries are weaker than grains. In order to minimize the effect on properties, grain boundaries are aligned parallel to the dendrites. The first alloy used in this application was a nickel-based alloy (MAR M-200) with 12.5% tungsten, which accumulated in the dendrites during solidification. This resulted in blades with high strength and creep resistance extending along the length of the casting, giving improved properties compared to the traditionally-cast equivalent. See also Diana's Tree Whisker (metallurgy) References Metallurgy
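The Kobayashi equations above can be integrated with a simple explicit finite-difference scheme. The following is a minimal sketch assuming isotropic gradient energy (δ = 0, so ε reduces to the constant ε̄) and illustrative parameter values chosen only for demonstration; because the anisotropy cross-terms are dropped, this simplified version grows a roughly circular solid front rather than a true dendrite (restoring the θ-dependent ε is what produces the characteristic arms). None of the numerical values below are taken from the source.

```python
# Minimal explicit sketch of the (isotropic) Kobayashi phase-field model.
# All parameter values are illustrative assumptions, not from the source.
import numpy as np

N, dx, dt = 128, 0.03, 1e-4          # grid size, spacing, time step (assumed)
tau, eps = 3e-4, 0.01                # relaxation time, gradient coefficient (assumed)
K = 1.6                              # dimensionless latent heat (assumed)
alpha, gamma, T_e = 0.9, 10.0, 1.0   # driving-force parameters (assumed)
a_noise = 0.01                       # interface noise amplitude (assumed)

p = np.zeros((N, N))                 # phase field: 0 = liquid, 1 = solid
T = np.zeros((N, N))                 # dimensionless temperature (0 = undercooled melt)
p[N//2 - 2:N//2 + 2, N//2 - 2:N//2 + 2] = 1.0   # small solid seed in the centre

def laplacian(f):
    """Five-point Laplacian with periodic boundaries."""
    return (np.roll(f, 1, 0) + np.roll(f, -1, 0) +
            np.roll(f, 1, 1) + np.roll(f, -1, 1) - 4.0 * f) / dx**2

for step in range(2000):
    m = (alpha / np.pi) * np.arctan(gamma * (T_e - T))   # driving force m(T)
    chi = np.random.uniform(-0.5, 0.5, p.shape)          # thermal noise chi
    dp = dt / tau * (eps**2 * laplacian(p)
                     + p * (1.0 - p) * (p - 0.5 + m)
                     + a_noise * chi * p * (1.0 - p))
    T = T + dt * laplacian(T) + K * dp                   # heat equation + latent heat source
    p = np.clip(p + dp, 0.0, 1.0)
```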
Dendrite (metal)
[ "Chemistry", "Materials_science", "Engineering" ]
1,116
[ "Metallurgy", "Materials science", "nan" ]
2,343,811
https://en.wikipedia.org/wiki/Temperature%20measurement
Temperature measurement (also known as thermometry) describes the process of measuring a current temperature for immediate or later evaluation. Datasets consisting of repeated standardized measurements can be used to assess temperature trends. History Attempts at standardized temperature measurement prior to the 17th century were crude at best. For instance, in 170 AD, the physician Claudius Galenus mixed equal portions of ice and boiling water to create a "neutral" temperature standard. The modern scientific field has its origins in the work of Florentine scientists in the 1600s, including Galileo, who constructed devices able to measure relative changes in temperature, although these were also confounded by changes in atmospheric pressure. These early devices were called thermoscopes. The first sealed thermometer was constructed in 1654 by the Grand Duke of Tuscany, Ferdinand II. The development of today's thermometers and temperature scales began in the early 18th century, when Daniel Gabriel Fahrenheit adapted a thermometer (switching to mercury) and a scale, both originally developed by Ole Christensen Rømer. Fahrenheit's scale is still in use, alongside the Celsius and Kelvin scales. Technologies Many methods have been developed for measuring temperature. Most of these rely on measuring some physical property of a working material that varies with temperature. One of the most common devices for measuring temperature is the glass thermometer. This consists of a glass tube filled with mercury or some other liquid, which acts as the working fluid. Temperature increase causes the fluid to expand, so the temperature can be determined by measuring the volume of the fluid. Such thermometers are usually calibrated so that one can read the temperature simply by observing the level of the fluid in the thermometer. Another type of thermometer that is not widely used in practice, but is important from a theoretical standpoint, is the gas thermometer. Other important devices for measuring temperature include: Thermocouples Thermistors Resistance temperature detector (RTD) Pyrometer Langmuir probes (for electron temperature of a plasma) Infrared thermometer Other thermometers One must be careful when measuring temperature to ensure that the measuring instrument (thermometer, thermocouple, etc.) is really at the same temperature as the material that is being measured. Under some conditions heat from the measuring instrument can cause a temperature gradient, so the measured temperature is different from the actual temperature of the system. In such a case the measured temperature will vary not only with the temperature of the system, but also with the heat transfer properties of the system. The thermal comfort that humans, animals and plants experience depends on more than the temperature shown on a glass thermometer. Relative humidity levels in ambient air can induce more or less evaporative cooling. Measurement of the wet-bulb temperature normalizes this humidity effect. Mean radiant temperature also can affect thermal comfort. The wind chill factor makes the weather feel colder under windy conditions than under calm conditions, even though a glass thermometer shows the same temperature. Airflow increases the rate of heat transfer from or to the body, resulting in a larger change in body temperature for the same ambient temperature.
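As an illustration of how a resistance temperature detector reading is turned into a temperature, the sketch below inverts the Callendar–Van Dusen relation R(t) = R0(1 + At + Bt²) for the t ≥ 0 °C branch. The R0, A and B values are the commonly quoted IEC 60751 coefficients for a standard Pt100 element; they are stated here as assumptions rather than values taken from this article.

```python
# Sketch: converting a platinum RTD (Pt100) resistance to temperature for t >= 0 degC
# using the Callendar-Van Dusen relation R(t) = R0 * (1 + A*t + B*t**2).
# R0, A, B are the commonly quoted IEC 60751 values (assumed, not from this article).
import math

R0 = 100.0       # ohms at 0 degC
A = 3.9083e-3    # 1/degC
B = -5.775e-7    # 1/degC^2

def pt100_temperature(resistance_ohm: float) -> float:
    """Solve B*t**2 + A*t + (1 - R/R0) = 0 for t (degC), taking the physical root."""
    c = 1.0 - resistance_ohm / R0
    return (-A + math.sqrt(A * A - 4.0 * B * c)) / (2.0 * B)

print(pt100_temperature(100.0))   # ~0 degC
print(pt100_temperature(138.5))   # ~100 degC for an ideal Pt100
```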
The theoretical basis for thermometers is the zeroth law of thermodynamics, which postulates that for three bodies A, B and C, if A and B are at the same temperature and B and C are at the same temperature, then A and C are at the same temperature. B, in this scheme, is the thermometer. The practical basis of thermometry is the existence of triple point cells. Triple points are conditions of pressure, volume and temperature such that three phases are simultaneously present, for example solid, vapor and liquid. For a single component there are no degrees of freedom at a triple point and any change in the three variables results in one or more of the phases vanishing from the cell. Therefore, triple point cells can be used as universal references for temperature and pressure (see Gibbs phase rule). Under some conditions it becomes possible to measure temperature by direct use of Planck's law of black-body radiation. For example, the cosmic microwave background temperature has been measured from the spectrum of photons observed by satellites such as WMAP. In the study of the quark–gluon plasma through heavy-ion collisions, single-particle spectra sometimes serve as a thermometer. Non-invasive thermometry During recent decades, many thermometric techniques have been developed. The most promising and widespread non-invasive thermometric techniques in a biotech context are based on the analysis of magnetic resonance images, computerized tomography images and echotomography. These techniques allow temperature within tissues to be monitored without introducing a sensing element. In the field of reactive flows (e.g., combustion, plasmas), laser-induced fluorescence (LIF), CARS, and laser absorption spectroscopy have been exploited to measure temperature inside engines, gas turbines, shock tubes, synthesis reactors, etc. The capabilities of such optical techniques include rapid measurement (down to nanosecond timescales) as well as the ability not to perturb the subject of measurement (e.g., the flame or shock-heated gases). US (ASME) Standards B40.200-2008: Thermometers, Direct Reading and Remote Reading. PTC 19.3-1974(R2004): Performance test code for temperature measurement. Air temperature Standards The American Society of Mechanical Engineers (ASME) has developed two separate and distinct standards on temperature measurement, B40.200 and PTC 19.3. B40.200 provides guidelines for bimetallic-actuated, filled-system, and liquid-in-glass thermometers. It also provides guidelines for thermowells. PTC 19.3 provides guidelines for temperature measurement related to Performance Test Codes with particular emphasis on basic sources of measurement errors and techniques for coping with them. Satellite temperature measurement See also Timeline of temperature and pressure measurement technology Conversion of scales of temperature Color temperature Planck temperature Temperature data logger References External links Another contemporaneous survey of related material. A detailed contemporaneous survey of thermometric theory and thermometer design. A comparison of different measurement technologies Atmospheric thermodynamics Thermodynamics Medical tests
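The radiometric approach mentioned above (pyrometers, and the CMB example) amounts to inverting Planck's law. The sketch below infers a blackbody ("brightness") temperature from a spectral radiance measured at a single wavelength; the wavelength and radiance in the example are illustrative assumptions, and the physical constants are the exact 2019 SI values.

```python
# Sketch: inverting Planck's law to get a blackbody ("brightness") temperature from
# a spectral radiance measured at one wavelength.  Example values are assumptions.
import math

h = 6.62607015e-34    # Planck constant, J s (exact SI value)
c = 2.99792458e8      # speed of light, m/s (exact SI value)
k = 1.380649e-23      # Boltzmann constant, J/K (exact SI value)

def brightness_temperature(wavelength_m: float, radiance: float) -> float:
    """Temperature (K) of a blackbody emitting `radiance` (W m^-2 sr^-1 m^-1)
    at `wavelength_m` (m), from B(T) = (2hc^2/lam^5) / (exp(hc/(lam*k*T)) - 1)."""
    x = 2.0 * h * c**2 / (wavelength_m**5 * radiance)
    return (h * c / (wavelength_m * k)) / math.log(1.0 + x)

# Example: thermal-infrared radiance at 10 micrometres (assumed numbers);
# ~9.9e6 W m^-2 sr^-1 m^-1 corresponds to a blackbody near room temperature.
print(round(brightness_temperature(10e-6, 9.9e6), 1))   # ~300 K
```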
Temperature measurement
[ "Physics", "Chemistry", "Mathematics" ]
1,293
[ "Thermodynamics", "Dynamical systems" ]
2,344,177
https://en.wikipedia.org/wiki/Santa%20Susana%20Field%20Laboratory
The Santa Susana Field Laboratory (SSFL), formerly known as Rocketdyne, is a complex of industrial research and development facilities located in Southern California, in an unincorporated area of Ventura County in the Simi Hills between Simi Valley and Los Angeles. The site is located northwest of Hollywood and northwest of Downtown Los Angeles. Sage Ranch Park is adjacent on part of the northern boundary and the community of Bell Canyon is along the entire southern boundary. SSFL was used mainly for the development and testing of liquid-propellant rocket engines for the United States space program from 1949 to 2006, nuclear reactors from 1953 to 1980 and the operation of a U.S. government-sponsored liquid metals research center from 1966 to 1998. Throughout the years, about ten low-power nuclear reactors operated at SSFL (including the Sodium Reactor Experiment, the first reactor in the United States to generate electrical power for a commercial grid, and the first commercial power plant in the world to experience a partial core meltdown), in addition to several "critical facilities" that helped develop nuclear science and applications. At least four of the ten nuclear reactors had accidents during their operation. The reactors located on the grounds of SSFL were considered experimental, and therefore had no containment structures. The site ceased research and development operations in 2006. The years of rocket testing, nuclear reactor testing, and liquid metal research have left the site "significantly contaminated". Environmental cleanup is ongoing. Members of the public who live near the site have strongly urged a thorough cleanup of the site, citing cases of long-term illnesses, including cancer cases at rates they claim are higher than normal. Experts have said that there is insufficient evidence to identify an explicit link between cancer rates and radioactive contamination in the area. Introduction Since 1947 the Santa Susana Field Laboratory location has been used by a number of companies and agencies. The first was Rocketdyne, originally a division of North American Aviation (NAA), which developed a variety of pioneering, successful, and reliable liquid rocket engines. Some were used in the Navaho cruise missile, the Redstone rocket, the Thor and Jupiter ballistic missiles, early versions of the Delta and Atlas rockets, the Saturn rocket family, and the Space Shuttle Main Engine. The Atomics International division of North American Aviation used a separate and dedicated portion of the Santa Susana Field Laboratory to build and operate the first commercial nuclear power plant in the United States, as well as for the testing and development of compact nuclear reactors, including the first and only known nuclear reactor launched into Low Earth Orbit by the United States, the SNAP-10A. Atomics International also operated the Energy Technology Engineering Center for the U.S. Department of Energy at the site. The Santa Susana Field Laboratory includes sites identified as historic by the American Institute of Aeronautics and Astronautics and by the American Nuclear Society. In 1996, The Boeing Company became the primary owner and operator of the Santa Susana Field Laboratory and later closed the site.
Three California state agencies (Department of Toxic Substances Control, Department of Public Health Radiologic Health Branch, and the Los Angeles Regional Water Quality Control Board) and three federal agencies (Department of Energy, NASA, and EPA) have been overseeing a detailed investigation of environmental impacts from historical site operations since at least 1990. Concerns about the environmental impact of past nuclear energy and rocket test operations, and waste disposal practices, have inspired several lawsuits seeking payments from Boeing. Litigation and legislation have also attempted to change established remediation and decommissioning processes. Several interest groups (Committee to Bridge the Gap, Natural Resources Defense Council, Physicians for Social Responsibility - Los Angeles) and numerous others are actively involved with steering the ongoing environmental investigation. The Santa Susana Field Laboratory is the focus of diverse interests. Burro Flats Painted Cave, listed on the National Register of Historic Places, is located within the Santa Susana Field Laboratory boundaries, on a portion of the site owned by the U.S. government. The drawings within the cave have been termed "the best preserved Indian pictograph in Southern California." Several tributary streams to the Los Angeles River have headwater watersheds on the SSFL property, including Bell Creek (90% of SSFL drainage), Dayton Creek, Woolsey Canyon, and Runkle Creek. History SSFL was a United States government facility dedicated to the development and testing of nuclear reactors, powerful rockets such as the Delta II, and the systems that powered the Apollo missions. The location of SSFL was chosen in 1947 for its remoteness in order to conduct work that was considered too dangerous and too noisy to be performed in more densely populated areas. In subsequent years, the Southern California population grew, along with housing developments surrounding the area. The site is divided into four production and two buffer areas (Area I, II, III, and IV, and the northern and southern buffer zones). Areas I through III were used for rocket testing, missile testing, and munitions development. Area IV was used primarily for nuclear reactor experimentation and development. Laser research for the Strategic Defense Initiative (popularly known as "Star Wars") was also conducted in Area IV. Rocket engine development Research, development and testing of rocket engines were conducted on a regular basis in Area II of the SSFL from the mid-1950s through the early 1980s. These activities were conducted by the U.S. Army, Air Force, and NASA. Subsequently, occasional testing took place until 2006. North American Aviation (NAA) began its development of liquid propellant rocket engines after the end of WWII. The Rocketdyne division of NAA, which came into being under its own name in the mid-1950s, designed and tested several rocket engines at the facility. They included engines for the Army's Redstone (an advanced short-range version of the German V-2) and the Army Jupiter intermediate-range ballistic missile (IRBM), as well as the Air Force's counterpart IRBM, the Thor. Also included among those developed there were engines for the Atlas Intercontinental Ballistic Missile (ICBM), as well as the twin combustion chamber alcohol/liquid oxygen booster engine for the Navaho, a large, intercontinental cruise missile that never became operational.
Later, Rocketdyne designed and tested the J-2 liquid oxygen/hydrogen engine, which was used on the second and third stages of the Saturn V launch rocket developed for the moon-bound Project Apollo mission. While the J-2 was tested at the facility, Rocketdyne's huge F-1 engine for the first stage of the Saturn V was tested in the Mojave Desert near Edwards Air Force Base. This was due to safety and noise considerations, since SSFL was too close to populated areas. NASA conducted Space Shuttle Main Engine tests at SSFL from 1974 to 1988. Nuclear and energy research and development The Atomics International Division of North American Aviation used SSFL Area IV as the site of the United States' first commercial nuclear power plant and the testing and development of the SNAP-10A, the first nuclear reactor launched into outer space by the United States. Atomics International also operated the Energy Technology Engineering Center at the site for the U.S. government. As overall interest in nuclear power declined, Atomics International made a transition to non-nuclear energy-related projects, such as coal gasification, and gradually ceased designing and testing nuclear reactors. Atomics International eventually was merged with the Rocketdyne division in 1978. Sodium reactor experiment The Sodium Reactor Experiment (SRE) was an experimental nuclear reactor that operated at the site from 1957 to 1964 and was the first commercial power plant in the world to experience a partial core meltdown. There was a decades-long cover-up of the incident by the U.S. Department of Energy. The operation predated environmental regulation, so early disposal techniques are not recorded in detail. Thousands of pounds of sodium coolant from the time of the meltdown are not yet accounted for. The reactor and support systems were removed in 1981 and the building torn down in 1999. The 1959 sodium reactor incident was chronicled on History Channel's program Engineering Disasters 19. In August 2009, on the 50th anniversary of the SRE accident, the Department of Energy hosted a day-long public workshop for the community, employees, and retirees. The workshop featured three experts – Dr. Paul Pickard of DOE's Sandia National Laboratories, Dr. Thomas Cochran of the Natural Resources Defense Council, and Dr. Richard Denning of Ohio State University – as well as a Q&A and discussion. All three experts agreed that there was not significant public harm from the release of radioactive noble gases, but held conflicting views about the amounts and health harms of other radioactive fission products potentially released. Energy Technology Engineering Center The Energy Technology Engineering Center (ETEC) was a government-owned, contractor-operated complex of industrial facilities located within Area IV of the Santa Susana Field Laboratory. The ETEC specialized in non-nuclear testing of components which were designed to transfer heat from a nuclear reactor using liquid metals instead of water or gas. The center operated from 1966 to 1998. The ETEC site has been closed and is now undergoing building removal and environmental remediation by the U.S. Department of Energy.
Accidents and site contamination Nuclear reactors Throughout the years, approximately ten low-power nuclear reactors operated at SSFL, in addition to several "critical facilities": a sodium burn pit in which sodium-coated objects were burned in an open pit; a plutonium fuel fabrication facility; a uranium carbide fuel fabrication facility; and purportedly the largest "Hot Lab" facility in the United States at the time. (A hot lab is a facility used for remotely handling or machining radioactive material.) Irradiated nuclear fuel from other Atomic Energy Commission (AEC) and Department of Energy (DOE) facilities from around the country was shipped to SSFL to be decladded and examined. The hot lab suffered a number of fires involving radioactive materials. For example, in 1957, a fire in the hot cell "got out of control and ... massive contamination" resulted. At least four of the ten nuclear reactors suffered accidents: 1) The AE6 reactor experienced a release of fission gases in March 1959. 2) In July 1959, the SRE experienced a power excursion and partial meltdown that released 28 curies of radioactive noble gases. According to DOE, the release resulted in a maximum off-site exposure of 0.099 millirem and an exposure of 0.018 millirem for the nearest residential building, which is well within current limits. 3) In 1964, the SNAP8ER experienced damage to 80% of its fuel. 4) In 1969 the SNAP8DR experienced similar damage to one-third of its fuel. A radioactive fire occurred in 1971, involving combustible primary reactor coolant (NaK) contaminated with mixed fission products. Sodium burn pits The sodium burn pit, an open-air pit for cleaning sodium-contaminated components, was also contaminated by the burning of radioactively and chemically contaminated items in it, in contravention of safety requirements. In an article in the Ventura County Star, James Palmer, a former SSFL worker, was interviewed. The article notes that "of the 27 men on Palmer's crew, 22 died of cancers." On some nights Palmer returned home from work and kissed "his wife [hello], only to burn her lips with the chemicals he had breathed at work." The report also noted that "During their breaks, Palmer's crew would fish in one of three ponds ... The men would use a solution that was 90 percent hydrogen peroxide to neutralize the contamination. Sometimes, the water was so polluted it bubbled. The fish died off." Palmer's interview ended with: "They had seven wells up there, water wells, and every damn one of them was contaminated," Palmer said, "It was a horror story." In 2002, a Department of Energy (DOE) official described typical waste disposal procedures used by Field Lab employees in the past. Workers would dispose of barrels filled with radioactive sodium by dumping them in a pond and then shooting the barrels with rifles so that they would explode and release their contents into the air. Since then, the pit has been remediated by having 22,000 cubic yards of soil removed down to bedrock. On 26 July 1994, two scientists, Otto K. Heiney and Larry A. Pugh, were killed when the chemicals they were illegally burning in open pits exploded. After a grand jury investigation and FBI raid on the facility, three Rocketdyne officials pleaded guilty in June 2004 to illegally storing explosive materials. The jury deadlocked on the more serious charges related to illegal burning of hazardous waste.
At trial, a retired Rocketdyne mechanic testified as to what he witnessed at the time of the explosion: "I assumed we were burning waste," Lee Wells testified, comparing the process used on 21 and 26 July 1994, to that once used to legally dispose of leftover chemicals at the company's old burn pit. As Heiney poured the chemicals for what would have been the third burn of the day, the blast occurred, Wells said. "[The background noise] was so loud I didn't hear anything ... I felt the blast and I looked down and my shirt was coming apart." When he realized what had occurred, Wells said, "I felt to see if I was all there ... I knew I was burned but I didn't know how bad." Wells suffered second- and third-degree burns to his face, arms and stomach. 2018 Woolsey fire The 2018 Woolsey Fire began at SSFL and burned about 80% of the site. After the fire, the Los Angeles County Department of Public Health found "no discernible level of radiation in the tested area" and the California Department of Toxic Substances Control, which is overseeing cleanup of the site, said in an interim report that "previously handled radioactive and hazardous materials were not affected by the fire." Bob Dodge, President of Physicians for Social Responsibility-Los Angeles, said "When it burns and becomes airborne in smoke and ash, there is real possibility of heightened exposure for area residents." In 2019, Risk Assessment Corporation (RAC) conducted soil sampling surrounding the SSFL and performed source term estimation, atmospheric transport, and deposition modeling. The study reports, published in 2023, concluded: "Air measurement data collected during the Woolsey Fire, along with atmospheric dispersion modeling and an offsite soil sampling program designed specifically to look for impacts from the fire, showed no evidence of SSFL impact in offsite soils because of the Woolsey Fire. No anthropogenic radionuclides were measured at levels above those expected from global fallout. The soil sampling confirmed that no detectable levels of SSFL-derived radionuclides migrated from SSFL at the locations sampled because of the Woolsey Fire or from past operations of the SSFL." In 2020, the California Department of Toxic Substances Control (DTSC) stated in their final report that the fire did not cause contaminants to be released from the site into Simi Valley and other neighboring communities and that the risk from smoke exposure during the fire was not higher than what is normally associated with wildfire. In 2021, a study which collected 360 samples of dust, ash, and soils from homes and public lands three weeks after the fire found that most samples were at normal levels ("Data did not support a finding of widespread deposition of radioactive particles.") but that two locations "contained high activities of radioactive isotopes associated with the Santa Susana Field Laboratory." Medical claims In October 2006, the Santa Susana Field Laboratory Advisory Panel, made up of independent scientists and researchers from around the United States, concluded that based on available data and computer models, contamination at the facility resulted in an estimated 260 cancer-related deaths. The report also concluded that the SRE meltdown caused the release of more than 458 times the amount of radioactivity released by the Three Mile Island accident.
While the nuclear core of the SRE released roughly one-tenth as much radiation as the TMI incident, the lack of proper containment such as concrete structures caused this radiation to be released into the surrounding environment. The radiation released by the core at TMI was largely contained. According to studies conducted by Hal Morgenstern between 1988 and 2002, residents living closer to the laboratory are 60% more likely to be diagnosed with certain cancers than residents living farther away, though Morgenstern said that the lab is not necessarily the cause. Cleanup Standards During its years of rocket engine tests and nuclear research and operations, SSFL’s soil became contaminated with chemicals and radionuclides. Several accidents occurred in nuclear facilities, including the 1959 SRE core damage accident (see section on the Sodium Reactor Experiment). In addition, groundwater under SSFL is contaminated (principally with the solvent TCE) following some 30,000 rocket engine tests. Extensive characterization has been completed for chemicals and radionuclides. The majority of SSFL buildings and facilities have been decommissioned and removed, and numerous interim soil cleanups have been conducted. DTSC leads site cleanup involving responsible parties (Boeing, DOE, and NASA), agencies (DTSC, LARWQCB, CDPH), and other stakeholders (activist organizations, community members, state and federal legislators, and the media). Cleanup standards and remedial options (remedy selection) continue to be debated and litigated. DTSC’s Final Program Environmental Impact Report (2023) estimates that soil cleanup will take another 15 years. The following summarizes, in generally chronological order, key events related to cleanup standards for both land and building structures, and associated remedial options. 1996 DOE and CDHS Approve Boeing’s Radiological Cleanup Standards In March 1996, Rockwell proposed radiological cleanup standards for soil and buildings at SSFL. CDHS approved these standards in August 1996. DOE approved these standards in September 1996. Subsequently, Boeing issued final cleanup standards in February 1999. The soil cleanup goal was based on a dose rate of 15 mrem/y above background (300 mrem/y). This was consistent with (and less than) NRC’s future 25 mrem/y License Termination Rule and USEPA’s proposed 15 mrem/y dose-based goal for CERCLA remediation sites developed during the late 1990s. In May 1999, Senator Feinstein sent a series of letters to the Clinton Administration expressing concerns about nuclear decommissioning cleanup standards at SSFL. In June 1999, Boeing documented the basis for cleanup standards in use at SSFL, which were identical to standards used in the rest of the U.S. 2001 CDHS Adopts NRC’s Decommissioning Standards In 2001, the California Department of Health Services (CDHS) conducted a public hearing proposing to adopt by reference the Nuclear Regulatory Commission’s 10 CFR 20 Subpart E, otherwise known as the License Termination Rule, which would codify the federal cleanup standard of 25 mrem/y. California, being an Agreement State, was obligated to utilize nuclear regulations consistent with federal NRC regulations.
2002 CBG Sues CDHS In March 2002, the Committee to Bridge the Gap (CBG), the Southern California Federation of Scientists (SCFS) and the Physicians for Social Responsibility - Los Angeles (PSR-LA), sued CDHS, arguing that CDHS cannot adopt 10 CFR 20 Subpart E, and should comply with CEQA and the California APA, conduct an Environmental Impact Report (EIR) and conduct public hearings before adopting safe dose-based decommissioning standards. In April 2002 and June 2002, Judge Ohanesian concurred with plaintiffs’ complaint. As of January 2024, twenty-two years later, CDHS (now CDPH) has ignored the Judge’s Order and still does not have a dose-based decommissioning standard or any numerical criteria for license termination of nuclear or radiological facilities. 2003 DOE Environmental Assessment In March 2003, DOE issued an Environmental Assessment (EA) that proposed a radiological cleanup standard of 15 mrem/y, which was safe and protective of public health, consistent with EPA’s one-time draft dose-based standards and more restrictive than the NRC’s 25 mrem/y dose-based standard. 2004 NRDC Sues DOE In September 2004, NRDC, CBG and the City of Los Angeles sued DOE claiming that the 2003 EA had violated the National Environmental Policy Act (NEPA), the Comprehensive Environmental Response, Compensation and Liability Act (CERCLA) and the Endangered Species Act (ESA). The lawsuit claimed that a full Environmental Impact Statement (EIS) should have been performed prior to selecting a soil cleanup remedy. In May 2007, US District Court Judge Samuel Conti found in favor of the plaintiffs, stating that DOE had violated NEPA and should prepare a more detailed Environmental Impact Statement (EIS). In November 2018, DOE issued the final EIS, over 11 years after Judge Conti’s Order. As of January 2024, more than five years later, DOE has yet to issue a Record of Decision (ROD) on a soil cleanup standard for radionuclides and chemicals in Area IV. 2007 Technical Feasibility of Detecting Radionuclide Contamination In March 2007, Boeing issued a paper, utilizing EPA data, arguing that detection of radionuclides at a 10−6 risk level for an agricultural land use scenario was technically infeasible. This was prepared in response to initial California Senate hearings on SB 990, which would become California law nine months later (see below). 2007 Consent Order for Corrective Action In August 2007, DTSC, Boeing, DOE and NASA signed a Consent Order for Corrective Action, outlining planning, risk assessments and schedules for remediation at SSFL. The Consent Order was focused exclusively on chemical remediation of soil and groundwater. It was silent on radiological remediation and nuclear decommissioning. The Consent Order established a timeline for site cleanup to be completed by 2017. 2007 Radiological Release Process In September 2007, Boeing issued “Radiological Release Process - Process for the Release of Land and Facilities for (Radiologically) Unrestricted Use”, which described the key steps in a generic decommissioning process typical of that used elsewhere in the United States. 2007 SB 990 In October 2007, SB 990 (Kuehl) was passed in the California Senate; it mandated an agricultural risk-based cleanup standard for chemicals and radionuclides and transferred regulatory authority for radiological cleanup at SSFL from CDHS and DOE to DTSC. SB 990 became law on January 1, 2008.
In October 2007, Boeing and Governor Schwarzenegger announced the intent to transfer SSFL to the State of California as open space parkland (following completion of remediation), along with an agreement from State Senator Sheila Kuehl that she would amend SB 990 to withdraw requirements for agricultural land use and DTSC land transfer approval. In January 2008, this agreement fell through, following objections by other parties (CBG, NRDC, Sierra Club, PSR-LA, SCFS, etc.). These parties also objected to NPL listing by EPA since it would have taken control of cleanup out of the hands of DTSC (which would require cleanup-to-background in the future 2010 AOC) and given it to EPA (which would implement a CERCLA risk-based cleanup). Boeing remained committed to the future of SSFL as open space, as evidenced by the April 2017 conservation easement recorded with the North American Land Trust (NALT) to permanently preserve and protect Boeing’s 2,400 acres at the Santa Susana site. In November 2009, Boeing sued DTSC over SB 990, following months of unsuccessful negotiations between DTSC, Boeing, DOE and NASA attempting to incorporate the requirements of SB 990 into the 2007 Consent Order. In April 2011, Judge John Walter of the United States District Court (Central District of California) issued an order in favor of Boeing, stating, “SB 990 is declared invalid and unconstitutional in its entirety under the Supremacy Clause of the United States Constitution” and “DTSC is hereby enjoined from enforcing or implementing SB 990.” In September 2014, the United States Court of Appeals (Ninth Circuit) upheld and affirmed the lower Court’s judgement. 2010 AOCs Perhaps in anticipation of losing the SB 990 lawsuit to Boeing, in December 2010, DTSC “encouraged” DOE and NASA to sign two identical Administrative Orders on Consent (AOCs) in which both RPs agreed to (1) clean up to background, (2) dispense with EPA’s CERCLA risk assessment guidelines, (3) define soil to include building structures, and (4) send all soil (and structures) that exceed background radionuclide levels to an out-of-state licensed low-level radioactive waste disposal facility. Boeing had refused to negotiate or sign its own AOC, being involved in litigation with the State over SB 990. 2010-2013 Boeing Building Demolition Between 2010 and 2013, Boeing demolished 40 remaining Boeing-owned non-radiological buildings in Areas I, III and IV based on DTSC-approved procedures. Subsequent proposals in 2013 to demolish 6 remaining released-for-unrestricted-use former radiological buildings in Area IV met with resistance. In August 2013, the Physicians for Social Responsibility - Los Angeles (PSR-LA), plus others, sued the DTSC, CDPH and Boeing, alleging that demolition debris from these buildings was LLRW and should be disposed of out of state at a licensed low-level radioactive waste disposal facility. Five years later, in November 2018, the Superior Court of California found for the defendants. Five years later, in May 2023, the California Appeals Court reaffirmed the lower Court’s decision, denying plaintiffs’ petition. Subsequently, plaintiffs petitioned the California Supreme Court to review the case. The California Supreme Court denied the petition for review. 2020 DOE Building Demolition In May 2020, DTSC and DOE signed an Order on Consent for Interim Response Action at the Radioactive Material Handling Facility (RMHF) Complex.
The Order on Consent required all demolition debris to be disposed of outside the State of California at a licensed LLRW or MLLRW disposal facility or a DOE-authorized LLRW or MLLRW disposal facility. In October 2020, DTSC and DOE signed an Amendment to Order on Consent for Interim Response Action at the Radioactive Materials Handling Facility (RMHF) Complex. The title was misleading since the agreement had nothing to do with the RMHF, but stated requirements for the demolition and disposal of eight remaining DOE-owned, non-RMHF facilities. These eight buildings included two that had been surveyed, confirming that structures to be demolished met all federal and state cleanup standards; two buildings that had been decommissioned and released for unrestricted use by DOE; and four buildings that had no history of radiological use, but had nevertheless been surveyed and confirmed to be “indistinguishable from background.” Nevertheless, “out of an abundance of caution,” the Amendment caused all demolition debris from all eight buildings to be disposed of outside the State of California at a licensed MLLRW disposal facility. NASA Building Demolition NASA, in contrast to Boeing and DOE, appeared to have escaped the attention of DTSC and their partners, and was not required to dispose of building debris at a licensed LLRW disposal facility “out of an abundance of caution.” 2018 DOE Environmental Impact Statement (EIS) In January 2017, DOE issued its Draft SSFL Area IV Environmental Impact Statement. In November 2018, DOE issued its Final Environmental Impact Statement, eleven years after it was ordered by Judge Conti in 2007. DOE’s preferred alternative for remediation of soils is the Conservation of Natural Resources, Open Space Scenario. DOE identified this preferred alternative because it would be consistent with the risk assessment approach typically used at other DOE sites, other California Department of Toxic Substances Control (DTSC) regulated sites, and U.S. Environmental Protection Agency CERCLA sites, which accounts for the specific open-space recreational future land use of the site. Use of a risk assessment approach would be consistent with the Grant Deeds of Conservation Easement and Agreements that commit Boeing’s SSFL property, including Area IV and the NBZ, to remaining as open space. This scenario would use a CERCLA risk assessment approach that would be protective of human health and the environment. This does not comply with the DTSC 2010 AOC “cleanup to background” mandate. DOE and DTSC have yet to negotiate a Record of Decision (ROD) for soils. 2014-2020 NASA Environmental Impact Statement (EIS) In March 2014, NASA issued its Final Environmental Impact Statement for Proposed Demolition and Environmental Cleanup Activities at Santa Susana Field Laboratory. In July 2020, NASA issued its Final Supplemental EIS for Soil Cleanup Activities. In September 2020, NASA issued its Record of Decision (ROD) for its Supplemental EIS for soil cleanup. The ROD identified Alternative C, Suburban Residential Cleanup, as the Agency-Preferred Alternative. This does not comply with the DTSC 2010 AOC “cleanup to background” mandate. NASA recognizes the need to take no action until DTSC issues its ROD based on its Program Environmental Impact Report (PEIR). 2017-2023 DTSC Program Environmental Impact Report (PEIR) In September 2017, DTSC issued its Draft Program Environmental Impact Report for the Santa Susana Field Laboratory.
In June 2023, following community input, DTSC issued its Final Program Environmental Impact Report for the Santa Susana Field Laboratory. DTSC stated that the PEIR was not a decision document (i.e. ROD), but nevertheless made it clear that it still supports the 2010 AOC requirements to clean up radionuclides and chemicals to background, which is in conflict with DOE’s and NASA’s preferred alternatives in their respective Final EISs. Curiously, DTSC also issued, in June 2023, a revised version of its draft PEIR, with deletions and additions. It was not immediately obvious why this was necessary in addition to the Final PEIR. 2022 DTSC-Boeing Settlement Agreement In May 2022, DTSC and Boeing signed a Settlement Agreement (SA) including a commitment by Boeing to clean up chemicals to a residential risk-based garden standard (100% consumption of garden-grown fruits and vegetables) and clean up radionuclides to background in its areas of responsibility, namely Areas I and III and the southern buffer zone. The Settlement Agreement was criticized by community groups and local governments for being done in secret, without public input; they also allege that it weakened the cleanup standards. 2023 Surface Water Although Boeing, DOE and NASA are separately responsible for remediation of soil and groundwater in Areas I/III, Area IV, and Area II, respectively, Boeing alone is responsible for management and treatment of surface water (i.e. storm water) for the entire SSFL site. The SSFL National Pollutant Discharge Elimination System (NPDES) Permit regulates discharge of surface water when, and if, it flows offsite. Radionuclide limits are identical to the EPA’s drinking water supplier limits. These are the same limits that regulate the water in household faucets. Chemical NPDES limits are, in general, even lower than EPA’s drinking water supplier limits, and are often based on ecological risk limits. The NPDES Permit has been in existence for decades. The current Permit was issued in October 2023. In August 2022, a Memorandum of Understanding (MOU) was signed between Boeing and LARWQCB that describes future storm water management requirements following completion of SSFL soil remediation. Community Involvement Community Advisory Group A petition to form a "CAG" or community advisory group was denied in March 2010 by DTSC. In 2012, the current CAG's petition was approved. The SSFL CAG recommends that all responsible parties execute a risk-based cleanup to EPA's suburban residential standard that will minimize excavation, soil removal and backfill and thus reduce danger to public health and functions of surrounding communities. However, the SSFL Panel believes the CAG has a conflict of interest, as it is funded in large part by a grant from the U.S. Department of Energy, and three of its members are former employees of Boeing or its predecessor company, North American Aviation. It is believed that the SSFL CAG is no longer active. Documentary In 2021, the three-hour documentary In the Dark of the Valley depicted mothers whose children suffer from cancers believed to be caused by the contamination, advocating for cleanup of the site. See also Nuclear and radiation accidents and incidents Nuclear labor issues Nuclear reactor accidents in the United States References External links and sources Responsible parties and agencies Hosted by the California State Department of Toxic Substances Control which oversees the investigation and cleanup of chemicals in the soil and groundwater at the SSFL.
Project status documents, reports and public comment materials are available Website hosted by The Boeing Company, the largest landowner of the Santa Susana Field Laboratory. This site contains general information for the soil and groundwater projects. Surface water discharge-related information for SSFL is posted in the Technical Library Section. Answers for frequently asked questions are provided. DOE-sponsored project website provides historical ETEC technology development, site usage and current closure project information. Interactive graphic found in Regulation section explains the various involved regulatory agencies and their roles at the site. Large number of documents located in the Reading Room NASA's environmental cleanup and closure operations at NASA's portion of SSFL The Los Angeles Regional Water Quality Control Board administers the surface water NPDES permit for the SSFL U.S. DOE ATSDR – Agency for Toxic Substances & Disease Registry Historic American Engineering Record (HAER) documentation, filed under 5800 Woolsey Canyon Road, Simi Valley, Ventura County, CA: National Park Service Heritage Documentation Programs Sage Ranch Park website Groups The Santa Susana Advisory Panel Media EnviroReporter.com: Investigative news website that has coverage of Rocketdyne issues since 1998, often in partnership with regional publications including the LA Weekly and Ventura County Reporter newspapers. An upcoming documentary film recounts the horrors and hazards of the work done at Boeing's Santa Susana Field Laboratory. This first installment focuses on the workers and their every-day exposure to the hazardous environment provided by the owners and operators of this lab. Joel Grover and Matthew Glasser LA'S Nuclear Secret, Part 1-5 NBC4, 21 September 2015, retrieved 23 December 2015. Reactor accident sources Release of Fission Gas from the AE-6 Reactor, hosted by RocketdyneWatch.org Analysis of SRE Power Excursion, hosted by RocketdyneWatch.org SRE Fuel Element Damage an Interim Report, hosted by RocketdyneWatch.org SRE Fuel Element Damage Final Report, hosted by RocketdyneWatch.org SNAP8 Experimental Reactor Fuel Element Behavior: Atomics International Task Force Review, hosted by RocketdyneWatch.org Postoperation Evaluation of Fuel Elements from the SNAP8 Experimental Reactor hosted by RocketdyneWatch.org Findings of the SNAP 8 Developmental Reactor (S8DR) Post-Test Examination, hosted by RocketdyneWatch.org 1947 establishments in California Atomics International Boeing Buildings and structures in Los Angeles County, California Buildings and structures in Simi Valley, California Buildings and structures in Ventura County, California Canoga Park, Los Angeles Civilian nuclear power accidents Disasters in California Energy infrastructure in California Environment of California Environmental disasters in the United States Historic American Engineering Record in California History of Los Angeles County, California History of Simi Valley, California History of the San Fernando Valley History of Ventura County, California North American Aviation Nuclear accidents and incidents in the United States Nuclear research institutes Nuclear research reactors Radioactively contaminated areas Rocketdyne Rocketry San Fernando Valley Santa Susana Mountains Simi Hills West Hills, Los Angeles
Santa Susana Field Laboratory
[ "Chemistry", "Technology", "Engineering" ]
7,372
[ "Nuclear research institutes", "Radioactively contaminated areas", "Nuclear organizations", "Radioactive contamination", "Civilian nuclear power accidents", "Rocketry", "Soil contamination", "Environmental impact of nuclear power", "Aerospace engineering" ]
9,842,455
https://en.wikipedia.org/wiki/Meerwein%20arylation
The Meerwein arylation is an organic reaction involving the addition of an aryl diazonium salt (ArN2X) to an electron-poor alkene, usually supported by a metal salt. The reaction product is an alkylated arene compound. The reaction is named after Hans Meerwein, one of its inventors, who first published it in 1939. An electron-withdrawing group (EWG) on the alkene makes it electron deficient, and although the reaction mechanism is not fully settled, an aryl radical is presumed to form after loss of nitrogen from the diazonium salt, followed by a free-radical addition to the alkene. To give the primary reaction product, the intermediate alkyl radical is then captured by the diazonium counterion X, which is usually a halide or tetrafluoroborate. In a subsequent step an elimination reaction liberates HX (for instance hydrochloric acid) and an aryl vinyl compound is formed. From the arene's point of view, the reaction mechanism ranks as a radical-nucleophilic aromatic substitution. In a general scope a Meerwein arylation is any reaction between an aryl radical and an alkene. The initial intermediate is an aryl-substituted alkyl radical, which can react with many trapping reagents such as hydrogen or halogens or with those based on nitrogen or sulfur. Scope A reported reaction of the alkene acrylic acid with an aryl diazonium salt, copper(I) bromide and hydrobromic acid yields the α-bromocarboxylic acid. When the alkene is butadiene, the initial reaction product with the catalyst copper(II) chloride is a 4-chloro-2-butene, and after an elimination the aryl-substituted butadiene is obtained. In a so-called reductive arylation with 3-buten-2-one, titanium trichloride reduces the newly formed double bond. In a novel kilogram-scale metal-free Meerwein arylation, the diazonium salt is formed from 2-nitroaniline, the alkene isopropenyl acetate is an adduct of propyne and acetic acid, and the reaction product is 2-nitrophenylacetone. See also Roskamp reaction – also sees substitution of a diazonium compound by a carbon centre Heck-Matsuda reaction – palladium catalysed version References Carbon-carbon bond forming reactions Substitution reactions Name reactions
Meerwein arylation
[ "Chemistry" ]
519
[ "Name reactions", "Carbon-carbon bond forming reactions", "Coupling reactions", "Organic reactions" ]
9,847,036
https://en.wikipedia.org/wiki/GTP-binding%20protein%20regulators
GTP-binding protein regulators regulate G proteins in several different ways. Small GTPases act as molecular switches in signaling pathways, where they regulate the functions of other proteins. They are active, or 'ON', when bound to GTP and inactive, or 'OFF', when bound to GDP. Activation and deactivation of small GTPases can be regarded as occurring in a cycle, between the GTP-bound and GDP-bound forms, regulated by other regulatory proteins. Exchangers The inactive (GDP-bound) form of a GTPase is activated by a class of proteins called guanine nucleotide exchange factors (GEFs). GEFs catalyse nucleotide exchange by encouraging the release of GDP from the small GTPase (by displacement of the small GTPase-associated Mg2+ ion) and GDP's replacement by GTP (which is in at least a 10-fold excess within the cell). Inactivation of the active small GTPase is achieved through hydrolysis of the GTP by the small GTPase's intrinsic GTP hydrolytic activity. Stimulators The rate of GTP hydrolysis for small GTPases is generally too slow to create physiologically relevant transient signals, and thus requires another class of regulatory proteins to accelerate this activity, the GTPase-activating proteins (GAPs). Inhibitors Another class of regulatory proteins, the guanine nucleotide dissociation inhibitors (GDIs), bind to the GDP-bound form of Rho and Rab small GTPases and not only prevent exchange (maintaining the small GTPase in an off state), but also prevent the small GTPase from localizing at the membrane, which is their place of action. References External links Proteins
GTP-binding protein regulators
[ "Chemistry" ]
369
[ "Biomolecules by chemical classification", "Proteins", "Molecular biology" ]
9,847,653
https://en.wikipedia.org/wiki/Global%20atmospheric%20electrical%20circuit
A global atmospheric electrical circuit is the continuous movement of atmospheric charge carriers, such as ions, between an upper conductive layer (often an ionosphere) and the surface. The global circuit concept is closely related to atmospheric electricity, but not all atmospheres necessarily have a global electric circuit. The basic concept of a global circuit is that through the balance of thunderstorms and fair weather, the atmosphere is subject to a continual and substantial electrical current. Principally, thunderstorms throughout the world carry negative charge to the ground, which is then discharged gradually through the air away from the storms, in conditions that are referred to as "fair weather". This atmospheric circuit is central to the study of atmospheric physics and meteorology. The global electrical circuit is also relevant to the study of human health and air pollution, due to the interaction of ions and aerosols. The effects of climate change on, and the temperature sensitivity of, the Earth's electrical circuit are currently unknown. History The history of the global atmospheric electrical circuit is intertwined with the history of atmospheric electricity. For example, in the 18th century, scientists began understanding the link between lightning and electricity. In addition to the iconic kite experiments of Benjamin Franklin and Thomas-François Dalibard, some early studies of charge in a "cloudless atmosphere" (i.e. fair weather) were carried out by Giambatista Beccaria, John Canton, Louis-Guillaume Le Monnier and John Read. Fair weather measurements from the late 18th century onwards often found consistent diurnal variations. During the 19th century, several long series of observations were made. Measurements near cities were (and still are) heavily influenced by smoke pollution. In the early 20th century, balloon ascents provided information about the electric field well above the surface. Important work was done by the research vessel Carnegie, which produced standardised measurements around the world's oceans (where the air is relatively clean). C. T. R. Wilson was the first to present the concept of a global circuit in 1920. Mechanism Lightning There are about 40,000 thunderstorms per day across the globe, generating roughly 100 lightning strikes per second, which can be thought of as charging the Earth like a battery. Thunderstorms generate an electrical potential difference between the Earth's surface and the ionosphere, mainly by means of lightning returning current to ground. Because of this, the ionosphere is positively charged relative to the ground. Consequently, there is always a small current of approximately 2 pA per square metre transporting charged particles in the form of atmospheric ions between the ionosphere and the surface. Fair weather This current is carried by ions present in the atmosphere (generated mainly by cosmic rays in the free troposphere and above, and by radioactivity in the lowest 1 km or so). The ions make the air weakly conductive; different locations and meteorological conditions give different electrical conductivities. Fair weather describes the atmosphere away from thunderstorms where this weak electrical current between the ionosphere and the ground flows. Measurement The voltages involved in the Earth's circuit are significant. At sea level, the typical potential gradient in fair weather is 120 V/m. Nonetheless, since the conductivity of air is limited, the associated currents are also limited. A typical value is 1800 A over the entire planet.
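As a rough consistency check of the figures quoted above, multiplying the fair-weather current density by the Earth's surface area gives the order of magnitude of the global current, and dividing the current density by the potential gradient gives the implied near-surface air conductivity. The Earth's surface area used below (about 5.1 × 10^14 m²) is a standard value assumed for the calculation, not a figure from this article.

```python
# Rough consistency check of the fair-weather numbers quoted above.
# Earth's surface area is a standard value (assumed), not from this article.
current_density = 2e-12        # A/m^2, fair-weather conduction current density
potential_gradient = 120.0     # V/m, typical sea-level fair-weather field
earth_surface_area = 5.1e14    # m^2

total_current = current_density * earth_surface_area        # ~1000 A
air_conductivity = current_density / potential_gradient     # ~1.7e-14 S/m

print(f"Implied global current: {total_current:.0f} A")
print(f"Implied near-surface conductivity: {air_conductivity:.1e} S/m")
```

The roughly 1000 A result sits at the lower end of the 1000–1800 A range quoted below, as expected given that the current density varies with location and weather.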
When it is not rainy or stormy, the amount of electricity within the atmosphere is typically between 1000 and 1800 amps. In fair weather, there are about 3.5 microamps per square kilometer (9 microamps per square mile). Carnegie curve The Earth's electrical current varies according to a daily pattern called the Carnegie curve, caused by the regular daily variations in atmospheric electrification associated with the Earth's stormy regions. The pattern also shows seasonal variation, linked to the Earth's solstices and equinoxes. It was named after the Carnegie Institution for Science. See also Atmospheric electricity Geophysics Earth's magnetic field Sprites and lightning Space charge Telluric currents External sources Publications Le Monnier, L.-G.: "Observations sur l'Electricité de l'Air", Histoire de l'Académie royale des sciences (1752), pp. 233ff. 1752. Sven Israelsson, On the Conception Fair Weather Condition in Atmospheric Electricity. 1977. Ogawa, T., "Fair-weather electricity". J. Geophys. Res., 90, 5951–5960, 1985. Wåhlin, L., "Elements of fair weather electricity". J. Geophys. Res., 99, 10767-10772, 1994 RB Bent, WCA Hutchinson, Electric space charge measurements and the electrode effect within the height of a 21 m mast. J. Atmos. Terr. Phys, 196. Bespalov P.A., Chugunov Yu. V. and Davydenko S.S., Planetary electric generator under fair-weather condition with altitude-dependent atmospheric conductivity, Journal of Atmospheric and Terrestrial Physics, v.58, #5,pp. 605–611,1996 DG Yerg, KR Johnson, Short-period fluctuations in the fair weather electric field. J. Geophys. Res., 1974. T Ogawa, Diurnal variation in atmospheric electricity. J. Geomag. Geoelect, 1960. R Reiter, Relationships Between Atmospheric Electric Phenomena and Simultaneous Meteorological Conditions. 1960 J. Law, The ionisation of the atmosphere near the ground in fair weather. Quarterly Journal of the Royal Meteorological Society, 1963 T. Marshall, W.D. Rust, M. Stolzenburg, W. Roeder, P. Krehbim A study of enhanced fair-weather electric fields occurring soon after sunrise. R Markson, Modulation of the earth's electric field by cosmic radiation. Nature, 1981 Clark, John Fulmer, The Fair Weather Atmospheric Electric Potential and its Gradient. P. A. Bespalov, Yu. V. Chugunov and S. S. Davydenko, Planetary electric generator under fair-weather conditions with altitude-dependent atmospheric conductivity. AM Selva, et al., A New Mechanism for the Maintenance of Fair Weather Electric Field and Cloud Electrification. M. J. Rycroft, S. Israelssonb and C. Pricec, The global atmospheric electric circuit, solar activity and climate change. A. Mary Selvam, A. S. Ramachandra Murty, G. K. Manohar, S. S. Kandalgaonkar, Bh. V.Ramana Murty, A New Mechanism for the Maintenance of Fair Weather Electric Field and Cloud Electrification. arXiv:physics/9910006 Ogawa, Toshio, Fair-Weather electricity. Journal of Geophysical Research, Volume 90, Issue D4, pp. 5951–5960. An auroral effect on the fair weather electric field. Nature 278, 239–241 (15 March 1979); Bespalov, P. A.; Chugunov, Yu. V., Plasmasphere rotation and origin of atmospheric electricity. Physics – Doklady, Volume 39, Issue 8, August 1994, pp. 553–555 Bespalov, P. A.; Chugunov, Yu. V.; Davydenko, S. S. Planetary electric generator under fair-weather conditions with altitude-dependent atmospheric conductivity. Journal of Atmospheric and Terrestrial Physics. A.J. Bennett, R.G. 
Harrison, A simple atmospheric electrical instrument for educational use References External links Atmospheric electricity
Global atmospheric electrical circuit
[ "Physics" ]
1,551
[ "Physical phenomena", "Electrical phenomena", "Atmospheric electricity" ]
9,850,794
https://en.wikipedia.org/wiki/JC3IEDM
JC3IEDM, or Joint Consultation, Command and Control Information Exchange Data Model is a model that, when implemented, aims to enable the interoperability of systems and projects required to share Command and Control (C2) information. JC3IEDM is an evolution of the C2IEDM standard that includes joint operational concepts, just as the Land Command and Control Information Exchange Data Model (LC2IEDM) was extended to become C2IEDM. The program is managed by the Multilateral Interoperability Programme (MIP). The Joint C3 Information Exchange Data Model JC3IEDM is produced by the MIP-NATO Management Board (MNMB) and ratified under NATO STANAG 5525. JC3IEDM a fully documented standard for an information exchange data model for the sharing of C2 information. The overall aim of JC3IEDM is to enable "international interoperability of C2 information systems at all levels from corps to battalion (or lowest appropriate level) in order to support multinational (including NATO), combined and joint operations and the advancement of digitisation in the international arena." According to JC3IEDM's documentation, this aim is attempted to be achieved by "specifying the minimum set of data that needs to be exchanged in coalition or multinational operations. Each nation, agency or community of interest is free to expand its own data dictionary to accommodate its additional information exchange requirements with the understanding that the added specifications will be valid only for the participating nation, agency or community of interest. Any addition that is deemed to be of general interest may be submitted as a change proposal within the configuration control process to be considered for inclusion in the next version of the specification." "JC3IEDM is intended to represent the core of the data identified for exchange across multiple functional areas and multiple views of the requirements. Toward that end, it lays down a common approach to describing the information to be exchanged in a command and control (C2) environment. The structure should be sufficiently generic to accommodate joint, land, sea, and air environmental concerns. The data model describes all objects of interest in the sphere of operations, e.g., organizations, persons, equipment, facilities, geographic features, weather phenomena, and military control measures such as boundaries. Objects of interest may be generic in terms of a class or a type and specific in terms of an individually identified item. All object items must be classified as being of some type (e.g. a specific tank that is identified by serial number WS62105B is an item of type "Challenger" that is a heavy UK main battle tank). An object must have the capability to perform a function or to achieve an end. Thus, a description of capability is needed to give meaning to the value of objects in the sphere of operations. It should be possible to assign a location to any item in the sphere of operations. In addition, various geometric shapes need to be represented in order to allow commanders to plan, direct, and monitor operations. Examples include boundaries, corridors, restricted areas, minefields, and any other control measures needed by commanders and their staffs. Several aspects of status of items need to be maintained. The model must permit a description of the composition of a type object in terms of other type objects. Such concepts include tables of organizations, equipment, or personnel. 
The model must reflect information about what is held, owned or possessed in terms of types by a specific object item. There is a need to record relationships between pairs of items. Key among these is the specification of unit task organizations and orders of battle. The model must support the specification of current, past, and future role of objects as part of plans, orders, and events. The same data structure should be used to record information for all objects, regardless of their hostility status. Provision must be made for the identification of sources of information, the effective and reporting times, and an indication of the validity of the data." JC3IEDM development history and current maturity JC3IEDM has been developed from the initial Generic Hub (GH) Data Model, which changed its name to Land C2 Information Exchange Data Model (LC2IEDM) in 1999. Development of the model continued in a Joint context and in November 2003 the C2 Information Exchange Data Model (C2IEDM) Edition 6.1 was released. Additional development of this model, incorporating the NATO Corporate Reference model, resulted in the model changing its name again to JC3IEDM, with JC3IEDM Ed 0.5 being issued in December 2004. Subsequent releases have seen some areas of the model developed in greater depth than others, and there is variation in the number of sub-types and attributes for each type in the current version. An example is HARBOUR within the FACILITY type, which has 43 attributes, compared to a VESSEL-TYPE with 12 attributes or a WEAPON-TYPE with 4 attributes. The attributes defined for a given type also do not always support cross-referencing against those of related types. For example, VESSEL-TYPE does not support the length or width of a vessel in its attributes, but HARBOUR has both maximum vessel length and width attributes. National policies related to JC3IEDM The UK Ministry of Defence has mandated JC3IEDM as the C2 Information Exchange Model, in Joint Service Publication (JSP) 602:1007, for use on all systems and/or projects exchanging C2 information within and interoperating with the Land Environment at a Strategic and Operational Level. It is strongly recommended for other environments and mandated for all environments at the Tactical level. JSP 602:1005 for Collaborative Services has also mandated JC3IEDM in the tactical domain for all systems/projects providing data sharing collaborative services. References External links MIP site Files comprising the JC3IEDM specification Data modeling Military technology
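The type/item distinction described above (every identified object item, such as the tank with serial number WS62105B, must be classified as exactly one object type, such as "Challenger") can be sketched with two small classes. This is only an illustrative data structure in Python; the class names, fields and example values are simplified stand-ins, not the actual JC3IEDM entities or their STANAG 5525 definitions.

```python
from dataclasses import dataclass, field
from typing import Optional, Tuple

@dataclass
class ObjectType:
    """A generic class of objects, e.g. a type of heavy main battle tank."""
    name: str
    category: str                    # e.g. "EQUIPMENT", "FACILITY", "ORGANISATION"
    attributes: dict = field(default_factory=dict)

@dataclass
class ObjectItem:
    """An individually identified object; every item is classified by exactly one type."""
    identifier: str                  # e.g. a serial number
    object_type: ObjectType
    location: Optional[Tuple[float, float]] = None   # (latitude, longitude), optional
    hostility: str = "UNKNOWN"       # the same structure is used regardless of hostility status

# The example from the text: a specific tank classified as an item of a type.
challenger = ObjectType(name="Challenger", category="EQUIPMENT",
                        attributes={"description": "heavy UK main battle tank"})
tank = ObjectItem(identifier="WS62105B", object_type=challenger, hostility="FRIEND")

print(f"{tank.identifier} is an item of type {tank.object_type.name}")
```

The point of the sketch is only the one-type-per-item classification rule; real JC3IEDM entities carry far more attributes and relationships than shown here.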
JC3IEDM
[ "Engineering" ]
1,207
[ "Data modeling", "Data engineering" ]
9,853,903
https://en.wikipedia.org/wiki/Structural%20element
In structural engineering, structural elements are used in structural analysis to split a complex structure into simple elements (each bearing a structural load). Within a structure, an element cannot be broken down (decomposed) into parts of different kinds (e.g., beam or column). Structural building components are specialized structural building products designed, engineered and manufactured under controlled conditions for a specific application. They are incorporated into the overall building structural system by a building designer. Examples are wood or steel roof trusses, floor trusses, floor panels, I-joists, or engineered beams and headers. A structural building component manufacturer or truss manufacturer is an individual or company regularly engaged in the manufacturing of components. Structural elements can be lines, surfaces or volumes. Line elements: Rod - axial loads Beam - axial and bending loads Pillar Post (structural) Struts or Compression members- compressive loads Ties, Tie rods, eyebars, guy-wires, suspension cables, or wire ropes - tension loads Surface elements: membrane - in-plane loads only shell - in plane and bending moments Concrete slab deck shear panel - shear loads only Volumes: Axial, shear and bending loads for all three dimensions See also Load-bearing wall Post and lintel Stressed member engine References Structural analysis Hardware (mechanical) Architectural elements
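To make the load categories above concrete, the sketch below compares the axial stress in a tie (a tension-only line element) with the peak bending stress in a beam, using the standard relations σ = F/A and σ = M·c/I. The cross-section dimensions and loads are invented round numbers chosen for illustration, not values from any design code.

```python
# Illustrative comparison of axial and bending stress in line elements.
# All numbers are made-up round values; units are newtons, metres, pascals.

def axial_stress(force_n: float, area_m2: float) -> float:
    """Uniform stress in a rod or tie carrying a pure axial load: sigma = F / A."""
    return force_n / area_m2

def bending_stress(moment_nm: float, c_m: float, second_moment_m4: float) -> float:
    """Peak bending stress at the extreme fibre of a beam: sigma = M * c / I."""
    return moment_nm * c_m / second_moment_m4

# A 20 mm x 20 mm square steel tie carrying 10 kN in tension.
area = 0.02 * 0.02
print(f"Tie axial stress: {axial_stress(10e3, area) / 1e6:.1f} MPa")

# A rectangular beam 50 mm wide x 100 mm deep, carrying a 2 kN.m mid-span moment.
b, h = 0.05, 0.10
I = b * h**3 / 12          # second moment of area of a rectangular section
c = h / 2                  # distance from the neutral axis to the extreme fibre
print(f"Beam bending stress: {bending_stress(2e3, c, I) / 1e6:.1f} MPa")
```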
Structural element
[ "Physics", "Technology", "Engineering" ]
264
[ "Structural engineering", "Machines", "Building engineering", "Structural analysis", "Architecture", "Physical systems", "Construction", "Architectural elements", "Mechanical engineering", "Aerospace engineering", "Hardware (mechanical)", "Components" ]
9,134,092
https://en.wikipedia.org/wiki/Geodynamics
Geodynamics is a subfield of geophysics dealing with dynamics of the Earth. It applies physics, chemistry and mathematics to the understanding of how mantle convection leads to plate tectonics and geologic phenomena such as seafloor spreading, mountain building, volcanoes, earthquakes, faulting. It also attempts to probe the internal activity by measuring magnetic fields, gravity, and seismic waves, as well as the mineralogy of rocks and their isotopic composition. Methods of geodynamics are also applied to exploration of other planets. Overview Geodynamics is generally concerned with processes that move materials throughout the Earth. In the Earth's interior, movement happens when rocks melt or deform and flow in response to a stress field. This deformation may be brittle, elastic, or plastic, depending on the magnitude of the stress and the material's physical properties, especially the stress relaxation time scale. Rocks are structurally and compositionally heterogeneous and are subjected to variable stresses, so it is common to see different types of deformation in close spatial and temporal proximity. When working with geological timescales and lengths, it is convenient to use the continuous medium approximation and equilibrium stress fields to consider the average response to average stress. Experts in geodynamics commonly use data from geodetic GPS, InSAR, and seismology, along with numerical models, to study the evolution of the Earth's lithosphere, mantle and core. Work performed by geodynamicists may include: Modeling brittle and ductile deformation of geologic materials Predicting patterns of continental accretion and breakup of continents and supercontinents Observing surface deformation and relaxation due to ice sheets and post-glacial rebound, and making related conjectures about the viscosity of the mantle Finding and understanding the driving mechanisms behind plate tectonics. Deformation of rocks Rocks and other geological materials experience strain according to three distinct modes, elastic, plastic, and brittle depending on the properties of the material and the magnitude of the stress field. Stress is defined as the average force per unit area exerted on each part of the rock. Pressure is the part of stress that changes the volume of a solid; shear stress changes the shape. If there is no shear, the fluid is in hydrostatic equilibrium. Since, over long periods, rocks readily deform under pressure, the Earth is in hydrostatic equilibrium to a good approximation. The pressure on rock depends only on the weight of the rock above, and this depends on gravity and the density of the rock. In a body like the Moon, the density is almost constant, so a pressure profile is readily calculated. In the Earth, the compression of rocks with depth is significant, and an equation of state is needed to calculate changes in density of rock even when it is of uniform composition. Elastic Elastic deformation is always reversible, which means that if the stress field associated with elastic deformation is removed, the material will return to its previous state. Materials only behave elastically when the relative arrangement along the axis being considered of material components (e.g. atoms or crystals) remains unchanged. This means that the magnitude of the stress cannot exceed the yield strength of a material, and the time scale of the stress cannot approach the relaxation time of the material. 
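The remark above that a pressure profile is readily calculated for a body of nearly constant density, such as the Moon, can be illustrated with a short numerical integration of the hydrostatic relation dP/dz = ρg. The sketch below assumes a constant density and a constant gravitational acceleration purely for illustration (in which case the integral reduces to P = ρgz); a realistic profile would need g(r) and an equation of state, as the text notes for the Earth.

```python
# Hydrostatic pressure under the assumption of constant density and constant g.
# Parameter values are round illustrative numbers for a Moon-like body.

DENSITY = 3300.0          # kg/m^3, roughly constant for the Moon
SURFACE_GRAVITY = 1.62    # m/s^2
STEP = 1000.0             # integration step, m

def pressure_at_depth(depth_m: float) -> float:
    """Integrate dP/dz = rho * g downward from the surface (P = 0 at z = 0)."""
    pressure = 0.0
    z = 0.0
    while z < depth_m:
        dz = min(STEP, depth_m - z)
        pressure += DENSITY * SURFACE_GRAVITY * dz   # constant rho and g here
        z += dz
    return pressure

for depth_km in (10, 50, 100):
    p = pressure_at_depth(depth_km * 1000.0)
    print(f"Depth {depth_km:4d} km: pressure ~ {p / 1e9:.2f} GPa")
```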
If stress exceeds the yield strength of a material, bonds begin to break (and reform), which can lead to ductile or brittle deformation. Ductile Ductile or plastic deformation happens when the temperature of a system is high enough so that a significant fraction of the material microstates (figure 1) are unbound, which means that a large fraction of the chemical bonds are in the process of being broken and reformed. During ductile deformation, this process of atomic rearrangement redistributes stress and strain towards equilibrium faster than they can accumulate. Examples include bending of the lithosphere under volcanic islands or sedimentary basins, and bending at oceanic trenches. Ductile deformation happens when transport processes such as diffusion and advection that rely on chemical bonds to be broken and reformed redistribute strain about as fast as it accumulates. Brittle When strain localizes faster than these relaxation processes can redistribute it, brittle deformation occurs. The mechanism for brittle deformation involves a positive feedback between the accumulation or propagation of defects especially those produced by strain in areas of high strain, and the localization of strain along these dislocations and fractures. In other words, any fracture, however small, tends to focus strain at its leading edge, which causes the fracture to extend. In general, the mode of deformation is controlled not only by the amount of stress, but also by the distribution of strain and strain associated features. Whichever mode of deformation ultimately occurs is the result of a competition between processes that tend to localize strain, such as fracture propagation, and relaxational processes, such as annealing, that tend to delocalize strain. Deformation structures Structural geologists study the results of deformation, using observations of rock, especially the mode and geometry of deformation to reconstruct the stress field that affected the rock over time. Structural geology is an important complement to geodynamics because it provides the most direct source of data about the movements of the Earth. Different modes of deformation result in distinct geological structures, e.g. brittle fracture in rocks or ductile folding. Thermodynamics The physical characteristics of rocks that control the rate and mode of strain, such as yield strength or viscosity, depend on the thermodynamic state of the rock and composition. The most important thermodynamic variables in this case are temperature and pressure. Both of these increase with depth, so to a first approximation the mode of deformation can be understood in terms of depth. Within the upper lithosphere, brittle deformation is common because under low pressure rocks have relatively low brittle strength, while at the same time low temperature reduces the likelihood of ductile flow. After the brittle-ductile transition zone, ductile deformation becomes dominant. Elastic deformation happens when the time scale of stress is shorter than the relaxation time for the material. Seismic waves are a common example of this type of deformation. At temperatures high enough to melt rocks, the ductile shear strength approaches zero, which is why shear mode elastic deformation (S-Waves) will not propagate through melts. Forces The main motive force behind stress in the Earth is provided by thermal energy from radioisotope decay, friction, and residual heat. 
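The criteria stated above, namely elastic response when the stress stays below the yield strength and the loading is fast compared with the material's relaxation time, ductile flow when relaxation keeps pace with strain accumulation, and brittle failure when strain localizes faster than it can relax, can be written as a tiny decision rule. The threshold numbers below are placeholders for illustration only, not measured rock properties.

```python
def deformation_mode(stress_mpa: float,
                     yield_strength_mpa: float,
                     loading_time_s: float,
                     relaxation_time_s: float) -> str:
    """Very rough classification following the criteria described in the text."""
    if stress_mpa < yield_strength_mpa and loading_time_s < relaxation_time_s:
        return "elastic"   # bonds stay intact; deformation is recoverable
    if loading_time_s >= relaxation_time_s:
        return "ductile"   # relaxation redistributes strain as fast as it accumulates
    return "brittle"       # strain localizes on fractures faster than it can relax

# Illustrative cases (placeholder numbers):
print(deformation_mode(10, 100, loading_time_s=1, relaxation_time_s=1e10))      # seismic wave -> elastic
print(deformation_mode(200, 100, loading_time_s=1e13, relaxation_time_s=1e10))  # slow deep flow -> ductile
print(deformation_mode(200, 100, loading_time_s=1, relaxation_time_s=1e10))     # rapid overload -> brittle
```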
Cooling at the surface and heat production within the Earth create a metastable thermal gradient from the hot core to the relatively cool lithosphere. This thermal energy is converted into mechanical energy by thermal expansion. Deeper and hotter rocks often have higher thermal expansion and lower density relative to overlying rocks. Conversely, rock that is cooled at the surface can become less buoyant than the rock below it. Eventually this can lead to a Rayleigh-Taylor instability (Figure 2), or interpenetration of rock on different sides of the buoyancy contrast. Negative thermal buoyancy of the oceanic plates is the primary cause of subduction and plate tectonics, while positive thermal buoyancy may lead to mantle plumes, which could explain intraplate volcanism. The relative importance of heat production vs. heat loss for buoyant convection throughout the whole Earth remains uncertain and understanding the details of buoyant convection is a key focus of geodynamics. Methods Geodynamics is a broad field which combines observations from many different types of geological study into a broad picture of the dynamics of Earth. Close to the surface of the Earth, data includes field observations, geodesy, radiometric dating, petrology, mineralogy, drilling boreholes and remote sensing techniques. However, beyond a few kilometers depth, most of these kinds of observations become impractical. Geologists studying the geodynamics of the mantle and core must rely entirely on remote sensing, especially seismology, and experimentally recreating the conditions found in the Earth in high pressure high temperature experiments.(see also Adams–Williamson equation). Numerical modeling Because of the complexity of geological systems, computer modeling is used to test theoretical predictions about geodynamics using data from these sources. There are two main ways of geodynamic numerical modeling. Modelling to reproduce a specific observation: This approach aims to answer what causes a specific state of a particular system. Modelling to produce basic fluid dynamics: This approach aims to answer how a specific system works in general. Basic fluid dynamics modelling can further be subdivided into instantaneous studies, which aim to reproduce the instantaneous flow in a system due to a given buoyancy distribution, and time-dependent studies, which either aim to reproduce a possible evolution of a given initial condition over time or a statistical (quasi) steady-state of a given system. See also Cytherodynamics References Bibliography External links Geological Survey of Canada - Geodynamics Program Geodynamics Homepage - JPL/NASA NASA Planetary geodynamics Los Alamos National Laboratory–Geodynamics & National Security Computational Infrastructure for Geodynamics Geophysics Geodesy Plate tectonics
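Because the passage above identifies thermal expansion as the link between the thermal gradient and buoyancy forces, a short calculation of the density deficit Δρ = ρ₀αΔT and the resulting buoyancy stress gΔρ·d is a useful illustration. The parameter values are generic order-of-magnitude mantle numbers chosen for the example, not results from any particular model.

```python
# Order-of-magnitude thermal buoyancy estimate for a hot, rising mantle region.
# All parameter values are generic illustrative numbers.

rho0  = 3300.0   # reference mantle density, kg/m^3
alpha = 3e-5     # thermal expansion coefficient, 1/K
dT    = 300.0    # temperature excess of the hot region, K
g     = 9.8      # gravitational acceleration, m/s^2
d     = 100e3    # vertical extent of the buoyant region, m

# Density deficit from thermal expansion and the buoyancy stress it produces.
delta_rho = rho0 * alpha * dT            # kg/m^3
buoyancy_stress = delta_rho * g * d      # Pa

print(f"Density deficit: {delta_rho:.1f} kg/m^3")
print(f"Buoyancy stress over {d/1e3:.0f} km: {buoyancy_stress/1e6:.0f} MPa")
```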
Geodynamics
[ "Physics", "Mathematics" ]
1,874
[ "Applied mathematics", "Applied and interdisciplinary physics", "Geodesy", "Geophysics" ]
9,136,981
https://en.wikipedia.org/wiki/Radiogram%20%28message%29
A radiogram is a formal written message transmitted by radio. Also known as a radio telegram or radio telegraphic message, radiograms use a standardized message format, form and radiotelephone and/or radiotelegraph transmission procedures. These procedures typically provide a means of transmitting the content of the messages without including the names of the various headers and message sections, so as to minimize the time needed to transmit messages over limited and/or congested radio channels. Various formats have been used historically by maritime radio services, military organizations, and Amateur Radio organizations. Radiograms are typically employed for conducting Record communications, which provides a message transmission and delivery audit trail. Sometimes these records are kept for proprietary purposes internal to the organization sending them, but are also sometimes legally defined as public records. For example, maritime Mayday/SOS messages transmitted by radio are defined by international agreements as public records. Historical development From 1850 to the mid 20th century industrial countries used the electric telegraph as a long distance person-to-person text message service. A telegraph system consisted of two or more geographically separated stations linked by wire supported on telegraph poles. A message was sent by an operator in one station tapping on a telegraph key, which sent pulses of current from a battery or generator down the wire to the receiving station, spelling out the text message in Morse code. At the receiving station the current would activate a telegraph sounder which would produce a series of audible clicks, and a receiving operator who knew Morse code would translate the clicks to text and write down the message. By the 1870s, most industrial nations had nationwide telegraph networks with telegraph offices in most towns, allowing citizens to send a message called a telegram for a fee to any person in the country. Submarine telegraph cables allowed intercontinental messages called cablegrams. The invention of radiotelegraphy (wireless telegraphy) communication around 1900 allowed telegraph signals to be sent by radio. An operator at a radio transmitter would tap on a telegraph key, turning the transmitter on and off, sending pulses of radio waves through the air, and at the receiving station a radio receiver would receive the pulses and make them audible as a sequence of beeps in the earphone, and the receiving operator would translate the Morse code to text and write it down. High speed systems used paper tape to send and record the message. Guglielmo Marconi's demonstration of transatlantic radiotelegraphy transmission in 1901 showed that the wireless telegraph could be a useful long-distance communication technology which didn't require the costly installation of a telegraph wire. Around 1906 industrial nations began building powerful transoceanic radiotelegraphy stations to communicate with other countries and their overseas colonies. By World War I these were integrated with landline telegraph networks, so citizens could go to a telegraph office and send a person-to-person telegraph message by radio to another country. This was written down on a standardized form called a radiogram. International radiotelegraphy was expensive so radiograms were mostly used for business and commercial communication. The concept of the standard message format originated in the wired telegraph services. 
Each telegraph company likely had its own format, but soon after radio telegraph services began, some elements of the message exchange format were codified in international conventions (such as the International Radiotelegraph Convention, Washington, 1927), and these were then often duplicated in domestic radio communications regulations (such as the FCC in the U.S.) and in military procedure documentation. Military organizations independently developed their own procedures, and in addition to differing from the international procedures, they sometimes differed between different branches of the military within the same country. For example, the publication "Communication Instructions, 1929", from the U.S. Navy Department, includes: One procedure for messages transmitted "in naval form over nonnaval systems" (Part II: Radio, Chapter 15) One procedure for exchanging messages with commercial radio stations (Part II: Radio, Chapter 16, pages 36–37 for examples; see also Part I: Chapter 7) One procedure for messages transmitted within the Navy (Part IV: Procedure and Examples, Chapter 32, especially pages 21 & 22 for the format) One format for exchanging messages between the Army and Navy (Part IV: Appendix A), called the "Joint Army and Navy Radiotelegraph Procedure", with the format shown on page 70. Notable characteristics of radiograms include headers that include information such as the from and to addresses, date and time filed, and precedence (e.g. emergency, priority, or routine), so that the radio operators can determine which messages need to be delivered first during times of congestion. Chronology of the commercial radiogram format International Telegraph Conference (London, 1903; including Order of transmission beginning on page 40) International Telegraph Conference (Paris, 1925) International Radiotelegraph Convention (Washington, 1927) International Radiotelegraph Conference (Madrid, 1932) was redrafted to include general principles common to telegraph, telephone and radio services. Maritime radio service radiotelegrams The message format for communications transmitted to sea-going vessels is defined in Rec. ITU-R M.1171, § 28: radiotelegram begins: from . . . (name of ship or aircraft); number . . . (serial number of radiotelegram); number of words . . . ; date . . . ; time . . . (time radiotelegram was handed in aboard ship or aircraft); service indicators (if any); address . . . ; text . . . ; signature . . . (if any); radiotelegram ends, over Airline Teletype Message The international airline industry continues to use a radioteletype message format originally designed for transmission to Teleprinters, Airline Teletype System, which is now disseminated via e-mail and other modern electronic formats. However, the relationship of the IATA Type B message to other radio telegram message formats is clearly visible in a typical message: QD AAABBCC .XXXYYZZ 111301 ASM UTC 27SEP03899E001/TSTF DL Y NEW BA667/13APR J 319 C1M25VVA4C26 LHR1340 BCN1610 LHRQQQ 99/1 QQQBCN 98/A QQQQQQ 906/PAYDIV B LHRQQQ 999/1 QQQBCN 998/A SI Military radiograms Military organizations have historically used radiograms for transmitting messages. One notable example is the notification of the air raid on Pearl Harbor that brought the United States into World War II. The standard military radiogram format (in NATO allied nations) is known as the 16-line message format, for the manner in which a paper message form is transcribed through voice, Morse code, or TTY transmission formats. 
Each format line contains pre-defined content. When sent as an ACP-126 message over teletype, a 16-line format radiogram would appear similar to this: RFHT DE RFG NR 114 R 151412Z MAR FM CG FIFTH CORPS TO CG THIRD INFDIV WD GRNC BT UNCLAS PLAINDRESS SINGLE ADDRESS MESSAGES WILL BE TRANSMITTED OVER TELETIPWRITER [sic] CIRCUITS AS INDICATED IN THIS EXAMPLE BT C WA OVER TELETYPEWRITER NNNN Some of the format lines in the above example have been omitted for efficiency. The translation of this abbreviated format follows: This radiotelegraph message format (also "radio teletype message format", "teletypewriter message format", and "radiotelephone message format") and transmission procedures have been documented in numerous military standards, including the World War II-era U.S. Army Manuals TM 11-454 (The Radio Operator), FM 24-5 (Basic Field Manual, Signal Communication), FM 24-6 (Radio Operator's Manual), TM 1-460 (Radiotelephone Procedure), FM 24-18 (Radio Communication), FM 24-19 (Radio Operator's Handbook), FM 101-5-2 (U.S. Army Report and Message Formats), TM 11-380, FM 11-490-7 (Military Affiliate Radio System), AR 105–75, Navy Department Communication Instructions 1929, and their modern descendants in the Allied Communications Procedures, including ACP 124 (messages relayed by telegraphy), ACP 125 (messages relayed by voice), ACP 126 (messages relayed by radio teletype), ACP 127 (messages relayed by automated tape), AR 25–6, U.S. Navy Signalman training courses and others. At one point before World War II, the U.S. FCC defined (at least for domestic police radio traffic) a station serial number as a sequential message number that was reset at the beginning of each calendar month. The Communications Standard Dictionary defines radiotelegraph message format as "The prescribed arrangement of the parts of a message that has been prepared for radiotelegraph transmission." MARS radiograms The Military Affiliate Radio System uses radiograms, or MARSgrams, to transmit health & welfare messages between military members and their families, and also for emergency communications. Some MARS radio procedure documents include instructions on how to exchange ARRL NTS Radiograms over a MARS radio net. Both formats include a procedure for counting the number of word groups (words in NTS, groups in the ACP/MARS format), but differ in how word groups are counted, so the counting method must be resolved when converting messages between formats. U.S. Department of State ACP-127 radiograms The U.S. Department of State uses the military's automated message delivery version of the 16-line format, known as ACP-127, with its own structured definitions of the format lines. Police Radiogram Police radiograms had their own format, likely derived from the commercial radiogram format. Example radiogram from A National Training Manual and Procedural Guide for Police and Public Safety Radio Communications Personnel, 1968. 15 SHRF LEE COUNTY ILL 12-20-66 (A. Preamble) PD CARBONDALE ILL (B. Address) DATA AND DISPOSITION RED 62 CHEVROLET (C. Text) 4 DOOR ILL LL1948 VIN 21723T58723 ABANDONED DIXON ILLINOIS THREE DAYS HELD ANDREWS GARAGE FRONT END DAMAGED NOT DRIVEABLE NO APPREHENSIONS WILL BE RELEASED TO OWNER ON PROOF OF OWNERSHIP SHERIFF LEE COUNTY ILLINOIS JRM 1530 CST (D. Signature) Section A6.6 Message Form From the above training manual: A formal message is one constructed, transmitted and recorded according to a standard prescribed form (see Sec. 4).
A formal message should contain the following essential P A R T S: Preamble - message number, point of origin or agency identifier, date. Address - to whom the message is directed. Reference - to previous message, if any. Text - the message. Signature or Authority - department requesting the message. ARRL radiogram An ARRL radiogram is an instance of formal written message traffic routed by a network of amateur radio operators through traffic nets, called the National Traffic System (NTS). It is a plaintext message, along with relevant metadata (headers), that is placed into a traffic net by an amateur radio operator. Each radiogram is relayed, possibly through one or more other amateur radio operators, to a radio operator who volunteers to deliver the radiogram content to its destination. VOA Radiogram VOA Radiogram was an experimental Voice of America program, aired from 2012 to 2017, which broadcast digital text and images via shortwave radiograms. This digital stream can be decoded using a basic AM shortwave receiver and freely downloadable software of the Fldigi family. This software is available for Windows, Apple (macOS), Linux, and FreeBSD systems. The mode used most often on VOA Radiogram, for both text and images, was MFSK32, but other modes were occasionally transmitted. Broadcasts were made via the Edward R. Murrow transmitting station in North Carolina. Due to the retirement of Dr. Kim Andrew Elliott from VOA and the decision of VOA not to replace his role with the program, the VOA Radiogram program's final airing was on June 17–18, 2017; however, Elliott has continued to air radiograms via commercial shortwave stations under the name "Shortwave Radiogram." References Radio communications ITU-R recommendations
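Since all of the formats above share the same skeleton, a preamble carrying a message number and a group count, an address, the text, and a signature, a small script can show how such a message might be assembled and how a group count can be derived by splitting the text on whitespace. The field layout below is a simplified illustration, not the exact ARRL NTS, ACP or police form, and the counting rule (one group per whitespace-separated token) is only one of the conventions the text says must be reconciled when converting between formats.

```python
def build_radiogram(number: int, station_of_origin: str, place_of_origin: str,
                    date: str, address: str, text: str, signature: str) -> str:
    """Assemble a simplified radiogram; the field layout is illustrative only."""
    groups = text.split()            # one "group" per whitespace-separated token
    check = len(groups)              # the group count carried in the preamble
    preamble = f"NR {number} {station_of_origin} CK {check} {place_of_origin} {date}"
    return "\n".join([preamble, address, " ".join(groups), signature])

# Hypothetical example message (all names and details invented).
message = build_radiogram(
    number=15,
    station_of_origin="W1AW",
    place_of_origin="NEWINGTON CT",
    date="DEC 20",
    address="JOHN SMITH 123 MAIN ST SPRINGFIELD",
    text="ARRIVING FRIDAY EVENING PLEASE MEET AT STATION",
    signature="MARY",
)
print(message)
```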
Radiogram (message)
[ "Engineering" ]
2,599
[ "Telecommunications engineering", "Radio communications" ]
4,382,964
https://en.wikipedia.org/wiki/Josephson%20vortex
In superconductivity, a Josephson vortex (after Brian Josephson from Cambridge University) is a quantum vortex of supercurrents in a Josephson junction (see Josephson effect). The supercurrents circulate around the vortex center, which is situated inside the Josephson barrier, unlike Abrikosov vortices in type-II superconductors, which are located in the superconducting condensate. Abrikosov vortices (after Alexei Abrikosov) in superconductors are characterized by normal cores where the superconducting condensate is destroyed on a scale of the superconducting coherence length ξ (typically 5–100 nm). The cores of Josephson vortices are more complex and depend on the physical nature of the barrier. In Superconductor-Normal Metal-Superconductor (SNS) Josephson junctions there exist measurable superconducting correlations induced in the N-barrier by the proximity effect from the two neighbouring superconducting electrodes. Similarly to Abrikosov vortices in superconductors, Josephson vortices in SNS Josephson junctions are characterized by cores in which the correlations are suppressed by destructive quantum interference and the normal state is recovered. However, unlike Abrikosov cores, which have a size ~ξ, the size of Josephson cores is not defined by microscopic parameters only. Rather, it depends on the supercurrents circulating in the superconducting electrodes, the applied magnetic field, etc. In Superconductor-Insulator-Superconductor (SIS) Josephson tunnel junctions the cores are not expected to have a specific spectral signature, and they have not been observed. Usually the Josephson vortex's supercurrent loops create a magnetic flux which, in long enough Josephson junctions, equals Φ0—a single flux quantum. Yet fractional vortices may also exist in Superconductor-Ferromagnet-Superconductor Josephson junctions or in junctions in which superconducting phase discontinuities are present. It was demonstrated by Hilgenkamp et al. that Josephson vortices in so-called 0-π long Josephson junctions can also carry half of the flux quantum, and these are called semifluxons. It has been shown that under certain conditions a propagating Josephson vortex can initiate another Josephson vortex. This effect is called flux cloning (or fluxon cloning). Although a second vortex appears, this does not violate the conservation of the single flux quantum. See also Fluxon Josephson effect Josephson penetration depth Long Josephson junction Shape waves Sine-Gordon equation References Josephson effect Vortices
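The single flux quantum Φ0 mentioned above is fixed by fundamental constants, Φ0 = h/(2e), so its value is easy to reproduce; the short sketch below also shows the half-flux value carried by a semifluxon. Only these two textbook relations are used here; nothing junction-specific is computed.

```python
# Magnetic flux quantum carried by a Josephson (or Abrikosov) vortex.
PLANCK_H = 6.62607015e-34            # J*s (exact in the 2019 SI)
ELEMENTARY_CHARGE = 1.602176634e-19  # C   (exact in the 2019 SI)

phi_0 = PLANCK_H / (2 * ELEMENTARY_CHARGE)   # flux quantum, h / 2e
semifluxon = phi_0 / 2                       # flux carried by a semifluxon in a 0-pi junction

print(f"Flux quantum Phi_0 = {phi_0:.4e} Wb")   # about 2.07e-15 Wb
print(f"Semifluxon flux    = {semifluxon:.4e} Wb")
```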
Josephson vortex
[ "Chemistry", "Materials_science", "Mathematics" ]
571
[ "Josephson effect", "Vortices", "Superconductivity", "Fluid dynamics", "Dynamical systems" ]
4,383,086
https://en.wikipedia.org/wiki/Supercurrent
A supercurrent is a superconducting current, that is, electric current which flows without dissipation in a superconductor. Under certain conditions, an electric current can also flow without dissipation in microscopically small non-superconducting metals. However, currents in such perfect conductors are not called supercurrents, but persistent currents. References Superconductivity
Supercurrent
[ "Physics", "Materials_science", "Engineering" ]
86
[ "Materials science stubs", "Physical quantities", "Superconductivity", "Materials science", "Condensed matter physics", "Electromagnetism stubs", "Electrical resistance and conductance" ]
4,385,154
https://en.wikipedia.org/wiki/Site-specific%20recombinase%20technology
Site-specific recombinase technologies are genome engineering tools that depend on recombinase enzymes to replace targeted sections of DNA. History In the late 1980s gene targeting in murine embryonic stem cells (ESCs) enabled the transmission of mutations into the mouse germ line, and emerged as a novel option to study the genetic basis of regulatory networks as they exist in the genome. Still, classical gene targeting proved to be limited in several ways as gene functions became irreversibly destroyed by the marker gene that had to be introduced for selecting recombinant ESCs. These early steps led to animals in which the mutation was present in all cells of the body from the beginning leading to complex phenotypes and/or early lethality. There was a clear need for methods to restrict these mutations to specific points in development and specific cell types. This dream became reality when groups in the USA were able to introduce bacteriophage and yeast-derived site-specific recombination (SSR-) systems into mammalian cells as well as into the mouse. Classification, properties and dedicated applications Common genetic engineering strategies require a permanent modification of the target genome. To this end great sophistication has to be invested in the design of routes applied for the delivery of transgenes. Although for biotechnological purposes random integration is still common, it may result in unpredictable gene expression due to variable transgene copy numbers, lack of control about integration sites and associated mutations. The molecular requirements in the stem cell field are much more stringent. Here, homologous recombination (HR) can, in principle, provide specificity to the integration process, but for eukaryotes it is compromised by an extremely low efficiency. Although meganucleases, zinc-finger- and transcription activator-like effector nucleases (ZFNs and TALENs) are actual tools supporting HR, it was the availability of site-specific recombinases (SSRs) which triggered the rational construction of cell lines with predictable properties. Nowadays both technologies, HR and SSR can be combined in highly efficient "tag-and-exchange technologies". Many site-specific recombination systems have been identified to perform these DNA rearrangements for a variety of purposes, but nearly all of these belong to either of two families, tyrosine recombinases (YR) and serine recombinases (SR), depending on their mechanism. These two families can mediate up to three types of DNA rearrangements (integration, excision/resolution, and inversion) along different reaction routes based on their origin and architecture. The founding member of the YR family is the lambda integrase, encoded by bacteriophage λ, enabling the integration of phage DNA into the bacterial genome. A common feature of this class is a conserved tyrosine nucleophile attacking the scissile DNA-phosphate to form a 3'-phosphotyrosine linkage. Early members of the SR family are closely related resolvase / DNA invertases from the bacterial transposons Tn3 and γδ, which rely on a catalytic serine responsible for attacking the scissile phosphate to form a 5'-phosphoserine linkage. These undisputed facts, however, were compromised by a good deal of confusion at the time other members entered the scene, for instance the YR recombinases Cre and Flp (capable of integration, excision/resolution as well as inversion), which were nevertheless welcomed as new members of the "integrase family". 
The converse examples are PhiC31 and related SRs, which were originally introduced as resolvase/invertases although, in the absence of auxiliary factors, integration is their only function. Nowadays the standard activity of each enzyme determines its classification reserving the general term "recombinase" for family members which, per se, comprise all three routes, INT, RES and INV: Our table extends the selection of the conventional SSR systems and groups these according to their performance. All of these enzymes recombine two target sites, which are either identical (subfamily A1) or distinct (phage-derived enzymes in A2, B1 and B2). Whereas for A1 these sites have individual designations ("FRT" in case of Flp-recombinase, loxP for Cre-recombinase), the terms "attP" and "attB" (attachment sites on the phage and bacterial part, respectively) are valid in the other cases. In case of subfamily A1 we have to deal with short (usually 34 bp-) sites consisting of two (near-)identical 13 bp arms (arrows) flanking an 8 bp spacer (the crossover region, indicated by red line doublets). Note that for Flp there is an alternative, 48 bp site available with three arms, each accommodating a Flp unit (a so-called "protomer"). attP- and attB-sites follow similar architectural rules, but here the arms show only partial identity (indicated by the broken lines) and differ in both cases. These features account for relevant differences: recombination of two identical educt sites leads to product sites with the same composition, although they contain arms from both substrates; these conversions are reversible; in case of attP x attB recombination crossovers can only occur between these complementary partners in processes that lead to two different products (attP x attB → attR + attL) in an irreversible fashion. In order to streamline this chapter the following implementations will be focused on two recombinases (Flp and Cre) and just one integrase (PhiC31) since their spectrum covers the tools which, at present, are mostly used for directed genome modifications. This will be done in the framework of the following overview. Reaction routes The mode integration/resolution and inversion (INT/RES and INV) depend on the orientation of recombinase target sites (RTS), among these pairs of attP and attB. Section C indicates, in a streamlined fashion, the way recombinase-mediated cassette exchange (RMCE) can be reached by synchronous double-reciprocal crossovers (rather than integration, followed by resolution). Tyr-Recombinases are reversible, while the Ser-Integrase is unidirectional. Of note is the way reversible Flp (a Tyr recombinase) integration/resolution is modulated by 48 bp (in place of 34 bp minimal) FRT versions: the extra 13 bp arm serves as a Flp "landing path" contributing to the formation of the synaptic complex, both in the context of Flp-INT and Flp-RMCE functions (see the respective equilibrium situations). While it is barely possible to prevent the (entropy-driven) reversion of integration in section A for Cre and hard to achieve for Flp, RMCE can be completed if the donor plasmid is provided at an excess due to the bimolecular character of both the forward- and the reverse reaction. Posing both FRT sites in an inverse manner will lead to an equilibrium of both orientations for the insert (green arrow). In contrast to Flp, the Ser integrase PhiC31 (bottom representations) leads to unidirectional integration, at least in the absence of an recombinase-directionality (RDF-)factor. 
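The minimal-site architecture described above, two (near-)identical 13 bp arms in inverted orientation flanking an 8 bp asymmetric spacer for a 34 bp total, can be checked programmatically. The sequence used below is the commonly cited wild-type loxP sequence; treat it as an illustrative example and verify it against a primary reference before relying on it, since the point of the sketch is only the arm/spacer decomposition.

```python
def revcomp(seq: str) -> str:
    """Reverse complement of a DNA sequence."""
    return seq.translate(str.maketrans("ACGT", "TGCA"))[::-1]

def describe_site(site: str, arm_len: int = 13, spacer_len: int = 8) -> None:
    """Split a recombinase target site into left arm, spacer and right arm."""
    assert len(site) == 2 * arm_len + spacer_len, "unexpected site length"
    left = site[:arm_len]
    spacer = site[arm_len:arm_len + spacer_len]
    right = site[arm_len + spacer_len:]
    inverted = (right == revcomp(left))   # the arms should form an inverted repeat
    print(f"left arm : {left}")
    print(f"spacer   : {spacer}  (asymmetric; sets the site's orientation)")
    print(f"right arm: {right}")
    print(f"arms form an inverted repeat: {inverted}")

# Commonly cited wild-type loxP sequence (34 bp) -- illustrative only.
LOXP = "ATAACTTCGTATAATGTATGCTATACGAAGTTAT"
describe_site(LOXP)
```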
Relative to Flp-RMCE, which requires two different ("heterospecific") FRT-spacer mutants, the reaction partner (attB) of the first reacting attP site is hit arbitrarily, such that there is no control over the direction the donor cassette enters the target (cf. the alternative products). Also different from Flp-RMCE, several distinct RMCE targets cannot be mounted in parallel, owing to the lack of heterospecific (non-crossinteracting) attP/attB combinations. Cre recombinase Cre recombinase (Cre) is able to recombine specific sequences of DNA without the need for cofactors. The enzyme recognizes 34 base pair DNA sequences called loxP ("locus of crossover in phage P1"). Depending on the orientation of target sites with respect to one another, Cre will integrate/excise or invert DNA sequences. Upon the excision (called "resolution" in case of a circular substrate) of a particular DNA region, normal gene expression is considerably compromised or terminated. Due to the pronounced resolution activity of Cre, one of its initial applications was the excision of loxP-flanked ("floxed") genes leading to cell-specific gene knockout of such a floxed gene after Cre becomes expressed in the tissue of interest. Current technologies incorporate methods, which allow for both the spatial and temporal control of Cre activity. A common method facilitating the spatial control of genetic alteration involves the selection of a tissue-specific promoter to drive Cre expression. Placement of Cre under control of such a promoter results in localized, tissue-specific expression. As an example, Leone et al. have placed the transcription unit under the control of the regulatory sequences of the myelin proteolipid protein (PLP) gene, leading to induced removal of targeted gene sequences in oligodendrocytes and Schwann cells. The specific DNA fragment recognized by Cre remains intact in cells, which do not express the PLP gene; this in turn facilitates empirical observation of the localized effects of genome alterations in the myelin sheath that surround nerve fibers in the central nervous system (CNS) and the peripheral nervous system (PNS). Selective Cre expression has been achieved in many other cell types and tissues as well. In order to control temporal activity of the excision reaction, forms of Cre which take advantage of various ligand binding domains have been developed. One successful strategy for inducing specific temporal Cre activity involves fusing the enzyme with a mutated ligand-binding domain for the human estrogen receptor (ERt). Upon the introduction of tamoxifen (an estrogen receptor antagonist), the Cre-ERt construct is able to penetrate the nucleus and induce targeted mutation. ERt binds tamoxifen with greater affinity than endogenous estrogens, which allows Cre-ERt to remain cytoplasmic in animals untreated with tamoxifen. The temporal control of SSR activity by tamoxifen permits genetic changes to be induced later in embryogenesis and/or in adult tissues. This allows researchers to bypass embryonic lethality while still investigating the function of targeted genes. Recent extensions of these general concepts led to generating the "Cre-zoo", i.e. collections of hundreds of mouse strains for which defined genes can be deleted by targeted Cre expression. Flp recombinase In its natural host (S. cerevisiae) the Flp/FRT system enables replication of a "2μ plasmid" by the inversion of a segment that is flanked by two identical, but oppositely oriented FRT sites ("flippase" activity). 
This inversion changes the relative orientation of replication forks within the plasmid enabling "rolling circle"—amplification of the circular 2μ entity before the multimeric intermediates are resolved to release multiple monomeric products. Whereas 34 bp minimal FRT sites favor excision/resolution to a similar extent as the analogue loxP sites for Cre, the natural, more extended 48 bp FRT variants enable a higher degree of integration, while overcoming certain promiscuous interactions as described for phage enzymes like Cre- and PhiC31. An additional advantage is the fact, that simple rules can be applied to generate heterospecific FRT sites which undergo crossovers with equal partners but nor with wild type FRTs. These facts have enabled, since 1994, the development and continuous refinements of recombinase-mediated cassette exchange (RMCE-)strategies permitting the clean exchange of a target cassette for an incoming donor cassette. Based on the RMCE technology, a particular resource of pre-characterized ES-strains that lends itself to further elaboration has evolved in the framework of the EUCOMM (European Conditional Mouse Mutagenesis) program, based on the now established Cre- and/or Flp-based "FlExing" (Flp-mediated excision/inversion) setups, involving the excision and inversion activities. Initiated in 2005, this project focused first on saturation mutagenesis to enable complete functional annotation of the mouse genome (coordinated by the International Knockout-Mouse Consortium, IKMC) with the ultimate goal to have all protein genes mutated via gene trapping and -targeting in murine ES cells. These efforts mark the top of various "tag-and-exchange" strategies, which are dedicated to tagging a distinct genomic site such that the "tag" can serve as an address to introduce novel (or alter existing) genetic information. The tagging step per se may address certain classes of integration sites by exploiting integration preferences of retroviruses or even site specific integrases like PhiC31, both of which act in an essentially unidirectional fashion. The traditional, laborious "tag-and-exchange" procedures relied on two successive homologous recombination (HR-)steps, the first one ("HR1") to introduce a tag consisting of a selection marker gene. "HR2" was then used to replace the marker by the "GOI. In the first ("knock-out"-) reaction the gene was tagged with a selectable marker, typically by insertion of a hygtk ([+/-]) cassette providing G418 resistance. In the following "knock-in" step, the tagged genomic sequence was replaced by homologous genomic sequences with certain mutations. Cell clones could then be isolated by their resistance to ganciclovir due to loss of the HSV-tk gene, i.e. ("negative selection"). This conventional two-step tag-and-exchange procedure could be streamlined after the advent of RMCE, which could take over and add efficiency to the knock-in step. PhiC31 integrase Without much doubt, Ser integrases are the current tools of choice for integrating transgenes into a restricted number of well-understood genomic acceptor sites that mostly (but not always) mimic the phage attP site in that they attract an attB-containing donor vector. At this time the most prominent member is PhiC31-INT with proven potential in the context of human and mouse genomes. Contrary to the above Tyr recombinases, PhiC31-INT as such acts in a unidirectional manner, firmly locking in the donor vector at a genomically anchored target. 
An obvious advantage of this system is that it can rely on unmodified, native attP (acceptor) and attB donor sites. Additional benefits (together with certain complications) may arise from the fact that mouse and human genomes per se contain a limited number of endogenous targets (so called "attP-pseudosites"). Available information suggests that considerable DNA sequence requirements let the integrase recognize fewer sites than retroviral or even transposase-based integration systems opening its career as a superior carrier vehicle for the transport and insertion at a number of well established genomic sites, some of which with so called "safe-harbor" properties. Exploiting the fact of specific (attP x attB) recombination routes, RMCE becomes possible without requirements for synthetic, heterospecific att-sites. This obvious advantage, however comes at the expense of certain shortcomings, such as lack of control about the kind or directionality of the entering (donor-) cassette. Further restrictions are imposed by the fact that irreversibility does not permit standard multiplexing-RMCE setups including "serial RMCE" reactions, i.e., repeated cassette exchanges at a given genomic locus. Outlook and perspectives Annotation of the human and mouse genomes has led to the identification of >20 000 protein-coding genes and >3 000 noncoding RNA genes, which guide the development of the organism from fertilization through embryogenesis to adult life. Although dramatic progress is noted, the relevance of rare gene variants has remained a central topic of research. As one of the most important platforms for dealing with vertebrate gene functions on a large scale, genome-wide genetic resources of mutant murine ES cells have been established. To this end four international programs aimed at saturation mutagenesis of the mouse genome have been founded in Europe and North America (EUCOMM, KOMP, NorCOMM, and TIGM). Coordinated by the International Knockout Mouse Consortium (IKSC) these ES-cell repositories are available for exchange between international research units. Present resources comprise mutations in 11 539 unique genes, 4 414 of these conditional. The relevant technologies have now reached a level permitting their extension to other mammalian species and to human stem cells, most prominently those with an iPS (induced pluripotent) status. See also Site-specific recombination Recombinase-mediated cassette exchange Cre recombinase Cre-Lox recombination FLP-FRT recombination Genetic recombination Homologous recombination References External links http://www.knockoutmouse.org/ Genetic engineering
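The orientation rule that runs through the Cre and Flp sections above, excision/resolution when the two target sites lie on the same molecule in the same orientation, inversion when they are oppositely oriented, and integration when they sit on two separate molecules, can be captured in a few lines. This is a schematic of the rule only; the real reaction equilibria, spacer compatibility requirements, and the unidirectionality of Ser integrases such as PhiC31 are not modelled.

```python
def recombination_outcome(same_molecule: bool, same_orientation: bool) -> str:
    """Schematic outcome of recombining two target sites (e.g. Cre/loxP or Flp/FRT)."""
    if not same_molecule:
        return "integration (co-integrate of the two molecules)"
    if same_orientation:
        return "excision / resolution (the flanked segment is released as a circle)"
    return "inversion (the flanked segment is flipped in place)"

print(recombination_outcome(same_molecule=True,  same_orientation=True))   # "floxed" gene removal
print(recombination_outcome(same_molecule=True,  same_orientation=False))  # 2-micron plasmid flip
print(recombination_outcome(same_molecule=False, same_orientation=True))   # plasmid insertion
```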
Site-specific recombinase technology
[ "Chemistry", "Engineering", "Biology" ]
3,728
[ "Biological engineering", "Genetic engineering", "Molecular biology" ]
4,385,997
https://en.wikipedia.org/wiki/Voltage%20droop
Voltage droop is the intentional loss in output voltage from a device as it drives a load. Adding droop in a voltage regulation circuit increases the headroom for load transients. All electrical systems have some amount of resistance between the regulator output and the load. At high currents, even a small resistance results in a substantial voltage drop between the regulator and the load. Conversely, when the output current is (near) zero, the voltage at the load is higher. This follows from Ohm's law. Rather than increasing the output voltage at high current to try to maintain the same load voltage, droop simply allows this drop to take place and the design accounts for it. The behaviour of the system with and without droop is as follows: In a regulator not employing droop, when the load is suddenly increased very rapidly (i.e. a transient), the output voltage will momentarily sag. Conversely, when a heavy load is suddenly disconnected, the voltage will show a peak. The output decoupling capacitors have to "absorb" these transients before the control loop has a chance to compensate. The maximum allowed voltage swing in such a transient is the difference between the nominal output voltage and the specified limit, i.e. half of the total tolerance window. Comparing this to a regulator with droop, we find that the maximum allowed swing has doubled: it is now the full window between the upper and lower limits, because the output sits near the upper limit at light load and near the lower limit at heavy load. This increased tolerance to transients allows us to decrease the number of output capacitors, or get better regulation with the same number of capacitors. References Firgelli application note, https://www.firgelliauto.com/blogs/news/voltage-drop-calculator Speed Droop and Power Generation. Application Note 01302. Woodward Governor Company (2004). Intersil Application Note 1021 (June 2002) Electrical parameters Droop
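A small numerical example makes the headroom argument concrete. The sketch below positions the no-load output near the upper tolerance limit and lets the voltage droop linearly with load current, then compares the downward transient swing available with that of a regulator held at the nominal voltage. The supply rail, limits and droop resistance are invented round numbers for illustration, not values from the cited application notes.

```python
# Illustrative comparison of transient headroom with and without droop.
# All numbers are made-up round values for a hypothetical 1.2 V rail.

V_NOM = 1.20   # nominal output voltage, V
V_MAX = 1.25   # upper tolerance limit, V
V_MIN = 1.15   # lower tolerance limit, V
I_MAX = 20.0   # full load current, A

# Droop ("load line"): start at V_MAX with no load, reach V_MIN at full load.
R_DROOP = (V_MAX - V_MIN) / I_MAX   # ohms

def v_out_with_droop(i_load: float) -> float:
    return V_MAX - R_DROOP * i_load

def headroom(v_start: float) -> float:
    """Largest downward swing a sudden load step may cause before hitting V_MIN."""
    return v_start - V_MIN

# Without droop the output sits at V_NOM at light load; with droop it sits near V_MAX.
print(f"Droop resistance: {R_DROOP * 1000:.1f} milliohm")
print(f"Headroom without droop: {headroom(V_NOM) * 1000:.0f} mV")
print(f"Headroom with droop   : {headroom(v_out_with_droop(0.0)) * 1000:.0f} mV")
```

With these numbers the headroom goes from 50 mV to 100 mV, which is the doubling of the allowed swing described in the text.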
Voltage droop
[ "Physics", "Engineering" ]
365
[ "Physical quantities", "Voltage", "Electrical engineering", "Voltage stability", "Electrical parameters" ]
4,387,116
https://en.wikipedia.org/wiki/Atom%20optics
Atom optics (or atomic optics) "refers to techniques to manipulate the trajectories and exploit the wave properties of neutral atoms". Typical experiments employ beams of cold, slowly moving neutral atoms, as a special case of a particle beam. Like an optical beam, the atomic beam may exhibit diffraction and interference, and can be focused with a Fresnel zone plate or a concave atomic mirror. For comprehensive overviews of atom optics, see the 1994 review by Adams, Sigel, and Mlynek or the 2009 review by Cronin, Schmiedmayer, and Pritchard. More bibliography about atom optics can be found in the 2017 Resource Letter in the American Journal of Physics. For quantum atom optics see the 2018 review by Pezzè et al. History Interference of atom matter waves was first observed by Estermann and Stern in 1930, when a Na beam was diffracted off a surface of NaCl. The short de Broglie wavelength of atoms prevented progress for many years until two technological breakthroughs revived interest: microlithography, allowing precise small devices, and laser cooling, allowing atoms to be slowed, which increases their de Broglie wavelength. Until 2006, the resolution of imaging systems based on atomic beams was not better than that of an optical microscope, mainly due to the poor performance of the focusing elements. Such elements use a small numerical aperture; usually, atomic mirrors use grazing incidence, and the reflectivity drops drastically as the grazing angle increases; for efficient normal reflection, atoms should be ultracold, and dealing with such atoms usually involves magnetic, magneto-optical or optical traps. At the beginning of the 21st century scientific publications about "atom nano-optics", evanescent field lenses and ridged mirrors showed significant improvement. In particular, an atomic hologram can be realized. See also Atom interferometer Atomic nanoscope Electron microscope Quantum reflection External links Former website of the Arizona research group. References Books Atomic, molecular, and optical physics
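The point about laser cooling increasing the de Broglie wavelength can be quantified with λ = h/(mv). The sketch below evaluates this for a sodium atom at a typical thermal-beam speed and at a laser-cooled speed; the two speeds are round illustrative values, not data from a particular experiment.

```python
# de Broglie wavelength lambda = h / (m * v) for a sodium atom at two speeds.
PLANCK_H = 6.62607015e-34           # J*s
ATOMIC_MASS_UNIT = 1.66053907e-27   # kg
M_NA = 23 * ATOMIC_MASS_UNIT        # approximate mass of a sodium-23 atom, kg

def de_broglie_wavelength(mass_kg: float, speed_m_s: float) -> float:
    return PLANCK_H / (mass_kg * speed_m_s)

for label, speed in [("thermal beam (~600 m/s)", 600.0),
                     ("laser cooled (~0.1 m/s)", 0.1)]:
    lam = de_broglie_wavelength(M_NA, speed)
    print(f"{label:25s} lambda = {lam * 1e9:.3f} nm")
```

The wavelength grows from a small fraction of a nanometre to well over 100 nm as the speed drops, which is why slow, cold atoms are the natural regime for atom-optical experiments.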
Atom optics
[ "Physics", "Chemistry" ]
402
[ "Atomic, molecular, and optical physics stubs", "Physical chemistry stubs", "Atomic, molecular, and optical physics" ]
4,387,406
https://en.wikipedia.org/wiki/Equations%20for%20a%20falling%20body
A set of equations describing the trajectories of objects subject to a constant gravitational force under normal Earth-bound conditions. Assuming constant acceleration g due to Earth's gravity, Newton's law of universal gravitation simplifies to F = mg, where F is the force exerted on a mass m by the Earth's gravitational field of strength g. Assuming constant g is reasonable for objects falling to Earth over the relatively short vertical distances of our everyday experience, but is not valid for greater distances involved in calculating more distant effects, such as spacecraft trajectories. History Galileo was the first to demonstrate and then formulate these equations. He used a ramp to study rolling balls, the ramp slowing the acceleration enough to measure the time taken for the ball to roll a known distance. He measured elapsed time with a water clock, using an "extremely accurate balance" to measure the amount of water. The equations ignore air resistance, which has a dramatic effect on objects falling an appreciable distance in air, causing them to quickly approach a terminal velocity. The effect of air resistance varies enormously depending on the size and geometry of the falling object—for example, the equations are hopelessly wrong for a feather, which has a low mass but offers a large resistance to the air. (In the absence of an atmosphere all objects fall at the same rate, as astronaut David Scott demonstrated by dropping a hammer and a feather on the surface of the Moon.) The equations also ignore the rotation of the Earth, failing to describe the Coriolis effect for example. Nevertheless, they are usually accurate enough for dense and compact objects falling over heights not exceeding the tallest man-made structures. Overview Near the surface of the Earth, the acceleration due to gravity g = 9.807 m/s² (metres per second squared, which might be thought of as "metres per second, per second"; or 32.18 ft/s² as "feet per second per second") approximately. A coherent set of units for g, d, t and v is essential. Assuming SI units, g is measured in metres per second squared, so d must be measured in metres, t in seconds and v in metres per second. In all cases, the body is assumed to start from rest, and air resistance is neglected. Generally, in Earth's atmosphere, all results below will therefore be quite inaccurate after only 5 seconds of fall (at which time an object's velocity will be a little less than the vacuum value of 49 m/s (9.8 m/s² × 5 s) due to air resistance). Air resistance induces a drag force on any body that falls through any atmosphere other than a perfect vacuum, and this drag force increases with velocity until it equals the gravitational force, leaving the object to fall at a constant terminal velocity. Terminal velocity depends on atmospheric drag, the coefficient of drag for the object, the (instantaneous) velocity of the object, and the area presented to the airflow. Apart from the last formula, these formulas also assume that g negligibly varies with height during the fall (that is, they assume constant acceleration). The last equation is more accurate where significant changes in fractional distance from the centre of the planet during the fall cause significant changes in g. This equation occurs in many applications of basic physics. 
The following equations start from the general equations of linear motion and the equation for universal gravitation (where r + d is the distance of the object above the ground from the center of mass of the planet): Equations Example The first equation shows that, after one second, an object will have fallen a distance of 1/2 × 9.8 × 1² = 4.9 m. After two seconds it will have fallen 1/2 × 9.8 × 2² = 19.6 m; and so on. On the other hand, the penultimate equation becomes grossly inaccurate at great distances. If an object fell 10000 m to Earth, then the results of both equations differ by only 0.08%; however, if it fell from geosynchronous orbit, which is 42164 km, then the difference changes to almost 64%. Based on wind resistance, for example, the terminal velocity of a skydiver in a belly-to-earth (i.e., face down) free-fall position is about 195 km/h (122 mph or 54 m/s). This velocity is the asymptotic limiting value of the acceleration process, because the effective forces on the body balance each other more and more closely as the terminal velocity is approached. In this example, a speed of 50% of terminal velocity is reached after only about 3 seconds, while it takes 8 seconds to reach 90%, 15 seconds to reach 99% and so on. Higher speeds can be attained if the skydiver pulls in his or her limbs (see also freeflying). In this case, the terminal velocity increases to about 320 km/h (200 mph or 90 m/s), which is almost the terminal velocity of the peregrine falcon diving down on its prey. The same terminal velocity is reached for a typical .30-06 bullet dropping downwards—when it is returning to earth having been fired upwards, or dropped from a tower—according to a 1920 U.S. Army Ordnance study. For astronomical bodies other than Earth, and for short distances of fall at other than "ground" level, g in the above equations may be replaced by G(M + m)/r², where G is the gravitational constant, M is the mass of the astronomical body, m is the mass of the falling body, and r is the radius from the falling object to the center of the astronomical body. Removing the simplifying assumption of uniform gravitational acceleration provides more accurate results. We find from the formula for radial elliptic trajectories: The time taken for an object to fall from a height r to a height x, measured from the centers of the two bodies, is given by: where μ is the sum of the standard gravitational parameters of the two bodies. This equation should be used whenever there is a significant difference in the gravitational acceleration during the fall. Note that when x = r this equation gives t = 0, as expected; and when x = 0 it gives the total fall duration, which is the time to collision. Acceleration relative to the rotating Earth Centripetal force causes the acceleration measured on the rotating surface of the Earth to differ from the acceleration that is measured for a free-falling body: the apparent acceleration in the rotating frame of reference is the total gravity vector minus a small vector toward the north–south axis of the Earth, corresponding to staying stationary in that frame of reference. See also De motu antiquiora and Two New Sciences (the earliest modern investigations of the motion of falling bodies) Equations of motion Free fall Gravity Mean speed theorem, the foundation of the law of falling bodies Radial trajectory Notes References External links Falling body equations calculator Gravity Equations Falling
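A short sketch contrasting the constant-acceleration estimate with a crude numerical integration in which the acceleration weakens as 1/r² with height. The integrator and step size are rough choices made only for illustration, so the printed times are indicative rather than a reproduction of the percentage figures quoted above.

```python
# Compare d = (1/2) g t^2 (constant g) with a simple Euler-type integration
# of a radial fall in which the acceleration is GM/r^2.
G = 6.674e-11        # gravitational constant (m^3 kg^-1 s^-2)
M_EARTH = 5.972e24   # mass of the Earth (kg)
R_EARTH = 6.371e6    # mean radius of the Earth (m)
G0 = 9.807           # surface gravity (m/s^2)

def fall_time_constant_g(height_m):
    # Invert d = (1/2) g t^2 for the fall time from rest.
    return (2.0 * height_m / G0) ** 0.5

def fall_time_variable_g(height_m, dt=0.01):
    # Semi-implicit Euler integration of a radial fall from rest.
    r, v, t = R_EARTH + height_m, 0.0, 0.0
    while r > R_EARTH:
        a = G * M_EARTH / (r * r)
        v += a * dt
        r -= v * dt
        t += dt
    return t

GEO_RADIUS = 42_164_000.0   # geosynchronous orbital radius (m)
for h in (10_000.0, GEO_RADIUS - R_EARTH):
    print(f"fall height {h / 1000.0:8.0f} km: "
          f"constant g {fall_time_constant_g(h):8.1f} s, "
          f"variable g {fall_time_variable_g(h):8.1f} s")
```

The two estimates agree closely for a 10 km drop but diverge badly for a drop from geosynchronous distance, which is the point of the comparison in the text.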
Equations for a falling body
[ "Mathematics" ]
1,395
[ "Mathematical objects", "Equations" ]
4,387,697
https://en.wikipedia.org/wiki/Structure%E2%80%93activity%20relationship
The structure–activity relationship (SAR) is the relationship between the chemical structure of a molecule and its biological activity. This idea was first presented by Alexander Crum Brown and Thomas Richard Fraser at least as early as 1868. The analysis of SAR enables the determination of the chemical group responsible for evoking a target biological effect in the organism. This allows modification of the effect or the potency of a bioactive compound (typically a drug) by changing its chemical structure. Medicinal chemists use the techniques of chemical synthesis to insert new chemical groups into the biomedical compound and test the modifications for their biological effects. This method was refined to build mathematical relationships between the chemical structure and the biological activity, known as quantitative structure–activity relationships (QSAR). A related term is structure affinity relationship (SAFIR). Structure-biodegradability relationship The large number of synthetic organic chemicals currently in production presents a major challenge for timely collection of detailed environmental data on each compound. The concept of structure biodegradability relationships (SBR) has been applied to explain variability in persistence among organic chemicals in the environment. Early attempts generally consisted of examining the degradation of a homologous series of structurally related compounds under identical conditions with a complex "universal" inoculum, typically derived from numerous sources. This approach revealed that the nature and positions of substituents affected the apparent biodegradability of several chemical classes, with resulting general themes, such as halogens generally conferring persistence under aerobic conditions. Subsequently, more quantitative approaches have been developed using principles of QSAR and often accounting for the role of sorption (bioavailability) in chemical fate. See also Combinatorial chemistry Congener Conformation activity relationship Quantitative structure–activity relationship Pharmacophore References External links Molecular Property Explorer QSAR World Medicinal chemistry
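As a toy illustration of the QSAR idea, one can fit a linear model relating descriptor values to a measured activity. The descriptors, compounds and activity numbers below are invented for illustration; real QSAR work uses curated experimental data and validated descriptors.

```python
import numpy as np

# Toy QSAR: fit activity ~ w0 + w1*logP + w2*molar_mass on a tiny invented dataset.
descriptors = np.array([
    [1.2, 180.0],   # each row: [logP, molar mass] of a hypothetical analogue
    [2.1, 194.0],
    [2.8, 208.0],
    [3.5, 222.0],
    [4.1, 236.0],
])
activity = np.array([5.1, 5.9, 6.4, 6.8, 6.9])   # e.g. pIC50, invented numbers

X = np.column_stack([np.ones(len(descriptors)), descriptors])  # add intercept column
coef, *_ = np.linalg.lstsq(X, activity, rcond=None)

print("intercept, weight(logP), weight(mass):", np.round(coef, 3))
candidate = np.array([1.0, 3.0, 215.0])           # a new hypothetical analogue
print("predicted activity:", round(float(candidate @ coef), 2))
```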
Structure–activity relationship
[ "Chemistry", "Biology" ]
372
[ "Biochemistry", "nan", "Medicinal chemistry" ]
386,214
https://en.wikipedia.org/wiki/Wallpaper%20group
A wallpaper group (or plane symmetry group or plane crystallographic group) is a mathematical classification of a two-dimensional repetitive pattern, based on the symmetries in the pattern. Such patterns occur frequently in architecture and decorative art, especially in textiles, tiles, and wallpaper. The simplest wallpaper group, Group p1, applies when there is no symmetry beyond simple translation of a pattern in two dimensions. The following patterns have more forms of symmetry, including some rotational and reflectional symmetries: Examples A and B have the same wallpaper group; it is called p4m in the IUCr notation and *442 in the orbifold notation. Example C has a different wallpaper group, called p4g or 4*2 . The fact that A and B have the same wallpaper group means that they have the same symmetries, regardless of the designs' superficial details; whereas C has a different set of symmetries. The number of symmetry groups depends on the number of dimensions in the patterns. Wallpaper groups apply to the two-dimensional case, intermediate in complexity between the simpler frieze groups and the three-dimensional space groups. A proof that there are only 17 distinct groups of such planar symmetries was first carried out by Evgraf Fedorov in 1891 and then derived independently by George Pólya in 1924. The proof that the list of wallpaper groups is complete came only after the much harder case of space groups had been done. The seventeen wallpaper groups are listed below; see . Symmetries of patterns A symmetry of a pattern is, loosely speaking, a way of transforming the pattern so that it looks exactly the same after the transformation. For example, translational symmetry is present when the pattern can be translated (in other words, shifted) some finite distance and appear unchanged. Think of shifting a set of vertical stripes horizontally by one stripe. The pattern is unchanged. Strictly speaking, a true symmetry only exists in patterns that repeat exactly and continue indefinitely. A set of only, say, five stripes does not have translational symmetry—when shifted, the stripe on one end "disappears" and a new stripe is "added" at the other end. In practice, however, classification is applied to finite patterns, and small imperfections may be ignored. The types of transformations that are relevant here are called Euclidean plane isometries. For example: If one shifts example B one unit to the right, so that each square covers the square that was originally adjacent to it, then the resulting pattern is exactly the same as the starting pattern. This type of symmetry is called a translation. Examples A and C are similar, except that the smallest possible shifts are in diagonal directions. If one turns example B clockwise by 90°, around the centre of one of the squares, again one obtains exactly the same pattern. This is called a rotation. Examples A and C also have 90° rotations, although it requires a little more ingenuity to find the correct centre of rotation for C. One can also flip example B across a horizontal axis that runs across the middle of the image. This is called a reflection. Example B also has reflections across a vertical axis, and across two diagonal axes. The same can be said for A. However, example C is different. It only has reflections in horizontal and vertical directions, not across diagonal axes. If one flips across a diagonal line, one does not get the same pattern back, but the original pattern shifted across by a certain distance. 
This is part of the reason that the wallpaper group of A and B is different from the wallpaper group of C. Another transformation is "Glide", a combination of reflection and translation parallel to the line of reflection. Formal definition and discussion Mathematically, a wallpaper group or plane crystallographic group is a type of topologically discrete group of isometries of the Euclidean plane that contains two linearly independent translations. Two such isometry groups are of the same type (of the same wallpaper group) if they are the same up to an affine transformation of the plane. Thus e.g. a translation of the plane (hence a translation of the mirrors and centres of rotation) does not affect the wallpaper group. The same applies for a change of angle between translation vectors, provided that it does not add or remove any symmetry (this is only the case if there are no mirrors and no glide reflections, and rotational symmetry is at most of order 2). Unlike in the three-dimensional case, one can equivalently restrict the affine transformations to those that preserve orientation. It follows from the Bieberbach conjecture that all wallpaper groups are different even as abstract groups (as opposed to e.g. frieze groups, of which two are isomorphic with Z). 2D patterns with double translational symmetry can be categorized according to their symmetry group type. Isometries of the Euclidean plane Isometries of the Euclidean plane fall into four categories (see the article Euclidean plane isometry for more information). Translations, denoted by Tv, where v is a vector in R2. This has the effect of shifting the plane applying displacement vector v. Rotations, denoted by Rc,θ, where c is a point in the plane (the centre of rotation), and θ is the angle of rotation. Reflections, or mirror isometries, denoted by FL, where L is a line in R2. (F is for "flip"). This has the effect of reflecting the plane in the line L, called the reflection axis or the associated mirror. Glide reflections, denoted by GL,d, where L is a line in R2 and d is a distance. This is a combination of a reflection in the line L and a translation along L by a distance d. The independent translations condition The condition on linearly independent translations means that there exist linearly independent vectors v and w (in R2) such that the group contains both Tv and Tw. The purpose of this condition is to distinguish wallpaper groups from frieze groups, which possess a translation but not two linearly independent ones, and from two-dimensional discrete point groups, which have no translations at all. In other words, wallpaper groups represent patterns that repeat themselves in two distinct directions, in contrast to frieze groups, which only repeat along a single axis. (It is possible to generalise this situation. One could for example study discrete groups of isometries of Rn with m linearly independent translations, where m is any integer in the range 0 ≤ m ≤ n.) The discreteness condition The discreteness condition means that there is some positive real number ε, such that for every translation Tv in the group, the vector v has length at least ε (except of course in the case that v is the zero vector, but the independent translations condition prevents this, since any set that contains the zero vector is linearly dependent by definition and thus disallowed). 
The purpose of this condition is to ensure that the group has a compact fundamental domain, or in other words, a "cell" of nonzero, finite area, which is repeated through the plane. Without this condition, one might have for example a group containing the translation Tx for every rational number x, which would not correspond to any reasonable wallpaper pattern. One important and nontrivial consequence of the discreteness condition in combination with the independent translations condition is that the group can only contain rotations of order 2, 3, 4, or 6; that is, every rotation in the group must be a rotation by 180°, 120°, 90°, or 60°. This fact is known as the crystallographic restriction theorem, and can be generalised to higher-dimensional cases. Notations for wallpaper groups Crystallographic notation Crystallography has 230 space groups to distinguish, far more than the 17 wallpaper groups, but many of the symmetries in the groups are the same. Thus one can use a similar notation for both kinds of groups, that of Carl Hermann and Charles-Victor Mauguin. An example of a full wallpaper name in Hermann-Mauguin style (also called IUCr notation) is p31m, with four letters or digits; more usual is a shortened name like cmm or pg. For wallpaper groups the full notation begins with either p or c, for a primitive cell or a face-centred cell; these are explained below. This is followed by a digit, n, indicating the highest order of rotational symmetry: 1-fold (none), 2-fold, 3-fold, 4-fold, or 6-fold. The next two symbols indicate symmetries relative to one translation axis of the pattern, referred to as the "main" one; if there is a mirror perpendicular to a translation axis that is the main one (or if there are two, one of them). The symbols are either m, g, or 1, for mirror, glide reflection, or none. The axis of the mirror or glide reflection is perpendicular to the main axis for the first letter, and either parallel or tilted 180°/n (when n > 2) for the second letter. Many groups include other symmetries implied by the given ones. The short notation drops digits or an m that can be deduced, so long as that leaves no confusion with another group. A primitive cell is a minimal region repeated by lattice translations. All but two wallpaper symmetry groups are described with respect to primitive cell axes, a coordinate basis using the translation vectors of the lattice. In the remaining two cases symmetry description is with respect to centred cells that are larger than the primitive cell, and hence have internal repetition; the directions of their sides is different from those of the translation vectors spanning a primitive cell. Hermann-Mauguin notation for crystal space groups uses additional cell types. Examples p2 (p2): Primitive cell, 2-fold rotation symmetry, no mirrors or glide reflections. p4gm (p4gm): Primitive cell, 4-fold rotation, glide reflection perpendicular to main axis, mirror axis at 45°. c2mm (c2mm): Centred cell, 2-fold rotation, mirror axes both perpendicular and parallel to main axis. p31m (p31m): Primitive cell, 3-fold rotation, mirror axis at 60°. Here are all the names that differ in short and full notation. {| class="wikitable" |+ Crystallographic short and full names ! Short ! pm ! pg ! cm ! pmm ! pmg ! pgg ! cmm ! p4m ! p4g ! p6m |- style="background:white" ! Full || p1m1 || p1g1 || c1m1 || p2mm || p2mg || p2gg || c2mm || p4mm || p4gm || p6mm |} The remaining names are p1, p2, p3, p3m1, p31m, p4, and p6. 
Orbifold notation Orbifold notation for wallpaper groups, advocated by John Horton Conway (Conway, 1992) (Conway 2008), is based not on crystallography, but on topology. One can fold the infinite periodic tiling of the plane into its essence, an orbifold, then describe that with a few symbols. A digit, n, indicates a centre of n-fold rotation corresponding to a cone point on the orbifold. By the crystallographic restriction theorem, n must be 2, 3, 4, or 6. An asterisk, *, indicates a mirror symmetry corresponding to a boundary of the orbifold. It interacts with the digits as follows: Digits before * denote centres of pure rotation (cyclic). Digits after * denote centres of rotation with mirrors through them, corresponding to "corners" on the boundary of the orbifold (dihedral). A cross, ×, occurs when a glide reflection is present and indicates a crosscap on the orbifold. Pure mirrors combine with lattice translation to produce glides, but those are already accounted for so need no notation. The "no symmetry" symbol, o, stands alone, and indicates there are only lattice translations with no other symmetry. The orbifold with this symbol is a torus; in general the symbol o denotes a handle on the orbifold. The group denoted in crystallographic notation by cmm will, in Conway's notation, be 2*22. The 2 before the * says there is a 2-fold rotation centre with no mirror through it. The * itself says there is a mirror. The first 2 after the * says there is a 2-fold rotation centre on a mirror. The final 2 says there is an independent second 2-fold rotation centre on a mirror, one that is not a duplicate of the first one under symmetries. The group denoted by pgg will be 22×. There are two pure 2-fold rotation centres, and a glide reflection axis. Contrast this with pmg, Conway 22*, where crystallographic notation mentions a glide, but one that is implicit in the other symmetries of the orbifold. Coxeter's bracket notation is also included, based on reflectional Coxeter groups, and modified with plus superscripts accounting for rotations, improper rotations and translations. Why there are exactly seventeen groups An orbifold can be viewed as a polygon with face, edges, and vertices which can be unfolded to form a possibly infinite set of polygons which tile either the sphere, the plane or the hyperbolic plane. When it tiles the plane it will give a wallpaper group and when it tiles the sphere or hyperbolic plane it gives either a spherical symmetry group or Hyperbolic symmetry group. The type of space the polygons tile can be found by calculating the Euler characteristic, χ = V − E + F, where V is the number of corners (vertices), E is the number of edges and F is the number of faces. If the Euler characteristic is positive then the orbifold has an elliptic (spherical) structure; if it is zero then it has a parabolic structure, i.e. a wallpaper group; and if it is negative it will have a hyperbolic structure. When the full set of possible orbifolds is enumerated it is found that only 17 have Euler characteristic 0. When an orbifold replicates by symmetry to fill the plane, its features create a structure of vertices, edges, and polygon faces, which must be consistent with the Euler characteristic. Reversing the process, one can assign numbers to the features of the orbifold, but fractions, rather than whole numbers. 
Because the orbifold itself is a quotient of the full surface by the symmetry group, the orbifold Euler characteristic is a quotient of the surface Euler characteristic by the order of the symmetry group. The orbifold Euler characteristic is 2 minus the sum of the feature values, assigned as follows: A digit n without or before a * counts as (n − 1)/n. A digit n after a * counts as (n − 1)/(2n). Both * and × count as 1. The "no symmetry" o counts as 2. For a wallpaper group, the sum for the characteristic must be zero; thus the feature sum must be 2. Examples 632: 5/6 + 2/3 + 1/2 = 2 3*3: 2/3 + 1 + 1/3 = 2 4*2: 3/4 + 1 + 1/4 = 2 22×: 1/2 + 1/2 + 1 = 2 (a short computational check of these sums appears at the end of this article). Now enumeration of all wallpaper groups becomes a matter of arithmetic, of listing all feature strings with values summing to 2. Feature strings with other sums are not nonsense; they imply non-planar tilings, not discussed here. (When the orbifold Euler characteristic is negative, the tiling is hyperbolic; when positive, spherical or bad). Guide to recognizing wallpaper groups To work out which wallpaper group corresponds to a given design, one may use the following table. See also this overview with diagrams. The seventeen groups Each of the groups in this section has two cell structure diagrams, which are to be interpreted as follows (it is the shape that is significant, not the colour): On the right-hand side diagrams, different equivalence classes of symmetry elements are colored (and rotated) differently. The brown or yellow area indicates a fundamental domain, i.e. the smallest part of the pattern that is repeated. The diagrams on the right show the cell of the lattice corresponding to the smallest translations; those on the left sometimes show a larger area. Group p1 (o) Orbifold signature: o Coxeter notation (rectangular): [∞+,2,∞+] or [∞]+×[∞]+ Lattice: oblique Point group: C1 The group p1 contains only translations; there are no rotations, reflections, or glide reflections. Examples of group p1 The two translations (cell sides) can each have different lengths, and can form any angle. Group p2 (2222) Orbifold signature: 2222 Coxeter notation (rectangular): [∞,2,∞]+ Lattice: oblique Point group: C2 The group p2 contains four rotation centres of order two (180°), but no reflections or glide reflections. Examples of group p2 Group pm (**) Orbifold signature: ** Coxeter notation: [∞,2,∞+] or [∞+,2,∞] Lattice: rectangular Point group: D1 The group pm has no rotations. It has reflection axes; they are all parallel. Examples of group pm (The first three have a vertical symmetry axis, and the last two each have a different diagonal one.) Group pg (××) Orbifold signature: ×× Coxeter notation: [(∞,2)+,∞+] or [∞+,(2,∞)+] Lattice: rectangular Point group: D1 The group pg contains glide reflections only, and their axes are all parallel. There are no rotations or reflections. Examples of group pg Without the details inside the zigzag bands the mat is pmg; with the details but without the distinction between brown and black it is pgg. Ignoring the wavy borders of the tiles, the pavement is pgg. Group cm (*×) Orbifold signature: *× Coxeter notation: [∞+,2+,∞] or [∞,2+,∞+] Lattice: rhombic Point group: D1 The group cm contains no rotations. It has reflection axes, all parallel. There is at least one glide reflection whose axis is not a reflection axis; it is halfway between two adjacent parallel reflection axes. This group applies for symmetrically staggered rows (i.e.
there is a shift per row of half the translation distance inside the rows) of identical objects, which have a symmetry axis perpendicular to the rows. Examples of group cm Group pmm (*2222) Orbifold signature: *2222 Coxeter notation (rectangular): [∞,2,∞] or [∞]×[∞] Coxeter notation (square): [4,1+,4] or [1+,4,4,1+] Lattice: rectangular Point group: D2 The group pmm has reflections in two perpendicular directions, and four rotation centres of order two (180°) located at the intersections of the reflection axes. Examples of group pmm Group pmg (22*) Orbifold signature: 22* Coxeter notation: [(∞,2)+,∞] or [∞,(2,∞)+] Lattice: rectangular Point group: D2 The group pmg has two rotation centres of order two (180°), and reflections in only one direction. It has glide reflections whose axes are perpendicular to the reflection axes. The centres of rotation all lie on glide reflection axes. Examples of group pmg Group pgg (22×) Orbifold signature: 22× Coxeter notation (rectangular): [((∞,2)+,(∞,2)+)] Coxeter notation (square): [4+,4+] Lattice: rectangular Point group: D2 The group pgg contains two rotation centres of order two (180°), and glide reflections in two perpendicular directions. The centres of rotation are not located on the glide reflection axes. There are no reflections. Examples of group pgg Group cmm (2*22) Orbifold signature: 2*22 Coxeter notation (rhombic): [∞,2+,∞] Coxeter notation (square): [(4,4,2+)] Lattice: rhombic Point group: D2 The group cmm has reflections in two perpendicular directions, and a rotation of order two (180°) whose centre is not on a reflection axis. It also has two rotations whose centres are on a reflection axis. This group is frequently seen in everyday life, since the most common arrangement of bricks in a brick building (running bond) utilises this group (see example below). The rotational symmetry of order 2 with centres of rotation at the centres of the sides of the rhombus is a consequence of the other properties. The pattern corresponds to each of the following: symmetrically staggered rows of identical doubly symmetric objects a checkerboard pattern of two alternating rectangular tiles, of which each, by itself, is doubly symmetric a checkerboard pattern of alternatingly a 2-fold rotationally symmetric rectangular tile and its mirror image Examples of group cmm Group p4 (442) [[File:SymBlend p4.svg|right|thumb|Example and diagram for p4]] [[File:Wallpaper group diagram p4 square.svg|left|thumb|Cell structure for p4]] Orbifold signature: 442 Coxeter notation: [4,4]+ Lattice: square Point group: C4 The group p4 has two rotation centres of order four (90°), and one rotation centre of order two (180°). It has no reflections or glide reflections. Examples of group p4 A p4 pattern can be looked upon as a repetition in rows and columns of equal square tiles with 4-fold rotational symmetry. Also it can be looked upon as a checkerboard pattern of two such tiles, a factor smaller and rotated 45°. Group p4m (*442) Orbifold signature: *442 Coxeter notation: [4,4] Lattice: square Point group: D4 The group p4m has two rotation centres of order four (90°), and reflections in four distinct directions (horizontal, vertical, and diagonals). It has additional glide reflections whose axes are not reflection axes; rotations of order two (180°) are centred at the intersection of the glide reflection axes. All rotation centres lie on reflection axes. This corresponds to a straightforward grid of rows and columns of equal squares with the four reflection axes. 
Also it corresponds to a checkerboard pattern of two of such squares. Examples of group p4m Examples displayed with the smallest translations horizontal and vertical (like in the diagram): Examples displayed with the smallest translations diagonal: Group p4g (4*2) Orbifold signature: 4*2 Coxeter notation: [4+,4] Lattice: square Point group: D4 The group p4g has two centres of rotation of order four (90°), which are each other's mirror image, but it has reflections in only two directions, which are perpendicular. There are rotations of order two (180°) whose centres are located at the intersections of reflection axes. It has glide reflections axes parallel to the reflection axes, in between them, and also at an angle of 45° with these. A p4g pattern can be looked upon as a checkerboard pattern of copies of a square tile with 4-fold rotational symmetry, and its mirror image. Alternatively it can be looked upon (by shifting half a tile) as a checkerboard pattern of copies of a horizontally and vertically symmetric tile and its 90° rotated version. Note that neither applies for a plain checkerboard pattern of black and white tiles, this is group p4m (with diagonal translation cells). Examples of group p4g Group p3 (333) [[File:SymBlend p3.svg|right|thumb|Example and diagram for p3]] [[File:Wallpaper group diagram p3.svg|left|thumb|Cell structure for p3]] Orbifold signature: 333 Coxeter notation: [(3,3,3)]+ or [3[3]]+ Lattice: hexagonal Point group: C3 The group p3 has three different rotation centres of order three (120°), but no reflections or glide reflections. Imagine a tessellation of the plane with equilateral triangles of equal size, with the sides corresponding to the smallest translations. Then half of the triangles are in one orientation, and the other half upside down. This wallpaper group corresponds to the case that all triangles of the same orientation are equal, while both types have rotational symmetry of order three, but the two are not equal, not each other's mirror image, and not both symmetric (if the two are equal it is p6, if they are each other's mirror image it is p31m, if they are both symmetric it is p3m1; if two of the three apply then the third also, and it is p6m). For a given image, three of these tessellations are possible, each with rotation centres as vertices, i.e. for any tessellation two shifts are possible. In terms of the image: the vertices can be the red, the blue or the green triangles. Equivalently, imagine a tessellation of the plane with regular hexagons, with sides equal to the smallest translation distance divided by . Then this wallpaper group corresponds to the case that all hexagons are equal (and in the same orientation) and have rotational symmetry of order three, while they have no mirror image symmetry (if they have rotational symmetry of order six it is p6, if they are symmetric with respect to the main diagonals it is p31m, if they are symmetric with respect to lines perpendicular to the sides it is p3m1; if two of the three apply then the third also, it is p6m). For a given image, three of these tessellations are possible, each with one third of the rotation centres as centres of the hexagons. In terms of the image: the centres of the hexagons can be the red, the blue or the green triangles. 
Examples of group p3 Group p3m1 (*333) [[File:SymBlend p3m1.svg|right|thumb|Example and diagram for p3m1]] [[File:Wallpaper group diagram p3m1.svg|left|thumb|Cell structure for p3m1]] Orbifold signature: *333 Coxeter notation: [(3,3,3)] or [3[3]] Lattice: hexagonal Point group: D3 The group p3m1 has three different rotation centres of order three (120°). It has reflections in the three sides of an equilateral triangle. The centre of every rotation lies on a reflection axis. There are additional glide reflections in three distinct directions, whose axes are located halfway between adjacent parallel reflection axes. Like for p3, imagine a tessellation of the plane with equilateral triangles of equal size, with the sides corresponding to the smallest translations. Then half of the triangles are in one orientation, and the other half upside down. This wallpaper group corresponds to the case that all triangles of the same orientation are equal, while both types have rotational symmetry of order three, and both are symmetric, but the two are not equal, and not each other's mirror image. For a given image, three of these tessellations are possible, each with rotation centres as vertices. In terms of the image: the vertices can be the red, the blue or the green triangles. Examples of group p3m1 Group p31m (3*3) Orbifold signature: 3*3 Coxeter notation: [6,3+] Lattice: hexagonal Point group: D3 The group p31m has three different rotation centres of order three (120°), of which two are each other's mirror image. It has reflections in three distinct directions. It has at least one rotation whose centre does not lie on a reflection axis. There are additional glide reflections in three distinct directions, whose axes are located halfway between adjacent parallel reflection axes. Like for p3 and p3m1, imagine a tessellation of the plane with equilateral triangles of equal size, with the sides corresponding to the smallest translations. Then half of the triangles are in one orientation, and the other half upside down. This wallpaper group corresponds to the case that all triangles of the same orientation are equal, while both types have rotational symmetry of order three and are each other's mirror image, but not symmetric themselves, and not equal. For a given image, only one such tessellation is possible. In terms of the image: the vertices must be the red triangles, not the blue triangles. Examples of group p31m Group p6 (632) [[File:SymBlend p6.svg|right|thumb|Example and diagram for p6]] [[File:Wallpaper group diagram p6.svg|left|thumb|Cell structure for p6]] Orbifold signature: 632 Coxeter notation: [6,3]+ Lattice: hexagonal Point group: C6 The group p6 has one rotation centre of order six (60°); two rotation centres of order three (120°), which are each other's images under a rotation of 60°; and three rotation centres of order two (180°) which are also each other's images under a rotation of 60°. It has no reflections or glide reflections. A pattern with this symmetry can be looked upon as a tessellation of the plane with equal triangular tiles with C3 symmetry, or equivalently, a tessellation of the plane with equal hexagonal tiles with C6 symmetry (with the edges of the tiles not necessarily part of the pattern). 
Examples of group p6 Group p6m (*632) Orbifold signature: *632 Coxeter notation: [6,3] Lattice: hexagonal Point group: D6 The group p6m has one rotation centre of order six (60°); it has two rotation centres of order three, which only differ by a rotation of 60° (or, equivalently, 180°), and three of order two, which only differ by a rotation of 60°. It also has reflections in six distinct directions. There are additional glide reflections in six distinct directions, whose axes are located halfway between adjacent parallel reflection axes. A pattern with this symmetry can be looked upon as a tessellation of the plane with equal triangular tiles with D3 symmetry, or equivalently, a tessellation of the plane with equal hexagonal tiles with D6 symmetry (with the edges of the tiles not necessarily part of the pattern). Thus the simplest examples are a triangular lattice with or without connecting lines, and a hexagonal tiling with one color for outlining the hexagons and one for the background. Examples of group p6m Lattice types There are five lattice types or Bravais lattices, corresponding to the five possible wallpaper groups of the lattice itself. The wallpaper group of a pattern with this lattice of translational symmetry cannot have more, but may have less symmetry than the lattice itself. In the 5 cases of rotational symmetry of order 3 or 6, the unit cell consists of two equilateral triangles (hexagonal lattice, itself p6m). They form a rhombus with angles 60° and 120°. In the 3 cases of rotational symmetry of order 4, the cell is a square (square lattice, itself p4m). In the 5 cases of reflection or glide reflection, but not both, the cell is a rectangle (rectangular lattice, itself pmm). It may also be interpreted as a centered rhombic lattice. Special cases: square. In the 2 cases of reflection combined with glide reflection, the cell is a rhombus (rhombic lattice, itself cmm). It may also be interpreted as a centered rectangular lattice. Special cases: square, hexagonal unit cell. In the case of only rotational symmetry of order 2, and the case of no other symmetry than translational, the cell is in general a parallelogram (parallelogrammatic or oblique lattice, itself p2). Special cases: rectangle, square, rhombus, hexagonal unit cell. Symmetry groups The actual symmetry group should be distinguished from the wallpaper group. Wallpaper groups are collections of symmetry groups. There are 17 of these collections, but for each collection there are infinitely many symmetry groups, in the sense of actual groups of isometries. These depend, apart from the wallpaper group, on a number of parameters for the translation vectors, the orientation and position of the reflection axes and rotation centers. The numbers of degrees of freedom are: 6 for p2; 5 for pmm, pmg, pgg, and cmm; 4 for the rest. However, within each wallpaper group, all symmetry groups are algebraically isomorphic. Some symmetry group isomorphisms: p1: Z²; pm: Z × D∞; pmm: D∞ × D∞. Dependence of wallpaper groups on transformations The wallpaper group of a pattern is invariant under isometries and uniform scaling (similarity transformations). Translational symmetry is preserved under arbitrary bijective affine transformations. Rotational symmetry of order two ditto; this means also that 4- and 6-fold rotation centres at least keep 2-fold rotational symmetry. Reflection in a line and glide reflection are preserved on expansion/contraction along, or perpendicular to, the axis of reflection and glide reflection. 
It changes p6m, p4g, and p3m1 into cmm, p3m1 into cm, and p4m, depending on direction of expansion/contraction, into pmm or cmm. A pattern of symmetrically staggered rows of points is special in that it can convert by expansion/contraction from p6m to p4m. Note that when a transformation decreases symmetry, a transformation of the same kind (the inverse) obviously for some patterns increases the symmetry. Such a special property of a pattern (e.g. expansion in one direction produces a pattern with 4-fold symmetry) is not counted as a form of extra symmetry. Change of colors does not affect the wallpaper group if any two points that have the same color before the change, also have the same color after the change, and any two points that have different colors before the change, also have different colors after the change. If the former applies, but not the latter, such as when converting a color image to one in black and white, then symmetries are preserved, but they may increase, so that the wallpaper group can change. Web demo and software Several software graphic tools will let you create 2D patterns using wallpaper symmetry groups. Usually you can edit the original tile and its copies in the entire pattern are updated automatically. MadPattern, a free set of Adobe Illustrator templates that support the 17 wallpaper groups Tess, a shareware tessellation program for multiple platforms, supports all wallpaper, frieze, and rosette groups, as well as Heesch tilings. Wallpaper Symmetry is a free online JavaScript drawing tool supporting the 17 groups. The main page has an explanation of the wallpaper groups, as well as drawing tools and explanations for the other planar symmetry groups as well. TALES GAME, a free software designed for educational purposes which includes the tessellation function. Kali , online graphical symmetry editor Java applet (not supported by default in browsers). Kali , free downloadable Kali for Windows and Mac Classic. Inkscape, a free vector graphics editor, supports all 17 groups plus arbitrary scales, shifts, rotates, and color changes per row or per column, optionally randomized to a given degree. (See ) SymmetryWorks is a commercial plugin for Adobe Illustrator, supports all 17 groups. EscherSketch is a free online JavaScript drawing tool supporting the 17 groups. Repper is a commercial online drawing tool supporting the 17 groups plus a number of non-periodic tilings See also List of planar symmetry groups (summary of this page) Aperiodic tiling Crystallography Layer group Mathematics and art M. C. Escher Point group Symmetry groups in one dimension Tessellation Notes References The Grammar of Ornament (1856), by Owen Jones. Many of the images in this article are from this book; it contains many more. John H. Conway (1992). "The Orbifold Notation for Surface Groups". In: M. W. Liebeck and J. Saxl (eds.), Groups, Combinatorics and Geometry, Proceedings of the L.M.S. Durham Symposium, July 5–15, Durham, UK, 1990; London Math. Soc. Lecture Notes Series 165. Cambridge University Press, Cambridge. pp. 438–447 John H. Conway, Heidi Burgiel and Chaim Goodman-Strauss (2008): The Symmetries of Things. Worcester MA: A.K. Peters. . Branko Grünbaum and G. C. Shephard (1987): Tilings and patterns. New York: Freeman. . Pattern Design, Lewis F. Day External links International Tables for Crystallography Volume A: Space-group symmetry by the International Union of Crystallography The 17 plane symmetry groups by David E. 
Joyce Introduction to wallpaper patterns by Chaim Goodman-Strauss and Heidi Burgiel Description by Silvio Levy Example tiling for each group, with dynamic demos of properties Overview with example tiling for each group, by Brian Sanderson Escher Web Sketch, a java applet with interactive tools for drawing in all 17 plane symmetry groups Burak, a Java applet for drawing symmetry groups. A JavaScript app for drawing wallpaper patterns Circle-Pattern on Roman Mosaics in Greece Seventeen Kinds of Wallpaper Patterns the 17 symmetries found in traditional Japanese patterns. Crystallography Discrete groups Euclidean symmetries Ornaments
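A minimal computational check of the orbifold feature-value bookkeeping described above. It handles only the single-digit signatures used by wallpaper groups, and the × symbol is written here as the ASCII letter 'x'.

```python
from fractions import Fraction

# Feature values as in the text: a digit n before any * counts (n-1)/n,
# a digit after a * counts (n-1)/(2n), * and x each count 1, and o counts 2.
# A signature describes a wallpaper group exactly when the values sum to 2
# (orbifold Euler characteristic zero).
def feature_sum(signature):
    total = Fraction(0)
    seen_star = False
    for ch in signature:
        if ch == 'o':
            total += 2
        elif ch == '*':
            seen_star = True
            total += 1
        elif ch in ('x', '×'):
            total += 1
        elif ch.isdigit():
            n = int(ch)
            total += Fraction(n - 1, 2 * n) if seen_star else Fraction(n - 1, n)
    return total

SIGNATURES = ["o", "2222", "**", "xx", "*x", "*2222", "22*", "22x", "2*22",
              "442", "*442", "4*2", "333", "*333", "3*3", "632", "*632"]

for sig in SIGNATURES:
    s = feature_sum(sig)
    label = "wallpaper group" if s == 2 else "not planar"
    print(f"{sig:>6}: feature sum = {s} -> {label}")
```

Running it prints a feature sum of 2 for each of the seventeen signatures listed in the article, and any other sum flags a spherical or hyperbolic orbifold instead.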
Wallpaper group
[ "Physics", "Chemistry", "Materials_science", "Mathematics", "Engineering" ]
8,100
[ "Functions and mappings", "Euclidean symmetries", "Mathematical objects", "Materials science", "Crystallography", "Mathematical relations", "Condensed matter physics", "Symmetry" ]
386,499
https://en.wikipedia.org/wiki/Constructive%20solid%20geometry
Constructive solid geometry (CSG; formerly called computational binary solid geometry) is a technique used in solid modeling. Constructive solid geometry allows a modeler to create a complex surface or object by using Boolean operators to combine simpler objects, potentially generating visually complex objects by combining a few primitive ones. In 3D computer graphics and CAD, CSG is often used in procedural modeling. CSG can also be performed on polygonal meshes, and may or may not be procedural and/or parametric. Contrast CSG with polygon mesh modeling and box modeling. Workings The simplest solid objects used for the representation are called geometric primitives. Typically they are the objects of simple shape: cuboids, cylinders, prisms, pyramids, spheres, cones. The set of allowable primitives is limited by each software package. Some software packages allow CSG on curved objects while other packages do not. An object is constructed from primitives by means of allowable operations, which are typically Boolean operations on sets: union, intersection and difference, as well as geometric transformations of those sets. A primitive can typically be described by a procedure which accepts some number of parameters; for example, a sphere may be described by the coordinates of its center point, along with a radius value. These primitives can be combined into compound objects using operations like these: Combining these elementary operations, it is possible to build up objects with high complexity starting from simple ones. Ray tracing Rendering of constructive solid geometry is particularly simple when ray tracing. Ray tracers intersect a ray with both primitives that are being operated on, apply the operator to the intersection intervals along the 1D ray, and then take the point closest to the camera along the ray as being the result. Applications Constructive solid geometry has a number of practical uses. It is used in cases where simple geometric objects are desired, or where mathematical accuracy is important. Nearly all engineering CAD packages use CSG (where it may be useful for representing tool cuts, and features where parts must fit together). The Quake engine and Unreal Engine both use this system, as does Hammer (the native Source engine level editor), and Torque Game Engine/Torque Game Engine Advanced. CSG is popular because a modeler can use a set of relatively simple objects to create very complicated geometry. When CSG is procedural or parametric, the user can revise their complex geometry by changing the position of objects or by changing the Boolean operation used to combine those objects. One of the advantages of CSG is that it can easily assure that objects are "solid" or water-tight if all of the primitive shapes are water-tight. This can be important for some manufacturing or engineering computation applications. By comparison, when creating geometry based upon boundary representations, additional topological data is required, or consistency checks must be performed to assure that the given boundary description specifies a valid solid object. A convenient property of CSG shapes is that it is easy to classify arbitrary points as being either inside or outside the shape created by CSG. The point is simply classified against all the underlying primitives and the resulting boolean expression is evaluated. This is a desirable quality for some applications such as ray tracing. 
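A minimal sketch of the point-classification idea just described: primitives answer "is this point inside me?", and Boolean nodes combine the answers. The Sphere and Box primitives and the example scene are invented for illustration, not a production CSG kernel.

```python
# Point-membership classification for a tiny CSG tree.
class Sphere:
    def __init__(self, cx, cy, cz, r):
        self.c, self.r = (cx, cy, cz), r
    def contains(self, p):
        return sum((pi - ci) ** 2 for pi, ci in zip(p, self.c)) <= self.r ** 2

class Box:
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi
    def contains(self, p):
        return all(l <= x <= h for x, l, h in zip(p, self.lo, self.hi))

class CSG:
    def __init__(self, op, left, right):
        self.op, self.left, self.right = op, left, right
    def contains(self, p):
        a, b = self.left.contains(p), self.right.contains(p)
        return {"union": a or b,
                "intersection": a and b,
                "difference": a and not b}[self.op]

# A unit cube with a spherical bite taken out of one corner.
solid = CSG("difference",
            Box((0, 0, 0), (1, 1, 1)),
            Sphere(1, 1, 1, 0.5))

for point in [(0.5, 0.5, 0.5), (0.9, 0.9, 0.9), (1.5, 0.5, 0.5)]:
    print(point, "inside" if solid.contains(point) else "outside")
```

Ray-traced CSG applies the same Boolean combination not to point-membership booleans but to the 1-D intersection intervals each primitive produces along a ray, then takes the nearest surviving point.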
Conversion from meshes to CSG With CSG models being parameterized by construction, they are often favorable over usual meshes when it comes to applications where the goal is to fabricate customized models. For such applications it can be interesting to convert already existing meshes to CSG trees. This problem of automatically converting meshes to CSG trees is called inverse CSG. A resulting CSG tree is required to occupy the same volume in 3D space as the input mesh while having a minimal number of nodes. Simple solutions are preferred to ensure that the resulting model is easy to edit. Solving this problem is a challenge because of the large search space that has to be explored. It combines continuous parameters such as dimension and size of the primitive shapes, and discrete parameters such as the Boolean operators used to build the final CSG tree. Deductive methods solve this problem by building a set of half-spaces that describe the interior of the geometry. These half-spaces are used to describe primitives that can be combined to get the final model. Another approach decouples the detection of primitive shapes and the computation of the CSG tree that defines the final model. This approach exploits the ability of modern program synthesis tools to find a CSG tree with minimal complexity. There are also approaches that use genetic algorithms to iteratively optimize an initial shape towards the shape of the desired mesh. Notable applications with CSG support Generic modelling languages and software HyperFun PLaSM Ray tracing and particle transport PhotoRealistic RenderMan POV-Ray Computer-aided design AutoCAD Autodesk Inventor Autodesk Fusion 360 BRL-CAD CATIA FreeCAD NX CAD SolveSpace Onshape OpenSCAD PTC Creo Parametric (formerly known as Pro/Engineer) Realsoft 3D Rhino Solid Edge SolidWorks Tinkercad Vectorworks Gaming Dreams Godot GtkRadiant LittleBigPlanet Roblox Unity, via free or paid plug-ins from the Unity Asset Store. UnrealEd Valve Hammer Editor Others 3Delight Aqsis (as of version 0.6.0) Blender – primarily a surface mesh editor, but capable of simple CSG using meta objects and using the Boolean modifier on mesh objects. Clara.io Geant4 Magica CSG MCNP SketchUp Womp References Computer-aided design 3D computer graphics Euclidean solid geometry
Constructive solid geometry
[ "Physics", "Engineering" ]
1,130
[ "Computer-aided design", "Design engineering", "Euclidean solid geometry", "Space", "Spacetime" ]
387,457
https://en.wikipedia.org/wiki/Thermohaline%20circulation
Thermohaline circulation (THC) is a part of the large-scale ocean circulation that is driven by global density gradients created by surface heat and freshwater fluxes. The adjective thermohaline derives from thermo- referring to temperature and -haline referring to salt content, factors which together determine the density of sea water. Wind-driven surface currents (such as the Gulf Stream) travel polewards from the equatorial Atlantic Ocean, cooling en route, and eventually sinking at high latitudes (forming North Atlantic Deep Water). This dense water then flows into the ocean basins. While the bulk of it upwells in the Southern Ocean, the oldest waters (with a transit time of about 1000 years) upwell in the North Pacific. Extensive mixing therefore takes place between the ocean basins, reducing differences between them and making the Earth's oceans a global system. The water in these circuits transports both energy (in the form of heat) and mass (dissolved solids and gases) around the globe. As such, the state of the circulation has a large impact on the climate of the Earth. The thermohaline circulation is sometimes called the ocean conveyor belt, the great ocean conveyor, or the global conveyor belt, coined by climate scientist Wallace Smith Broecker. It is also referred to as the meridional overturning circulation, or MOC. This name is used because not every circulation pattern caused by temperature and salinity gradients is necessarily part of a single global circulation. Further, it is difficult to separate the parts of the circulation driven by temperature and salinity alone from those driven by other factors, such as the wind and tidal forces. This global circulation has two major limbs - Atlantic meridional overturning circulation (AMOC), centered in the north Atlantic Ocean, and Southern Ocean overturning circulation or Southern Ocean meridional circulation (SMOC), around Antarctica. Because 90% of the human population lives in the Northern Hemisphere, the AMOC has been far better studied, but both are very important for the global climate. Both of them also appear to be slowing down due to climate change, as the melting of the ice sheets dilutes salty flows such as the Antarctic bottom water. Either one could outright collapse to a much weaker state, which would be an example of tipping points in the climate system. The hemisphere which experiences the collapse of its circulation would experience less precipitation and become drier, while the other hemisphere would become wetter. Marine ecosystems are also likely to receive fewer nutrients and experience greater ocean deoxygenation. In the Northern Hemisphere, AMOC's collapse would also substantially lower the temperatures in many European countries, while the east coast of North America would experience accelerated sea level rise. The collapse of either circulation is generally believed to be more than a century away and may only occur under high warming, but there is a lot of uncertainty about these projections. History of research It has long been known that wind can drive ocean currents, but only at the surface. In the 19th century, some oceanographers suggested that the convection of heat could drive deeper currents. In 1908, Johan Sandström performed a series of experiments at a Bornö Marine Research Station which proved that the currents driven by thermal energy transfer can exist, but require that "heating occurs at a greater depth than cooling". 
Normally, the opposite occurs, because ocean water is heated from above by the Sun and becomes less dense, so the surface layer floats on the surface above the cooler, denser layers, resulting in ocean stratification. However, wind and tides cause mixing between these water layers, with diapycnal mixing caused by tidal currents being one example. This mixing is what enables the convection between ocean layers, and thus, deep water currents. In the 1920s, Sandström's framework was expanded by accounting for the role of salinity in ocean layer formation. Salinity is important because like temperature, it affects water density. Water becomes less dense as its temperature increases and the distance between its molecules expands, but more dense as the salinity increases, since there is a larger mass of salts dissolved within that water. Further, while fresh water is at its most dense at 4 °C, seawater only gets denser as it cools, up until it reaches the freezing point. That freezing point is also lower than for fresh water due to salinity, and can be below −2 °C, depending on salinity and pressure. Structure These density differences caused by temperature and salinity ultimately separate ocean water into distinct water masses, such as the North Atlantic Deep Water (NADW) and Antarctic Bottom Water (AABW). These two waters are the main drivers of the circulation, which was established in 1960 by Henry Stommel and Arnold B. Arons. They have chemical, temperature and isotopic ratio signatures (such as 231Pa / 230Th ratios) which can be traced, their flow rate calculated, and their age determined. NADW is formed because North Atlantic is a rare place in the ocean where precipitation, which adds fresh water to the ocean and so reduces its salinity, is outweighed by evaporation, in part due to high windiness. When water evaporates, it leaves salt behind, and so the surface waters of the North Atlantic are particularly salty. North Atlantic is also an already cool region, and evaporative cooling reduces water temperature even further. Thus, this water sinks downward in the Norwegian Sea, fills the Arctic Ocean Basin and spills southwards through the Greenland-Scotland-Ridge – crevasses in the submarine sills that connect Greenland, Iceland and Great Britain. It cannot flow towards the Pacific Ocean due to the narrow shallows of the Bering Strait, but it does slowly flow into the deep abyssal plains of the south Atlantic. In the Southern Ocean, strong katabatic winds blowing from the Antarctic continent onto the ice shelves will blow the newly formed sea ice away, opening polynyas in locations such as Weddell and Ross Seas, off the Adélie Coast and by Cape Darnley. The ocean, no longer protected by sea ice, suffers a brutal and strong cooling (see polynya). Meanwhile, sea ice starts reforming, so the surface waters also get saltier, hence very dense. In fact, the formation of sea ice contributes to an increase in surface seawater salinity; saltier brine is left behind as the sea ice forms around it (pure water preferentially being frozen). Increasing salinity lowers the freezing point of seawater, so cold liquid brine is formed in inclusions within a honeycomb of ice. The brine progressively melts the ice just beneath it, eventually dripping out of the ice matrix and sinking. This process is known as brine rejection. The resulting Antarctic bottom water sinks and flows north and east. It is denser than the NADW, and so flows beneath it. 
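The way temperature and salinity jointly set density can be sketched with a simplified linear equation of state. The reference values and expansion/contraction coefficients below are typical textbook magnitudes assumed for illustration; real seawater density calculations use the full nonlinear equation of state, since the thermal expansion coefficient itself varies strongly with temperature.

```python
# Toy linear equation of state: rho ~ rho0 * (1 - alpha*(T - T0) + beta*(S - S0)).
RHO0 = 1027.0      # reference density (kg/m^3)
T0, S0 = 10.0, 35.0
ALPHA = 1.7e-4     # thermal expansion coefficient (1/degC), assumed constant here
BETA = 7.6e-4      # haline contraction coefficient (1/psu)

def density(temp_c, salinity_psu):
    return RHO0 * (1 - ALPHA * (temp_c - T0) + BETA * (salinity_psu - S0))

water_masses = [
    ("tropical surface water", density(25.0, 35.0)),
    ("North Atlantic winter surface", density(3.0, 35.5)),
    ("near-freezing Antarctic shelf water", density(-1.8, 34.7)),
]
for name, rho in water_masses:
    print(f"{name:36s} {rho:8.2f} kg/m^3")
```

Even with this crude model, the cold, salt-enriched high-latitude waters come out densest, which is the basic reason they sink and feed the deep limbs of the circulation.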
AABW formed in the Weddell Sea will mainly fill the Atlantic and Indian Basins, whereas the AABW formed in the Ross Sea will flow towards the Pacific Ocean. At the Indian Ocean, a vertical exchange of a lower layer of cold and salty water from the Atlantic and the warmer and fresher upper ocean water from the tropical Pacific occurs, in what is known as overturning. In the Pacific Ocean, the rest of the cold and salty water from the Atlantic undergoes haline forcing, and becomes warmer and fresher more quickly. The out-flowing undersea of cold and salty water makes the sea level of the Atlantic slightly lower than the Pacific and salinity or halinity of water at the Atlantic higher than the Pacific. This generates a large but slow flow of warmer and fresher upper ocean water from the tropical Pacific to the Indian Ocean through the Indonesian Archipelago to replace the cold and salty Antarctic Bottom Water. This is also known as 'haline forcing' (net high latitude freshwater gain and low latitude evaporation). This warmer, fresher water from the Pacific flows up through the South Atlantic to Greenland, where it cools off and undergoes evaporative cooling and sinks to the ocean floor, providing a continuous thermohaline circulation. Upwelling As the deep waters sink into the ocean basins, they displace the older deep-water masses, which gradually become less dense due to continued ocean mixing. Thus, some water is rising, in what is known as upwelling. Its speeds are very slow even compared to the movement of the bottom water masses. It is therefore difficult to measure where upwelling occurs using current speeds, given all the other wind-driven processes going on in the surface ocean. Deep waters have their own chemical signature, formed from the breakdown of particulate matter falling into them over the course of their long journey at depth. A number of scientists have tried to use these tracers to infer where the upwelling occurs. Wallace Broecker, using box models, has asserted that the bulk of deep upwelling occurs in the North Pacific, using as evidence the high values of silicon found in these waters. Other investigators have not found such clear evidence. Computer models of ocean circulation increasingly place most of the deep upwelling in the Southern Ocean, associated with the strong winds in the open latitudes between South America and Antarctica. Direct estimates of the strength of the thermohaline circulation have also been made at 26.5°N in the North Atlantic, by the UK-US RAPID programme. It combines direct estimates of ocean transport using current meters and subsea cable measurements with estimates of the geostrophic current from temperature and salinity measurements to provide continuous, full-depth, basin-wide estimates of the meridional overturning circulation. However, it has only been operating since 2004, which is too short when the timescale of the circulation is measured in centuries. Effects on global climate The thermohaline circulation plays an important role in supplying heat to the polar regions, and thus in regulating the amount of sea ice in these regions, although poleward heat transport outside the tropics is considerably larger in the atmosphere than in the ocean. Changes in the thermohaline circulation are thought to have significant impacts on the Earth's radiation budget. 
Large influxes of low-density meltwater from Lake Agassiz and deglaciation in North America are thought to have led to a shifting of deep water formation and subsidence in the extreme North Atlantic and caused the climate period in Europe known as the Younger Dryas. Slowdown or collapse of AMOC Slowdown or collapse of SMOC See also References Other sources External links Ocean Conveyor Belt THOR FP7 projects http://arquivo.pt/wayback/20141126093524/http%3A//www.eu%2Dthor.eu/ investigates on the topic "Thermohaline overturning- at risk?" and the predictability of changes of the THC. THOR is financed by the 7th Framework Programme of the European Commission. Physical oceanography Chemical oceanography Articles containing video clips
Thermohaline circulation
[ "Physics", "Chemistry" ]
2,281
[ "Chemical oceanography", "Applied and interdisciplinary physics", "Physical oceanography" ]
5,821,113
https://en.wikipedia.org/wiki/Pulay%20stress
The Pulay stress or Pulay forces (named for Peter Pulay) is an error that occurs in the stress tensor (or Jacobian matrix) obtained from self-consistent field calculations (Hartree–Fock or density functional theory) due to the incompleteness of the basis set. A plane-wave density functional calculation on a crystal with specified lattice vectors will typically include in the basis set all plane waves with energies below the specified energy cutoff. This corresponds to all points on the reciprocal lattice that lie within a sphere whose radius is related to the energy cutoff. Consider what happens when the lattice vectors are varied, resulting in a change in the reciprocal lattice vectors. The points on the reciprocal lattice which represent the basis set will no longer correspond to a sphere, but to an ellipsoid. This change in the basis set will result in errors in the calculated ground state energy change. The Pulay stress is often nearly isotropic, and tends to result in an underestimate of the equilibrium volume. Pulay stress can be reduced by increasing the energy cutoff. Another way to mitigate the effect of Pulay stress on the equilibrium cell shape is to calculate the energy at different lattice vectors with a fixed energy cutoff. Similarly, the error occurs in any calculation where the basis set explicitly depends on the positions of the atomic nuclei (which change during geometry optimization). In this case, the Hellmann–Feynman theorem – which is used to avoid explicit differentiation of the many-parameter wave function (expanded in a basis set) – is only valid for a complete basis set. Otherwise, the terms in the theorem's expression containing derivatives of the wavefunction persist, giving rise to additional forces – the Pulay forces. Schematically, for a normalized variational state Ψ expanded in nuclear-position-dependent basis functions, the gradient of the energy with respect to a nuclear coordinate R is
dE/dR = ⟨Ψ| ∂Ĥ/∂R |Ψ⟩ + 2 Re ⟨∂Ψ/∂R| Ĥ |Ψ⟩,
where the first term is the Hellmann–Feynman contribution and the second term, which vanishes only for a complete or nuclear-position-independent basis, is the origin of the Pulay forces. The presence of Pulay forces makes the optimized geometry parameters converge more slowly with increasing basis set. The way to eliminate the erroneous forces is to use nuclear-position-independent basis functions, to explicitly calculate and then subtract them from the conventionally obtained forces, or to self-consistently optimize the center of localization of the orbitals. References Density functional theory
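The origin of the Pulay stress (the discrete plane-wave basis changing as the cell is deformed at a fixed cutoff) can be illustrated with a small counting sketch in Python. The simple cubic cell, Γ-point-only sampling, cutoff energy and lattice constants below are assumptions chosen purely for illustration:

import math

HBAR2_OVER_2M = 3.80998  # hbar^2 / (2 m_e) in eV * Angstrom^2

def count_plane_waves(a_angstrom: float, ecut_ev: float) -> int:
    """Number of plane waves with kinetic energy below ecut for a simple
    cubic cell of side a, sampled at the Gamma point only (illustrative)."""
    gmax = math.sqrt(ecut_ev / HBAR2_OVER_2M)  # max |G| in 1/Angstrom
    b = 2.0 * math.pi / a_angstrom             # reciprocal lattice spacing
    nmax = int(gmax / b) + 1
    count = 0
    for i in range(-nmax, nmax + 1):
        for j in range(-nmax, nmax + 1):
            for k in range(-nmax, nmax + 1):
                if HBAR2_OVER_2M * b * b * (i * i + j * j + k * k) <= ecut_ev:
                    count += 1
    return count

# At a fixed cutoff the basis size changes in discrete jumps as the lattice
# constant is scanned; this discreteness is what gives rise to the Pulay stress.
for a in (5.0, 5.05, 5.1, 5.15, 5.2):
    print(a, count_plane_waves(a, 300.0))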
Pulay stress
[ "Physics", "Chemistry" ]
431
[ "Density functional theory", "Quantum chemistry", "Quantum mechanics" ]
5,821,204
https://en.wikipedia.org/wiki/Excluded%20volume
The concept of excluded volume was introduced by Werner Kuhn in 1934 and applied to polymer molecules shortly thereafter by Paul Flory. Excluded volume gives rise to depletion forces. In liquid state theory In liquid state theory, the 'excluded volume' of a molecule is the volume that is inaccessible to other molecules in the system as a result of the presence of the first molecule. The excluded volume of a hard sphere is eight times its volume—however, for a two-molecule system, this volume is distributed among the two particles, giving the conventional result of four times the volume; this is an important quantity in the Van der Waals equation of state. The calculation of the excluded volume for particles with non-spherical shapes is usually difficult, since it depends on the relative orientation of the particles. The distance of closest approach of hard ellipses and their excluded area has been recently considered. In polymer science In polymer science, excluded volume refers to the idea that one part of a long chain molecule can not occupy space that is already occupied by another part of the same molecule. Excluded volume causes the ends of a polymer chain in a solution to be further apart (on average) than they would be were there no excluded volume (e.g. in case of ideal chain model). The recognition that excluded volume was an important factor in analyzing long-chain molecules in solutions provided an important conceptual breakthrough, and led to the explanation of several puzzling experimental results of the day. It also led to the concept of the theta point, the set of conditions at which an experiment can be conducted that causes the excluded volume effect to be neutralized. At the theta point, the chain reverts to ideal chain characteristics. The long-range interactions arising from excluded volume are eliminated, allowing the experimenter to more easily measure short-range features such as structural geometry, bond rotation potentials, and steric interactions between near-neighboring groups. Flory correctly identified that the chain dimension in polymer melts would have the size computed for a chain in ideal solution if excluded volume interactions were neutralized by experimenting at the theta point. See also Distance of closest approach Steric effects Mayer f-function References Polymer physics Rubber properties
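As a worked illustration of the hard-sphere result quoted above (a pair excluded volume of eight particle volumes, i.e. four per particle, which feeds into the van der Waals co-volume b), here is a minimal Python sketch; the sphere radius used at the end is an assumed, roughly argon-sized value:

import math

AVOGADRO = 6.02214076e23  # 1/mol

def excluded_volume_hard_sphere(radius_m: float) -> float:
    """Pair excluded volume of two identical hard spheres (m^3).
    The centre of the second sphere cannot come closer than one diameter,
    so the excluded region is a sphere of radius 2r: eight particle volumes."""
    return (4.0 / 3.0) * math.pi * (2.0 * radius_m) ** 3

def van_der_waals_b(radius_m: float) -> float:
    """Approximate van der Waals co-volume b (m^3/mol): the pair excluded
    volume shared between the two particles, i.e. four particle volumes each."""
    particle_volume = (4.0 / 3.0) * math.pi * radius_m ** 3
    return 4.0 * particle_volume * AVOGADRO

if __name__ == "__main__":
    r = 1.4e-10  # assumed hard-sphere radius in metres (roughly argon-sized)
    print(f"pair excluded volume: {excluded_volume_hard_sphere(r):.3e} m^3")
    print(f"estimated b: {van_der_waals_b(r) * 1e6:.1f} cm^3/mol")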
Excluded volume
[ "Chemistry", "Materials_science" ]
444
[ "Polymer physics", "Polymer stubs", "Organic chemistry stubs", "Polymer chemistry" ]
5,824,713
https://en.wikipedia.org/wiki/Self-healing%20material
Self-healing materials are artificial or synthetically created substances that have the built-in ability to automatically repair damages to themselves without any external diagnosis of the problem or human intervention. Generally, materials will degrade over time due to fatigue, environmental conditions, or damage incurred during operation. Cracks and other types of damage on a microscopic level have been shown to change thermal, electrical, and acoustical properties of materials, and the propagation of cracks can lead to eventual failure of the material. In general, cracks are hard to detect at an early stage, and manual intervention is required for periodic inspections and repairs. In contrast, self-healing materials counter degradation through the initiation of a repair mechanism that responds to the micro-damage. Some self-healing materials are classed as smart structures, and can adapt to various environmental conditions according to their sensing and actuation properties. Although the most common types of self-healing materials are polymers or elastomers, self-healing covers all classes of materials, including metals, ceramics, and cementitious materials. Healing mechanisms vary from an intrinsic repair of the material to the addition of a repair agent contained in a microscopic vessel. For a material to be strictly defined as autonomously self-healing, it is necessary that the healing process occurs without human intervention. Self-healing polymers may, however, activate in response to an external stimulus (light, temperature change, etc.) to initiate the healing processes. A material that can intrinsically correct damage caused by normal usage could prevent costs incurred by material failure and lower costs of a number of different industrial processes through longer part lifetime, and reduction of inefficiency caused by degradation over time.
History
The ancient Romans used a form of lime mortar that has been found to have self-healing properties. By 2014, geologist Marie Jackson and her colleagues had recreated the type of mortar used in Trajan's Market and other Roman structures such as the Pantheon and the Colosseum and studied its response to cracking. The Romans mixed a particular type of volcanic ash called Pozzolane Rosse, from the Alban Hills volcano, with quicklime and water. They used it to bind together decimeter-sized chunks of tuff, an aggregate of volcanic rock. As a result of pozzolanic activity as the material cured, the lime interacted with other chemicals in the mix and was replaced by crystals of a calcium aluminosilicate mineral called strätlingite. Crystals of platey strätlingite grow in the cementitious matrix of the material including the interfacial zones where cracks would tend to develop. This ongoing crystal formation holds together the mortar and the coarse aggregate, countering crack formation and resulting in a material that has lasted for 1,900 years.
Materials science
Related processes in concrete have been studied microscopically since the 19th century. Self-healing materials only emerged as a widely recognized field of study in the 21st century. The first international conference on self-healing materials was held in 2007. The field of self-healing materials is related to biomimetic materials as well as to other novel materials and surfaces with the embedded capacity for self-organization, such as the self-lubricating and self-cleaning materials.
Biomimetics
Plants and animals have the capacity to seal and heal wounds. 
In all plants and animals examined, firstly a self-sealing phase and secondly a self-healing phase can be identified. In plants, the rapid self-sealing prevents the plants from desiccation and from infection by pathogenic germs. This gives time for the subsequent self-healing of the injury which in addition to wound closure also results in the (partly) restoration of mechanical properties of the plant organ. Based on a variety of self-sealing and self-healing processes in plants, different functional principles were transferred into bio-inspired self-repairing materials. The connecting link between the biological model and the technical application is an abstraction describing the underlying functional principle of the biological model which can be for example an analytical model or a numerical model. In cases where mainly physical-chemical processes are involved a transfer is especially promising. There is evidence in the academic literature of these biomimetic design approaches being used in the development of self-healing systems for polymer composites. The DIW structure from above can be used to essentially mimic the structure of skin. Toohey et al. did this with an epoxy substrate containing a grid of microchannels containing dicyclopentadiene (DCPD), and incorporated Grubbs' catalyst to the surface. This showed partial recovery of toughness after fracture, and could be repeated several times because of the ability to replenish the channels after use. The process is not repeatable forever, because the polymer in the crack plane from previous healings would build up over time. Inspired by rapid self-sealing processes in the twining liana Aristolochia macrophylla and related species (pipevines) a biomimetic PU-foam coating for pneumatic structures was developed. With respect to low coating weight and thickness of the foam layer maximum repair efficiencies of 99.9% and more have been obtained. Other role models are latex bearing plants as the weeping fig (Ficus benjamina), the rubber tree (Hevea brasiliensis) and spurges (Euphorbia spp.), in which the coagulation of latex is involved in the sealing of lesions. Different self-sealing strategies for elastomeric materials were developed showing significant mechanical restoration after a macroscopic lesion. Self-healing polymers and elastomers In the last century, polymers became a base material in everyday life for products like plastics, rubbers, films, fibres or paints. This huge demand has forced to extend their reliability and maximum lifetime, and a new design class of polymeric materials that are able to restore their functionality after damage or fatigue was envisaged. These polymer materials can be divided into two different groups based on the approach to the self-healing mechanism: intrinsic or extrinsic. Autonomous self-healing polymers follow a three-step process very similar to that of a biological response. In the event of damage, the first response is triggering or actuation, which happens almost immediately after damage is sustained. The second response is transport of materials to the affected area, which also happens very quickly. The third response is the chemical repair process. This process differs depending on the type of healing mechanism that is in place (e.g., polymerization, entanglement, reversible cross-linking). These materials can be classified according to three mechanisms (capsule-based, vascular-based, and intrinsic), which can be correlated chronologically through four generations. 
While similar in some ways, these mechanisms differ in the ways that response is hidden or prevented until actual damage is sustained. Polymer breakdown From a molecular perspective, traditional polymers yield to mechanical stress through cleavage of sigma bonds. While newer polymers can yield in other ways, traditional polymers typically yield through homolytic or heterolytic bond cleavage. The factors that determine how a polymer will yield include: type of stress, chemical properties inherent to the polymer, level and type of solvation, and temperature. From a macromolecular perspective, stress induced damage at the molecular level leads to larger scale damage called microcracks. A microcrack is formed where neighboring polymer chains have been damaged in close proximity, ultimately leading to the weakening of the fiber as a whole. Homolytic bond cleavage Polymers have been observed to undergo homolytic bond cleavage through the use of radical reporters such as DPPH (2,2-diphenyl-1-picrylhydrazyl) and PMNB (pentamethylnitrosobenzene.) When a bond is cleaved homolytically, two radical species are formed that can recombine to repair damage or can initiate other homolytic cleavages which can in turn lead to more damage. Heterolytic bond cleavage Polymers have also been observed to undergo heterolytic bond cleavage through isotope labeling experiments. When a bond is cleaved heterolytically, cationic and anionic species are formed that can in turn recombine to repair damage, can be quenched by solvent, or can react destructively with nearby polymers. Reversible bond cleavage Certain polymers yield to mechanical stress in an atypical, reversible manner. Diels-Alder-based polymers undergo a reversible cycloaddition, where mechanical stress cleaves two sigma bonds in a retro Diels-Alder reaction. This stress results in additional pi-bonded electrons as opposed to radical or charged moieties. Supramolecular breakdown Supramolecular polymers are composed of monomers that interact non-covalently. Common interactions include hydrogen bonds, metal coordination, and van der Waals forces. Mechanical stress in supramolecular polymers causes the disruption of these specific non-covalent interactions, leading to monomer separation and polymer breakdown. Intrinsic polymer-based systems In intrinsic systems, the material is inherently able to restore its integrity. While extrinsic approaches are generally autonomous, intrinsic systems often require an external trigger for the healing to take place (such as thermo-mechanical, electrical, photo-stimuli, etc.). It is possible to distinguish among 5 main intrinsic self-healing strategies. The first one is based on reversible reactions, and the most widely used reaction scheme is based on Diels-Alder (DA) and retro-Diels-Alder (rDA) reactions. Another strategy achieves the self-healing in thermoset matrices by incorporating meltable thermoplastic additives. A temperature trigger allows the redispertion of thermoplastic additives into cracks, giving rise to mechanical interlocking. Polymer interlockings based on dynamic supramolecular bonds or ionomers represent a third and fourth scheme. The involved supramolecular interactions and ionomeric clusters are generally reversible and act as reversible cross-links, thus can equip polymers with self-healing ability. Finally, an alternative method for achieving intrinsic self-healing is based on molecular diffusion. 
Reversible bond-based polymers Reversible systems are polymeric systems that can revert to the initial state whether it is monomeric, oligomeric, or non-cross-linked. Since the polymer is stable under normal condition, the reversible process usually requires an external stimulus for it to occur. For a reversible healing polymer, if the material is damaged by means such as heating and reverted to its constituents, it can be repaired or "healed" to its polymer form by applying the original condition used to polymerize it. Polymer systems based on covalent bond formation and breakage Diels-Alder and retro-Diels-Alder Among the examples of reversible healing polymers, the Diels-Alder (DA) reaction and its retro-Diels-Alder (RDA) analogue seems to be very promising due to its thermal reversibility. In general, the monomer containing the functional groups such as furan or maleimide form two carbon-carbon bonds in a specific manner and construct the polymer through DA reaction. This polymer, upon heating, breaks down to its original monomeric units via RDA reaction and then reforms the polymer upon cooling or through any other conditions that were initially used to make the polymer. During the last few decades, two types of reversible polymers have been studied: (i) polymers where the pendant groups, such as furan or maleimide groups, cross-link through successive DA coupling reactions; (ii) polymers where the multifunctional monomers link to each other through successive DA coupling reactions. Cross-linked polymers In this type of polymer, the polymer forms through the cross linking of the pendant groups from the linear thermoplastics. For example, Saegusa et al. have shown the reversible cross-linking of modified poly(N-acetylethyleneimine)s containing either maleimide or furancarbonyl pendant moideties. The reaction is shown in Scheme 3. They mixed the two complementary polymers to make a highly cross-linked material through DA reaction of furan and maleimide units at room temperature, as the cross-linked polymer is more thermodynamically stable than the individual starting materials. However, upon heating the polymer to 80 °C for two hours in a polar solvent, two monomers were regenerated via RDA reaction, indicating the breaking of polymers. This was possible because the heating energy provided enough energy to go over the energy barrier and results in the two monomers. Cooling the two starting monomers, or damaged polymer, to room temperature for 7 days healed and reformed the polymer. The reversible DA/RDA reaction is not limited to furan-meleimides based polymers as it is shown by the work of Schiraldi et al. They have shown the reversible cross-linking of polymers bearing pendent anthracene group with maleimides. However, the reversible reaction occurred only partially upon heating to 250 °C due to the competing decomposition reaction. Polymerization of multifunctional monomers In these systems, the DA reaction takes place in the backbone itself to construct the polymer, not as a link. For polymerization and healing processes of a DA-step-growth furan-maleimide based polymer (3M4F) were demonstrated by subjecting it to heating/cooling cycles. Tris-maleimide (3M) and tetra-furan (4F) formed a polymer through DA reaction and, when heated to 120 °C, de-polymerized through RDA reaction, resulting in the starting materials. Subsequent heating to 90–120 °C and cooling to room temperature healed the polymer, partially restoring its mechanical properties through intervention. 
The reaction is shown in Scheme 4. Thiol-based polymers The thiol-based polymers have disulfide bonds that can be reversibly cross-linked through oxidation and reduction. Under reducing condition, the disulfide (SS) bridges in the polymer breaks and results in monomers, however, under oxidizing condition, the thiols (SH) of each monomer forms the disulfide bond, cross-linking the starting materials to form the polymer. Chujo et al. have shown the thiol-based reversible cross-linked polymer using poly(N-acetylethyleneimine). (Scheme 5) Poly(urea-urethane) A soft poly(urea-urethane) network uses the metathesis reaction in aromatic disulphides to provide room-temperature self-healing properties, without the need for external catalysts. This chemical reaction is naturally able to create covalent bonds at room temperature, allowing the polymer to autonomously heal without an external source of energy. Left to rest at room temperature, the material mended itself with 80 percent efficiency after only two hours and 97 percent after 24 hours. In 2014 a polyurea elastomer-based material was shown to be self-healing, melding together after being cut in half, without the addition of catalysts or other chemicals. The material also include inexpensive commercially available compounds. The elastomer molecules were tweaked, making the bonds between them longer. The resulting molecules are easier to pull apart from one another and better able to rebond at room temperature with almost the same strength. The rebonding can be repeated. Stretchy, self-healing paints and other coatings recently took a step closer to common use, thanks to research being conducted at the University of Illinois. Scientists there have used "off-the-shelf" components to create a polymer that melds back together after being cut in half, without the addition of catalysts or other chemicals. The urea-urethane polymers however have glassy transition temperatures below 273 K therefore at room temperature they are gels and their tensile strength is low. To optimize the tensile strength the reversible bonding energy, or the polymer length must be increased to increase the degree of covalent or mechanical interlocking respectively. However, increase polymer length inhibits mobility and thereby impairs the ability for polymers to re-reversibly bond. Thus at each polymer length an optimal reversible bonding energy exists. Vitrimers Vitrimers are a subset of polymers that bridge the gap between thermoplastics and thermosets. Their dependence on dissociative and associative exchange within dynamic covalent adaptable networks allows for a variety of chemical systems to be accessed that allow for the synthesis of mechanically robust materials with the ability to be reprocessed many times while maintaining their structural properties and mechanical strength. The self-healing aspect of these materials is due to the bond exchange of crosslinked species as a response to applied external stimuli, such as heat. Dissociative exchange is the process by which crosslinks are broken prior to recombination of crosslinking species, thereby recovering the crosslink density after exchange. Examples of dissociative exchange include reversible pericyclic reactions, nucleophilic transalkylation, and aminal transamination. Associative exchange involves the substitution reaction with an existing crosslink and the retention of crosslinks throughout exchange. 
Examples of associative exchange include transesterification, transamination of vinylogous urethanes, imine exchange, and transamination of diketoneamines.  Vitrimers possessing nanoscale morphology are being studied, through the use of block copolymer vitrimers in comparison to statistical copolymer analogues, to understand the effects of self-assembly on exchange rates, viscoelastic properties, and reprocessability. Other than recycling, vitrimer materials show promise for applications in medicine, for example self-healable bioepoxy, and applications in self-healing electronic screens. While these polymeric systems are still in their infancy they serve to produce commercially relevant, recyclable materials in the coming future as long as more work is done to tailor these chemical systems to commercially relevant monomers and polymers, as well as develop better mechanical testing and understanding of material properties throughout the lifetime of these materials (i.e. post reprocess cycles). Copolymers with van der Waals force If perturbation of van der Waals forces upon mechanical damage is energetically unfavourable, interdigitated alternating or random copolymer motifs will self-heal to an energetically more favourable state without external intervention. This self-healing behavior occurs within a relatively narrow compositional range depended on a viscoelastic response that energetically favours self-recovery upon chain separation, owing to ‘key-and-lock’ associations of the neighbouring chains. In essence, van der Waals forces stabilize neighbouring copolymers, which is reflected in enhanced cohesive-energy density (CED) values. Urban etc. illustrates how induced dipole interactions for alternating or random poly(methyl methacrylate-alt-ran-n-butyl acrylate) (p(MMA-alt-ran-nBA)) copolymers owing to directional van der Waals forces may enhance the CED at equilibrium (CEDeq) of entangled and side-by-side copolymer chains. Extrinsic polymer-based systems In extrinsic systems, the healing chemistries are separated from the surrounding polymer in microcapsules or vascular networks which, after material damage/cracking, release their content into the crack plane, reacting and allowing the restoration of material functionalities. These systems can be further subdivided in several categories. While capsule-based polymers sequester the healing agents in little capsules that only release the agents if they are ruptured, vascular self-healing materials sequester the healing agent in capillary type hollow channels that can be interconnected one dimensionally, two dimensionally, or three dimensionally. After one of these capillaries is damaged, the network can be refilled by an outside source or another channel that was not damaged. Intrinsic self-healing materials do not have a sequestered healing agent but instead have a latent self-healing functionality that is triggered by damage or by an outside stimulus. Extrinsic self-healing materials can achieve healing efficiencies over 100% even when the damage is large. Microcapsule healing Capsule-based systems have in common that healing agents are encapsulated into suitable microstructures that rupture upon crack formation and lead to a follow-up process in order to restore the materials' properties. If the walls of the capsule are created too thick, they may not fracture when the crack approaches, but if they are too thin, they may rupture prematurely. 
In order for this process to happen at room temperature, and for the reactants to remain in a monomeric state within the capsule, a catalyst is also imbedded into the thermoset. The catalyst lowers the energy barrier of the reaction and allows the monomer to polymerize without the addition of heat. The capsules around the monomer are important to maintain separation until the crack facilitates the reaction. In the capsule-catalyst system, the encapsulated healing agent is released into the polymer matrix and reacts with the catalyst, already present in the matrix. There are many challenges in designing this type of material. First, the reactivity of the catalyst must be maintained even after it is enclosed in wax. Additionally, the monomer must flow at a sufficient rate (have low enough viscosity) to cover the entire crack before it is polymerized, or full healing capacity will not be reached. Finally, the catalyst must quickly dissolve into the monomer in order to react efficiently and prevent the crack from spreading further. This process has been demonstrated with dicyclopentadiene (DCPD) and Grubbs' catalyst (benzylidene-bis(tricyclohexylphosphine)dichlororuthenium). Both DCPD and Grubbs' catalyst are imbedded in an epoxy resin. The monomer on its own is relatively unreactive and polymerization does not take place. When a microcrack reaches both the capsule containing DCPD and the catalyst, the monomer is released from the core–shell microcapsule and comes in contact with exposed catalyst, upon which the monomer undergoes ring opening metathesis polymerization (ROMP). The metathesis reaction of the monomer involves the severance of the two double bonds in favor of new bonds. The presence of the catalyst allows for the energy barrier (energy of activation) to be lowered, and the polymerization reaction can proceed at room temperature. The resulting polymer allows the epoxy composite material to regain 67% of its former strength. Grubbs' catalyst is a good choice for this type of system because it is insensitive to air and water, thus robust enough to maintain reactivity within the material. Using a live catalyst is important to promote multiple healing actions. The major drawback is the cost. It was shown that using more of the catalyst corresponded directly to higher degree of healing. Ruthenium is quite costly, which makes it impractical for commercial applications. In contrast, in multicapsule systems both the catalyst and the healing agent are encapsulated in different capsules. In a third system, called latent functionality, a healing agent is encapsulated, that can react with the polymerizer component that is present in the matrix in the form of residual reactive functionalities. In the last approach (phase separation), either the healing agent or the polymerizer is phase-separated in the matrix material. Vascular approaches The same strategies can be applied in 1D, 2D and 3D vascular based systems. Hollow tube approach For the first method, fragile glass capillaries or fibers are imbedded within a composite material. (Note: this is already a commonly used practice for strengthening materials. See Fiber-reinforced plastic.) The resulting porous network is filled with monomer. When damage occurs in the material from regular use, the tubes also crack and the monomer is released into the cracks. Other tubes containing a hardening agent also crack and mix with the monomer, causing the crack to be healed. 
There are many things to take into account when introducing hollow tubes into a crystalline structure. The first thing to consider is that the created channels may compromise the load-bearing ability of the material due to the removal of load-bearing material. Also, the channel diameter, degree of branching, location of branch points, and channel orientation are some of the main things to consider when building up microchannels within a material. Materials that do not need to withstand much mechanical strain, but require self-healing properties, can accommodate more microchannels than materials that are meant to be load bearing. There are two types of hollow tubes: discrete channels, and interconnected channels.
Discrete channels
Discrete channels can be built independently of building the material and are placed in an array throughout the material. When creating these microchannels, one major factor to take into account is that the closer the tubes are together, the lower the strength will be, but the more efficient the recovery will be. A sandwich structure is a type of discrete-channel architecture that consists of tubes in the center of the material, and heals outwards from the middle. The stiffness of sandwich structures is high, making it an attractive option for pressurized chambers. For the most part in sandwich structures, the strength of the material is maintained as compared to vascular networks. Also, the material shows almost full recovery from damage.
Interconnected networks
Interconnected networks are more efficient than discrete channels, but are harder and more expensive to create. The most basic way to create these channels is to apply basic machining principles to create micro scale channel grooves. These techniques yield channels of 600 to 700 micrometers. This technique works well in the two-dimensional plane, but is limited when trying to create a three-dimensional network.
Direct ink writing
The Direct Ink Writing (DIW) technique is a controlled extrusion of viscoelastic inks to create three-dimensional interconnected networks. It works by first setting organic ink in a defined pattern. Then the structure is infiltrated with a material like an epoxy. This epoxy is then solidified, and the ink can be sucked out with a modest vacuum, creating the hollow tubes.
Carbon nanotube networks
Through dissolving a linear polymer inside a solid three-dimensional epoxy matrix, so that they are miscible with each other, the linear polymer becomes mobile at a certain temperature. When carbon nanotubes are also incorporated into the epoxy material, and a direct current is run through the tubes, a significant shift in the sensing curve indicates permanent damage to the polymer, thus ‘sensing’ a crack. When the carbon nanotubes sense a crack within the structure, they can be used as thermal transports to heat up the matrix so that the linear polymers can diffuse to fill the cracks in the epoxy matrix, thus healing the material.
SLIPS
A different approach was suggested by Prof. J. Aizenberg from Harvard University, who proposed the use of Slippery Liquid-Infused Porous Surfaces (SLIPS), a porous material inspired by the carnivorous pitcher plant and filled with a lubricating liquid immiscible with both water and oil. SLIPS possess self-healing and self-lubricating properties as well as icephobicity and were successfully used for many purposes. 
Sacrificial thread stitching Organic threads (such as polylactide filament for example) are stitched through laminate layers of fiber reinforced polymer, which are then boiled and vacuumed out of the material after curing of the polymer, leaving behind empty channels than can be filled with healing agents. Self-healing fibre-reinforced polymer composites Methods for the implementation of self-healing functionality into filled composites and fibre reinforced polymers (FRPs) are almost exclusively based on extrinsic systems and thus can be broadly classified into two approaches; discrete capsule-based systems and continuous vascular systems. In contrast to non-filled polymers, the success of an intrinsic approach based on bond reversibility has yet to be proven in FRPs. To date, self-healing of FRPs has mostly been applied to simple structures such as flat plates and panels. There is however a somewhat limited application of self-healing in flat panels, as access to the panel surface is relatively simple and repair methods are very well established in industry. Instead, there has been a strong focus on implementing self-healing in more complex and industrially relevant structures such as T-Joints and Aircraft Fuselages. Capsule-based systems The creation of a capsule-based system was first reported by White et al. in 2001, and this approach has since been adapted by a number of authors for introduction into fibre reinforced materials. This method relies on the release of an encapsulated healing agent into the damage zone, and is generally a once off process as the functionality of the encapsulated healing agent cannot be restored. Even so, implemented systems are able to restore material integrity to almost 100% and remain stable over the material lifetime. Vascular systems A vascular or fibre-based approach may be more appropriate for self-healing impact damage in fibre-reinforced polymer composite materials. In this method, a network of hollow channels known as vascules, similar to the blood vessels within human tissue, are placed within the structure and used for the introduction of a healing agent. During a damage event cracks propagate through the material and into the vascules causing them to be cleaved open. A liquid resin is then passed through the vascules and into the damage plane, allowing the cracks to be repaired. Vascular systems have a number of advantages over microcapsule based systems, such as the ability to continuously deliver large volumes of repair agents and the potential to be used for repeated healing. The hollow channels themselves can also be used for additional functionality, such as thermal management and structural health monitoring. A number of methods have been proposed for the introduction of these vascules, including the use of hollow glass fibres (HGFs), 3D printing, a "lost wax" process and a solid preform route. Self-healing coatings Coatings allow the retention and improvement of bulk properties of a material. They can provide protection for a substrate from environmental exposure. Thus, when damage occurs (often in the form of microcracks), environmental elements like water and oxygen can diffuse through the coating and may cause material damage or failure. Microcracking in coatings can result in mechanical degradation or delamination of the coating, or in electrical failure in fibre-reinforced composites and microelectronics, respectively. As the damage is on such a small scale, repair, if possible, is often difficult and costly. 
Therefore, a coating that can automatically heal itself (“self-healing coating”) could prove beneficial by automatically recovering properties (such as mechanical, electrical and aesthetic properties), and thus extending the lifetime of the coating. The majority of the approaches that are described in literature regarding self-healing materials can be applied to make “self-healing” coatings, including microencapsulation and the introduction of reversible physical bonds such as hydrogen bonding, ionomers and chemical bonds (Diels-Alder chemistry). Microencapsulation is the most common method to develop self-healing coatings. The capsule approach originally described by White et al., using microencapsulated dicyclopentadiene (DCPD) monomer and Grubbs' catalyst to self-heal epoxy polymer, was later adapted to epoxy adhesive films that are commonly used in the aerospace and automotive industries for bonding metallic and composite substrates. Recently, microencapsulated liquid suspensions of metal or carbon black were used to restore electrical conductivity in a multilayer microelectronic device and battery electrodes, respectively; however, the use of microencapsulation for restoration of electrical properties in coatings is limited. Liquid metal microdroplets have also been suspended within silicone elastomer to create stretchable electrical conductors that maintain electrical conductivity when damaged, mimicking the resilience of soft biological tissue. The most common application of this technique is proven in polymer coatings for corrosion protection. Corrosion protection of metallic materials is of significant importance on an economic and ecological scale. To prove the effectiveness of microcapsules in polymer coatings for corrosion protection, researchers have encapsulated a number of materials. These materials include isocyanates, monomers such as DCPD and GMA, epoxy resin, linseed oil, tung oil, and drugs. By using the aforementioned materials for self-healing in coatings, it was proven that microencapsulation effectively protects the metal against corrosion and extends the lifetime of a coating. Coatings in high temperature applications may be designed to exhibit self-healing performance through the formation of a glass. In such cases, such as high-emissivity coatings, the viscosity of the glass formed determines the self-healing ability of the coating, which may compete with defect formation due to oxidation or ablation. Silicate-glass-based self-healing materials are of particular value in thermal barrier coatings and towards space applications such as heat shields. Composite materials based on molybdenum disilicide are the subject of various studies towards enhancing their glass-based self-healing performance in coating applications.
Self-healing cementitious materials
Cementitious materials have existed since the Roman era. These materials have a natural ability to self-heal, which was first reported by the French Academy of Science in 1836. This ability can be improved by the integration of chemical and biochemical strategies.
Autogenous healing
Autogenous healing is the natural ability of cementitious materials to repair cracks. This ability is principally attributed to further hydration of unhydrated cement particles and carbonation of dissolved calcium hydroxide. Cementitious materials in fresh-water systems can autogenously heal cracks up to 0.2 mm over a period of 7 weeks. 
In order to promote autogenous healing and to close wider cracks, superabsorbent polymers can be added to a cementitious mixture. Addition of 1 m% of a selected superabsorbent polymer (relative to cement) to a cementitious material stimulated further hydration by nearly 40% in comparison with a traditional cementitious material, if 1 h of water contact per day was allowed.
Chemical additives based healing
Self-healing of cementitious materials can be achieved through the reaction of certain chemical agents. Two main strategies exist for housing these agents, namely capsules and vascular tubes. These capsules and vascular tubes, once ruptured, release these agents and heal the crack damage. Studies have mainly focused on improving the quality of these housings and encapsulated materials in this field.
Bio-based healing
According to a 1996 study by H. L. Erlich in Chemical Geology journal, the self-healing ability of concrete has been improved by the incorporation of bacteria, which can induce calcium carbonate precipitation through their metabolic activity. These precipitates can build up and form an effective seal against crack related water ingress. At the First International Conference on Self Healing Materials held in April 2007 in the Netherlands, Henk M. Jonkers and Erik Schlangen presented their research in which they had successfully used the "alkaliphilic spore-forming bacteria" as a "self-healing agent in concrete". They were the first to incorporate bacteria within cement paste for the development of self-healing concrete. It was found that the bacteria directly added to the paste only remained viable for 4 months. Later studies saw Jonkers use expanded clay particles and Van Tittelboom use glass tubes, to protect the bacteria inside the concrete. Other strategies to protect the bacteria have also since been reported.
Self-healing ceramics
Generally, ceramics are superior in strength to metals at high temperatures; however, they are brittle and sensitive to flaws, and this brings into question their integrity and reliability as structural materials. Mₙ₊₁AXₙ phase ceramics, also known as MAX phases, can autonomously heal crack damage by an intrinsic healing mechanism. Microcracks caused by wear or thermal stress are filled with oxides formed from the MAX phase constituents, commonly the A-element, during high temperature exposure to air. Crack gap filling was first demonstrated for Ti₃AlC₂ by oxidation at 1200 °C in air. Ti₂AlC and Cr₂AlC have also demonstrated said ability, and more ternary carbides and nitrides are expected to be able to autonomously self-heal. The process is repeatable up to the point of element depletion, distinguishing MAX phases from other self-healing materials that require external healing agents (extrinsic healing) for single crack gap filling. Depending on the filling oxide, improvement of initial properties such as local strength can be achieved. On the other hand, mullite, alumina and zirconia do not have the ability to heal intrinsically, but could be endowed with self-healing capabilities by embedding second phase components into the matrix. Upon cracking, these particles are exposed to oxygen, and in the presence of heat, they react to form new materials which fill the crack gap under volume expansion. This concept has been proven using SiC to heal cracks in an alumina matrix, and further studies have investigated the high temperature strength, and the static and cyclic fatigue strength of the healed part. 
The strength and bonding between the matrix and the healing agent are of prime importance and thus govern the selection of the healing particles. Self-healing metals When exposed for long times to high temperatures and moderate stresses, metals exhibit premature and low-ductility creep fracture, arising from the formation and growth of cavities. Those defects coalesce into cracks which ultimately cause macroscopic failure. Self-healing of early stage damage is thus a promising new approach to extend the lifetime of the metallic components. In metals, self-healing is intrinsically more difficult to achieve than in most other material classes, due to their high melting point and, as a result, low atom mobility. Generally, defects in the metals are healed by the formation of precipitates at the defect sites that immobilize further crack growth. Improved creep and fatigue properties have been reported for underaged aluminium alloys compared to the peak hardening Al alloys, which is due to the heterogeneous precipitation at the crack tip and its plastic zone. The first attempts to heal creep damage in steels were focused on the dynamic precipitation of either Cu or BN at the creep-cavity surface. Cu precipitation has only a weak preference for deformation-induced defects as a large fraction of spherical Cu precipitates is simultaneously formed with the matrix. Recently, gold atoms were recognized as a highly efficient healing agents in Fe-based alloys. A defect-induced mechanism is indicated for the Au precipitation, i.e. the Au solute remains dissolved until defects are formed. Autonomous repair of high-temperature creep damage was reported by alloying with a small amount of Au. Healing agents selectively precipitate at the free surface of a creep cavity, resulting in pore filling. For the lower stress levels up to 80% filling of the creep cavities with Au precipitates is achieved resulting in a substantial increase in creep life time. Work to translate the concept of creep damage healing in simple binary or ternary model systems to real multicomponent creep steels is ongoing. In 2023 the Sandia National Laboratories reported the finding of self-healing of fatigue cracks in metal and reported that the observations seems to confirm a 2013 study predicting the effect. Self-healing hydrogels Hydrogels are soft solids consisting of a three dimensional network of natural or synthetic polymers with a high water content. Hydrogels based on non-covalent interactions or dynamic covalent chemistry can exhibit self-healing properties after cutting or breaking. Hydrogels that can fully fluidize followed by self-healing are of particular interest in biomedical engineering for the development of injectable hydrogels for tissue regeneration or 3D bioprinting inks. Self-healing organic dyes Recently, several classes of organic dyes were discovered that self-heal after photo-degradation when doped in PMMA and other polymer matrices. This is also known as reversible photo-degradation. It was shown that, unlike common process like molecular diffusion, the mechanism is caused by dye-polymer interaction. Self-healing of ice It has recently been shown that micrometer-sized defects in a pristine layer of ice heal spontaneously within a matter of several hours. The generated curvature by any defect causes a local increased vapor pressure and therefore enhances the volatility of the surface molecules. Hence, the mobility of the upper layer of water molecules increases significantly. 
The main mechanism that dominates this healing effect is therefore sublimation from, and condensation onto, the surface. This opposes earlier work that describes sintering of ice spheres by surface diffusion.
Further applications
Self-healing epoxies can be incorporated onto metals in order to prevent corrosion. A substrate metal showed major degradation and rust formation after 72 hours of exposure. But after being coated with the self-healing epoxy, there was no visible damage under SEM after 72 hours of the same exposure.
Assessment of self-healing efficacy
Numerous methodologies for the assessment of self-healing capabilities have been developed for each material class. Hence, when self-healing is assessed, different parameters need to be considered: type of stimulus (if any), healing time, maximum number of healing cycles the material can tolerate, and degree of recovery, all whilst considering the material's virgin properties. This typically takes account of relevant physical parameters such as tensile modulus, elongation at break, fatigue-resistance, barrier properties, colour and transparency. The self-healing ability of a given material generally refers to the recovery of a specific property relative to the virgin material, designated as the self-healing efficiency. The self-healing efficiency can be quantified by comparing the respective experimental value obtained for the undamaged virgin sample (fvirgin) with that of the healed sample (fhealed):
ƞ = fhealed / fvirgin
In a variation of this definition that is relevant to extrinsic self-healing materials, the healing efficiency takes into consideration the modification of properties caused by introducing the healing agent. Accordingly, the healed sample property is compared to that of an undamaged control equipped with the self-healing agent (fnon-healed):
ƞ = fhealed / fnon-healed
For a certain property Pi of a specific material, an optimal self-healing mechanism and process is characterized by the full restoration of the respective material property after a suitable, normalized damaging process. For a material where 3 different properties are assessed, three efficiencies should be determined, given as ƞ1(P1), ƞ2(P2) and ƞ3(P3). The final average efficiency based on a number n of properties for a self-healing material is accordingly determined as the harmonic mean:
ƞ = n / (1/ƞ1 + 1/ƞ2 + ... + 1/ƞn)
The harmonic mean is more appropriate than the traditional arithmetic mean, as it is less sensitive to large outliers.
Commercialization
At least two companies are attempting to bring the newer applications of self-healing materials to the market. Arkema, a leading chemicals company, announced in 2009 the beginning of industrial production of self-healing elastomers. As of 2012, Autonomic Materials Inc. had raised over three million US dollars. References External links Biomaterials Catalysis Materials science Smart materials
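The efficiency definitions in the assessment section above can be captured in a short Python sketch; it assumes nothing beyond the ratios and the harmonic mean described in the text, and the property values at the end are hypothetical numbers used only for illustration:

from statistics import harmonic_mean

def healing_efficiency(f_healed: float, f_virgin: float) -> float:
    """Self-healing efficiency for one property: healed value relative to the
    undamaged virgin material, as defined in the text."""
    return f_healed / f_virgin

def healing_efficiency_extrinsic(f_healed: float, f_non_healed: float) -> float:
    """Variant for extrinsic systems: the reference is an undamaged control
    that already contains the healing agent."""
    return f_healed / f_non_healed

def overall_efficiency(efficiencies: list[float]) -> float:
    """Average efficiency over several properties, using the harmonic mean so
    that single large outliers do not dominate."""
    return harmonic_mean(efficiencies)

# Hypothetical property recoveries, purely for illustration:
etas = [healing_efficiency(42.0, 50.0),  # e.g. tensile strength
        healing_efficiency(2.1, 2.4),    # e.g. elongation at break
        healing_efficiency(0.88, 1.0)]   # e.g. normalized barrier property
print(overall_efficiency(etas))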
Self-healing material
[ "Physics", "Chemistry", "Materials_science", "Engineering", "Biology" ]
9,068
[ "Biomaterials", "Catalysis", "Applied and interdisciplinary physics", "Materials science", "Materials", "Smart materials", "nan", "Chemical kinetics", "Matter", "Medical technology" ]
5,824,791
https://en.wikipedia.org/wiki/Muffle%20furnace
A muffle furnace or muffle oven (sometimes retort furnace in historical usage) is a furnace in which the subject material is isolated from the fuel and all of the products of combustion, including gases and flying ash. After the development of high-temperature heating elements and widespread electrification in developed countries, new muffle furnaces quickly moved to electric designs.
Modern furnace
Today, a muffle furnace is often a front-loading box-type oven or kiln for high-temperature applications such as fusing glass, creating enamel coatings, firing ceramics, and soldering and brazing articles. They are also used in many research facilities, for example by chemists in order to determine what proportion of a sample is non-combustible and non-volatile (i.e., ash). Some models incorporate programmable digital controllers, allowing automatic execution of ramping, soaking, and sintering steps. Also, advances in materials for heating elements, such as molybdenum disilicide, can now reach considerably higher working temperatures, which facilitate more sophisticated metallurgical applications. The heat source may be gas or oil burners, but more often they are now electric. The term muffle furnace may also be used to describe another oven constructed on many of the same principles as the box-type kiln mentioned above, but which takes the form of a long, wide, and thin hollow tube used in roll-to-roll manufacturing processes. Both of the above-mentioned furnaces are usually heated to desired temperatures by conduction, convection, or blackbody radiation from electrical resistance heater elements. Therefore, there is (usually) no combustion involved in the temperature control of the system, which allows for much greater control of temperature uniformity and assures isolation of the material being heated from the byproducts of fuel combustion.
Muffle kilns
Historically, small muffle ovens were often used for a second firing of porcelain at a relatively low temperature to fix overglaze enamels; these tend to be called muffle kilns. The pigments for most enamel colours discoloured at the high temperatures required for the body and glaze of the porcelain. They were used for painted enamels on metal for the same reason. Like other types of muffle furnaces, the design isolates the objects from the flames producing the heat (with electricity this is not so important). For historical overglaze enamels the kiln was generally far smaller than that for the main firing, and produced firing temperatures in the approximate range of 750 to 950 °C, depending on the colours used. Typically, wares were fired for between five and twelve hours and then cooled over twelve hours. References Industrial furnaces Decorative arts Pottery Metallurgy
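Because the electric designs described above heat the load largely by radiation from resistance elements, the strong temperature dependence of radiant output can be illustrated with the Stefan–Boltzmann law. The element area, temperatures and emissivity in this sketch are assumed, illustrative values, not data about any particular furnace:

SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def radiated_power(area_m2: float, temp_k: float, emissivity: float = 0.7) -> float:
    """Gross radiative power (W) emitted by an element surface of the given
    area and temperature; radiation absorbed back from the surroundings is ignored."""
    return emissivity * SIGMA * area_m2 * temp_k ** 4

# Illustrative only: a 0.01 m^2 element surface at two temperatures.
print(radiated_power(0.01, 1500.0))  # roughly 2.0 kW
print(radiated_power(0.01, 1800.0))  # roughly 4.2 kW, showing the T^4 scaling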
Muffle furnace
[ "Chemistry", "Materials_science", "Engineering" ]
559
[ "Metallurgical processes", "Metallurgy", "Materials science", "Industrial furnaces", "nan" ]
5,825,422
https://en.wikipedia.org/wiki/Robbins%20algebra
In abstract algebra, a Robbins algebra is an algebra containing a single binary operation, usually denoted by ∨, and a single unary operation, usually denoted by ¬, satisfying the following axioms. For all elements a, b, and c: Associativity: a ∨ (b ∨ c) = (a ∨ b) ∨ c Commutativity: a ∨ b = b ∨ a Robbins equation: ¬(¬(a ∨ b) ∨ ¬(a ∨ ¬b)) = a For many years, it was conjectured, but unproven, that all Robbins algebras are Boolean algebras. This was proved in 1996, so the term "Robbins algebra" is now simply a synonym for "Boolean algebra". History In 1933, Edward Huntington proposed a new set of axioms for Boolean algebras, consisting of (1) and (2) above, plus: Huntington's equation: ¬(¬a ∨ ¬b) ∨ ¬(¬a ∨ b) = a From these axioms, Huntington derived the usual axioms of Boolean algebra. Very soon thereafter, Herbert Robbins posed the Robbins conjecture, namely that the Huntington equation could be replaced with what came to be called the Robbins equation, and the result would still be Boolean algebra. ∨ would interpret Boolean join and ¬ Boolean complement. Boolean meet and the constants 0 and 1 are easily defined from the Robbins algebra primitives. Pending verification of the conjecture, the system of Robbins was called "Robbins algebra." Verifying the Robbins conjecture required proving Huntington's equation, or some other axiomatization of a Boolean algebra, as theorems of a Robbins algebra. Huntington, Robbins, Alfred Tarski, and others worked on the problem, but failed to find a proof or counterexample. William McCune proved the conjecture in 1996, using the automated theorem prover EQP. For a complete proof of the Robbins conjecture in one consistent notation and following McCune closely, see Mann (2003). Dahn (1998) simplified McCune's machine proof. See also Algebraic structure Minimal axioms for Boolean algebra References Dahn, B. I. (1998) Abstract to "Robbins Algebras Are Boolean: A Revision of McCune's Computer-Generated Solution of Robbins Problem," Journal of Algebra 208(2): 526–32. Mann, Allen (2003) "A Complete Proof of the Robbins Conjecture." William McCune, "Robbins Algebras Are Boolean," With links to proofs and other papers. Boolean algebra Formal methods Computer-assisted proofs
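As a quick sanity check (not part of the original article), the two-element Boolean algebra with OR as join and NOT as complement satisfies both the Robbins and Huntington equations, which can be verified by brute force; this illustrates only the easy direction (every Boolean algebra is a Robbins algebra), the hard converse being McCune's 1996 theorem:

```python
# Verify that ({0, 1}, OR, NOT) satisfies the Robbins and Huntington equations.
els = (0, 1)
join = lambda a, b: a | b     # Boolean join (disjunction)
neg = lambda a: 1 - a         # Boolean complement (negation)

robbins = all(neg(join(neg(join(a, b)), neg(join(a, neg(b))))) == a
              for a in els for b in els)
huntington = all(join(neg(join(neg(a), neg(b))), neg(join(neg(a), b))) == a
                 for a in els for b in els)
print(robbins, huntington)    # True True
```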
Robbins algebra
[ "Mathematics", "Engineering" ]
475
[ "Boolean algebra", "Computer-assisted proofs", "Mathematical logic", "Software engineering", "Fields of abstract algebra", "Formal methods" ]
7,601,607
https://en.wikipedia.org/wiki/Friedman%20Unit
The Friedman Unit, or simply Friedman, is a tongue-in-cheek neologism. One Friedman Unit is equal to six months, specifically the "next six months", a period repeatedly declared by New York Times columnist Thomas Friedman to be the most critical of the then-ongoing Iraq War, even though such pronouncements extended back over two and a half years. History The term is in reference to a May 16, 2006, article by Fairness and Accuracy in Reporting (FAIR) detailing the repeated use by columnist Thomas Friedman of "the next six months" as the period in which, according to Friedman, "we're going to find out... whether a decent outcome is possible" in the Iraq War. As documented by FAIR, Friedman had been making such six-month predictions for a period of two and a half years, on at least fourteen different occasions, starting with a column in the November 30, 2003, edition of The New York Times, in which he stated: "The next six months in Iraq—which will determine the prospects for democracy-building there—are the most important six months in U.S. foreign policy in a long, long time." In tribute to Friedman's recurring prognostications, blogger Duncan Black (Atrios) coined the eponymous unit. Similar predictions by others began to be judged by the new metric in the blogosphere. For example, Zalmay Khalilzad, U.S. Ambassador to Iraq, had said in June 2005 that "the next nine months are critical", now interpreted as predicting that in 1.5 Friedman units the U.S. might have a better grasp on the situation. In September 2007, Friedman agreed not to call for any more FUs: "Stephen Colbert brings on Thomas Friedman, author of The World is Flat.... Colbert offers up the same six months (also known as a Friedman Unit or FU) that Friedman has spent the last four years claiming would be all we need to see "success in Iraq", but Friedman admits we're all out of FUs." However, in July 2008, he again called for a delay in Barack Obama's plan to withdraw troops from Iraq, in effect adopting John McCain's position. Even after Barack Obama had designated the summer of 2010 as his goal for starting to bring American troops home from Iraq, with the end of 2010 as his deadline, in October 2008 Friedman was reported as calling for another Friedman unit, giving his opinion that, if elected, President Obama should let troops remain until 2011. A Way with Words noted that Friedman had also been calling for more Friedman units in other contexts. This refers to his appearance on The Daily Show on June 12, 2006. The term has been used in puns, for example, "Commander of the Friedman Unit," an article about the political hopes of Friedman for a third-party candidate in the U.S. presidential race of 2012. Impact The unit is used in discussions of whether it is too soon to tell if the U.S. is making progress. It worked its way into commentary on the war and on whether a given interval was indeed critical. Journalist Spencer Ackerman referred to the unit to measure the progress of American goals in other Middle Eastern countries, such as Afghanistan in 2009: "Ah, the Friedman unit, that beloved Internet tradition denoting the six-month increment many pundits believe will prove decisive in any war, only to be subject to an endless addition of ... Friedman units." Ezra Klein invented another Friedman-based term: "Friedman's Choice ... states that our real choices in Iraq are 10 months or 10 years.
Either we commit the resources to entirely rebuild the place over a decade, for which there is little support, or we tell everyone that we will be out within 10 months, or sooner, and we'll deal with the consequences from afar." See also List of humorous units of measurement References External links Center for American Progress interactive timeline Political neologisms Units of time Iraq War terminology 2006 neologisms
Friedman Unit
[ "Physics", "Mathematics" ]
833
[ "Physical quantities", "Time", "Units of time", "Quantity", "Spacetime", "Units of measurement" ]
7,601,615
https://en.wikipedia.org/wiki/Lancet%20window
A lancet window is a tall, narrow window with a sharp lancet pointed arch at its top. It acquired the "lancet" name from its resemblance to a lance. Instances of this architectural element are typical of Gothic church edifices of the earliest period. Lancet windows may occur singly, or paired under a single moulding, or grouped in an odd number with the tallest window at the centre. The lancet window first appeared in the early French Gothic period (c. 1140–1200), and later in the English period of Gothic architecture (1200–1275). So common was the lancet window feature that this era is sometimes known as the "Lancet Period". The term lancet window is properly applied to windows of austere form, without tracery. Paired windows were sometimes surmounted by a simple opening such as a quatrefoil cut in plate tracery. This form gave way to the more ornate, multi-light traceried window. Examples See also Church window Monofora Polifora References 12th-century introductions Gothic architecture Church architecture Windows Architectural elements Lance
Lancet window
[ "Technology", "Engineering" ]
229
[ "Building engineering", "Architectural elements", "Components", "Architecture" ]
7,603,091
https://en.wikipedia.org/wiki/Canadian%20Centre%20for%20Energy%20Information
The Canadian Centre for Energy Information (CCEI) is a Canadian federal government website and portal that was announced on May 23, 2019. The Canadian Energy Information Portal was launched by Statistics Canada, in partnership with Natural Resources Canada, Environment and Climate Change Canada, and the Canada Energy Regulator. The regularly updated and expanded online interactive site provides a "single point" for accessing information Canadian energy sector including energy production, consumption, international trade, transportation, prices with monthly federal and provincial statistics. According to their website, the portal "supports Statistics Canada's shift toward a more collaborative model to ensure Canadians have access to a broad range of statistics." Funds were allocated in the 2019 Canadian federal budget for "increased access to comparable, consolidated energy data through the creation of a virtual data centre combining information from across federal, provincial and territorial organizations." The federal government's Canadian Centre for Energy Information will be developed using content from the Canadian Energy Information Portal. Historical CCEI The name—Canadian Centre for Energy Information (CCEI)—was previously used by a now-defunct Canadian Association of Petroleum Producers (CAPP)'s in-house information centre, that was established in 2002. The Calgary-based Petroleum Communication Foundation (PCF), which was in existence from 1975 until December 31, 2002, fulfilled a similar mandate to that of the CCEI of creating "awareness and understanding of how the Canadian petroleum industry operated." When CAPP's CCEI was established in 2002, PCF was merged into the new organization. CAPP no longer used the CCEI after 2013. Over the years, the PCF and the CCEI published seven editions of Robert Bott's Our petroleum challenge: into the 21st century under various titles. See also Canadian Energy Centre also known as Canadian Energy Centre Limited (CECL), a $30 million dollar pro-industry Calgary, Alberta-based corporation, established on December 11, 2019, by the Alberta government to improve Alberta's oil and gas reputation and to rebut and rebuke criticism of the fossil fuel industry. Energy Information Administration American counterpart Notes References External links Canadian Centre for Energy Information Petroleum industry in Canada Statistics Canada Natural Resources Canada Energy organizations Energy in Canada Petroleum politics
Canadian Centre for Energy Information
[ "Chemistry", "Engineering" ]
450
[ "Petroleum", "Energy organizations", "Petroleum politics" ]
7,604,268
https://en.wikipedia.org/wiki/AecXML
aecXML (architecture, engineering and construction extensible markup language) is a specific XML markup language which uses Industry Foundation Classes to create a vendor-neutral means to access data generated by building information modeling, BIM. It is being developed for use in the architecture, engineering, construction and facility management industries, in conjunction with BIM software, and is trademarked by the buildingSMART (the former International Alliance for Interoperability), a council of the National Institute of Building Sciences. Specific subsets are being developed, namely: Common Object Schema Infrastructure Structural Facility management Procurement Project Management Plant Operations Building Environmental Performance See also Industry Foundation Classes BuildingSMART BIM Collaboration Format External links National Institute of Building Sciences (NIBS) buildingSMART Alliance (bSa) Model Support Group (MSG) of NIBS bSa Responsible for AEC Industry Foundation Class (IFC) Development since ~2006 Links Obsolete by July 2009 at the latest: North American Chapter of the International Alliance for Interoperability Proposed common objects - a PDF file XML markup languages Building information modeling
AecXML
[ "Engineering" ]
219
[ "Building engineering", "Building information modeling" ]
7,604,357
https://en.wikipedia.org/wiki/Methylidyne%20radical
Methylidyne, or (unsubstituted) carbyne, is an organic compound whose molecule consists of a single hydrogen atom bonded to a carbon atom. It is the parent compound of the carbynes, which can be seen as obtained from it by substitution of other functional groups for the hydrogen. The carbon atom is left with either one or three unpaired electrons (unsatisfied valence bonds), depending on the molecule's excitation state; making it a radical. Accordingly, the chemical formula can be CH• or CH3• (also written as ⫶CH); each dot representing an unpaired electron. The corresponding systematic names are methylylidene or hydridocarbon(•), and methanetriyl or hydridocarbon(3•). However, the formula is often written simply as CH. Methylidyne is a highly reactive gas, that is quickly destroyed in ordinary conditions but is abundant in the interstellar medium (and was one of the first molecules to be detected there). Nomenclature The trivial name carbyne is the preferred IUPAC name. Following the substitutive nomenclature, the molecule is viewed as methane with three hydrogen atoms removed, yielding the systematic name "methylidyne". Following the additive nomenclature, the molecule is viewed as a hydrogen atom bonded to a carbon atom, yielding the name "hydridocarbon". By default, these names pay no regard to the excitation state of the molecule. When that attribute is considered, the states with one unpaired electron are named "methylylidene" or "hydridocarbon(•)", whereas the excited states with three unpaired electrons are named "methanetriyl" or "hydridocarbon(3•)". Bonding As an odd-electron species, CH is a radical. The ground state is a doublet (X2Π). The first two excited states are a quartet (with three unpaired electrons) (a4Σ−) and a doublet (A2Δ). The quartet lies at 71 kJ/mol above the ground state. Reactions of the doublet radical with non-radical species involves insertion or addition: [CH]•(X2Π) + → H• + CO (major) or • whereas reactions of the quartet radical generally involves only abstraction: [CH]3•(a4Σ−) + → + [HO]• Methylidyne can bind to metal atoms as tridentate ligand in coordination complexes. An example is methylidynetricobaltnonacarbonyl . Occurrence and reactivity Fischer–Tropsch intermediate Methylidyne-like species are implied intermediates in the Fischer–Tropsch process, the hydrogenation of CO to produce hydrocarbons. Methylidyne entities are assumed to bond to the catalyst's surface. A hypothetical sequence is: MnCO + H2 → MnCOH MnCOH + H2 → MnCH + H2O MnCH + H2 → MnCH2 The MnCH intermediate has a tridentate methylidine ligand. The methylene ligand (H2C) is then poised couple to CO or to another methylene, thereby growing the C–C chain. Amphotericity The methylylidyne group can exhibit both Lewis acidic and Lewis basic character. Such behavior is only of theoretical interest since it is not possible to produce methylidyne. In interstellar space In October 2016, astronomers reported that the methylidyne radical ⫶CH, the carbon-hydrogen positive ion :CH+, and the carbon ion ⫶C+ are the result of ultraviolet light from stars, rather than in other ways, such as the result of turbulent events related to supernovas and young stars, as thought earlier. Preparation Methylidyne can be prepared from bromoform. See also Methylene group Methylene bridge References Organometallic chemistry Free radicals
Methylidyne radical
[ "Chemistry", "Biology" ]
815
[ "Free radicals", "Senescence", "Organometallic chemistry", "Biomolecules" ]
7,604,764
https://en.wikipedia.org/wiki/Beale%20number
In mechanical engineering, the Beale number is a parameter that characterizes the performance of Stirling engines. It is often used to estimate the power output of a Stirling engine design. For engines operating with a high temperature differential, typical values for the Beale number are in the range 0.11–0.15, where a larger number indicates higher performance. Definition The Beale number can be defined in terms of a Stirling engine's operating parameters: Bn = Wo / (P V F) where: Bn is the Beale number Wo is the power output of the engine (watts) P is the mean average gas pressure (Pa, or MPa if volume is in cm3) V is the swept volume of the power piston (m3, or cm3 if pressure is in MPa) F is the engine cycle frequency (Hz) Estimating Stirling power To estimate the power output of an engine, nominal values are assumed for the Beale number, pressure, swept volume and frequency, then the power is calculated as the product of these parameters: Wo = Bn P V F See also West number References External links Stirling Engine Performance Calculator Beale number calculator Dimensionless numbers of thermodynamics Piston engines Mechanical engineering
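A minimal sketch applying the relation above; the engine parameters are hypothetical, chosen only to illustrate the arithmetic:

```python
# Estimate Stirling engine output from the Beale relation Wo = Bn * P * V * F.
def stirling_power(beale_number, mean_pressure_pa, swept_volume_m3, frequency_hz):
    """Estimated power output in watts."""
    return beale_number * mean_pressure_pa * swept_volume_m3 * frequency_hz

# Hypothetical engine: Bn = 0.13, 5 MPa mean pressure, 100 cm^3 swept volume, 25 Hz.
print(stirling_power(0.13, 5.0e6, 100e-6, 25.0))  # ≈ 1625 W
```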
Beale number
[ "Physics", "Chemistry", "Technology", "Engineering" ]
235
[ "Thermodynamics stubs", "Thermodynamic properties", "Applied and interdisciplinary physics", "Physical quantities", "Dimensionless numbers of thermodynamics", "Engines", "Piston engines", "Thermodynamics", "Mechanical engineering", "Mechanical engineering stubs", "Physical chemistry stubs" ]
7,604,906
https://en.wikipedia.org/wiki/Mantle%20convection
Mantle convection is the very slow creep of Earth's solid silicate mantle as convection currents carry heat from the interior to the planet's surface. Mantle convection causes tectonic plates to move around the Earth's surface. The Earth's lithosphere rides atop the asthenosphere, and the two form the components of the upper mantle. The lithosphere is divided into tectonic plates that are continuously being created or consumed at plate boundaries. Accretion occurs as mantle is added to the growing edges of a plate, associated with seafloor spreading. Upwelling beneath the spreading centers is a shallow, rising component of mantle convection and in most cases not directly linked to the global mantle upwelling. The hot material added at spreading centers cools down by conduction and convection of heat as it moves away from the spreading centers. At the consumption edges of the plate, the material has thermally contracted to become dense, and it sinks under its own weight in the process of subduction usually at an oceanic trench. Subduction is the descending component of mantle convection. This subducted material sinks through the Earth's interior. Some subducted material appears to reach the lower mantle, while in other regions this material is impeded from sinking further, possibly due to a phase transition from spinel to silicate perovskite and magnesiowustite, an endothermic reaction. The subducted oceanic crust triggers volcanism, although the basic mechanisms are varied. Volcanism may occur due to processes that add buoyancy to partially melted mantle, which would cause upward flow of the partial melt as it decreases in density. Secondary convection may cause surface volcanism as a consequence of intraplate extension and mantle plumes. In 1993 it was suggested that inhomogeneities in D" layer have some impact on mantle convection. Types of convection During the late 20th century, there was significant debate within the geophysics community as to whether convection is likely to be "layered" or "whole". Although elements of this debate still continue, results from seismic tomography, numerical simulations of mantle convection and examination of Earth's gravitational field are all beginning to suggest the existence of whole mantle convection, at least at the present time. In this model, cold subducting oceanic lithosphere descends all the way from the surface to the core–mantle boundary (CMB), and hot plumes rise from the CMB all the way to the surface. This model is strongly based on the results of global seismic tomography models, which typically show slab and plume-like anomalies crossing the mantle transition zone. Although it is accepted that subducting slabs cross the mantle transition zone and descend into the lower mantle, debate about the existence and continuity of plumes persists, with important implications for the style of mantle convection. This debate is linked to the controversy regarding whether intraplate volcanism is caused by shallow, upper mantle processes or by plumes from the lower mantle. Many geochemistry studies have argued that the lavas erupted in intraplate areas are different in composition from shallow-derived mid-ocean ridge basalts. Specifically, they typically have elevated helium-3 : helium-4 ratios. Being a primordial nuclide, helium-3 is not naturally produced on Earth. It also quickly escapes from Earth's atmosphere when erupted. 
The elevated He-3:He-4 ratio of ocean island basalts suggests that they must be sourced from a part of the Earth that has not previously been melted and reprocessed in the same way as mid-ocean ridge basalts have been. This has been interpreted as their originating from a different, less well-mixed region, suggested to be the lower mantle. Others, however, have pointed out that geochemical differences could indicate the inclusion of a small component of near-surface material from the lithosphere. Planform and vigour of convection On Earth, the Rayleigh number for convection within Earth's mantle is estimated to be of order 10^7, which indicates vigorous convection. This value corresponds to whole mantle convection (i.e. convection extending from the Earth's surface to the border with the core). On a global scale, the surface expression of this convection is the tectonic plate motions, which therefore have speeds of a few cm per year. Speeds can be faster for small-scale convection occurring in low viscosity regions beneath the lithosphere, and slower in the lowermost mantle where viscosities are larger. A single shallow convection cycle takes on the order of 50 million years, though deeper convection can be closer to 200 million years. Currently, whole mantle convection is thought to include broad-scale downwelling beneath the Americas and the western Pacific, both regions with a long history of subduction, and upwelling flow beneath the central Pacific and Africa, both of which exhibit dynamic topography consistent with upwelling. This broad-scale pattern of flow is also consistent with the tectonic plate motions, which are the surface expression of convection in the Earth's mantle and currently indicate convergence toward the western Pacific and the Americas, and divergence away from the central Pacific and Africa. The persistence of net tectonic divergence away from Africa and the Pacific for the past 250 myr indicates the long-term stability of this general mantle flow pattern and is consistent with other studies that suggest long-term stability of the large low-shear-velocity provinces of the lowermost mantle that form the base of these upwellings. Creep in the mantle Due to the varying temperatures and pressures between the lower and upper mantle, a variety of creep processes can occur, with dislocation creep dominating in the lower mantle and diffusional creep occasionally dominating in the upper mantle. However, there is a large transition region in creep processes between the upper and lower mantle, and even within each section creep properties can change strongly with location and thus temperature and pressure. Since the upper mantle is primarily composed of olivine ((Mg,Fe)2SiO4), the rheological characteristics of the upper mantle are largely those of olivine. The strength of olivine is proportional to its melting temperature, and is also very sensitive to water and silica content. The solidus depression by impurities, primarily Ca, Al, and Na, and pressure affects creep behavior and thus contributes to the change in creep mechanisms with location. While creep behavior is generally plotted as homologous temperature versus stress, in the case of the mantle it is often more useful to look at the pressure dependence of stress. Though stress is simply force over area, defining the area is difficult in geology. Equation 1 demonstrates the pressure dependence of stress.
Since it is very difficult to simulate the high pressures in the mantle (1MPa at 300–400 km), the low pressure laboratory data is usually extrapolated to high pressures by applying creep concepts from metallurgy. Most of the mantle has homologous temperatures of 0.65–0.75 and experiences strain rates of per second. Stresses in the mantle are dependent on density, gravity, thermal expansion coefficients, temperature differences driving convection, and the distance over which convection occurs—all of which give stresses around a fraction of 3-30MPa. Due to the large grain sizes (at low stresses as high as several mm), it is unlikely that Nabarro-Herring (NH) creep dominates; dislocation creep tends to dominate instead. 14 MPa is the stress below which diffusional creep dominates and above which power law creep dominates at 0.5Tm of olivine. Thus, even for relatively low temperatures, the stress diffusional creep would operate at is too low for realistic conditions. Though the power law creep rate increases with increasing water content due to weakening (reducing activation energy of diffusion and thus increasing the NH creep rate) NH is generally still not large enough to dominate. Nevertheless, diffusional creep can dominate in very cold or deep parts of the upper mantle. Additional deformation in the mantle can be attributed to transformation enhanced ductility. Below 400 km, the olivine undergoes a pressure-induced phase transformation, which can cause more deformation due to the increased ductility. Further evidence for the dominance of power law creep comes from preferred lattice orientations as a result of deformation. Under dislocation creep, crystal structures reorient into lower stress orientations. This does not happen under diffusional creep, thus observation of preferred orientations in samples lends credence to the dominance of dislocation creep. Mantle convection in other celestial bodies A similar process of slow convection probably occurs (or occurred) in the interiors of other planets (e.g., Venus, Mars) and some satellites (e.g., Io, Europa, Enceladus). See also Compatibility (geochemistry) – Distribution of trace elements in melt References Plate tectonics Convection Geodynamics
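As a rough cross-check of the order-of-magnitude Rayleigh number quoted above for whole-mantle convection, here is a back-of-the-envelope sketch; the parameter values are generic textbook estimates assumed for illustration, not figures taken from this article:

```python
# Rayleigh number Ra = rho * g * alpha * dT * d**3 / (kappa * eta)
# with representative whole-mantle values (all assumed, order-of-magnitude only).
rho   = 4.0e3    # density, kg/m^3
g     = 10.0     # gravitational acceleration, m/s^2
alpha = 2.0e-5   # thermal expansivity, 1/K
dT    = 3.0e3    # super-adiabatic temperature difference, K
d     = 2.9e6    # depth of the mantle, m
kappa = 1.0e-6   # thermal diffusivity, m^2/s
eta   = 1.0e22   # dynamic viscosity, Pa*s

Ra = rho * g * alpha * dT * d**3 / (kappa * eta)
print(f"Ra ~ {Ra:.0e}")  # of order 1e6-1e7 for eta in the 1e21-1e22 Pa*s range
```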
Mantle convection
[ "Physics", "Chemistry" ]
1,827
[ "Transport phenomena", "Physical phenomena", "Thermodynamics", "Convection" ]
7,606,440
https://en.wikipedia.org/wiki/Efimov%20state
The Efimov effect is an effect in the quantum mechanics of few-body systems predicted by the Russian theoretical physicist V. N. Efimov in 1970. In the Efimov effect, three identical interacting bosons are predicted to support an infinite series of excited three-body energy levels when a two-body state is exactly at the dissociation threshold. One corollary is that there exist bound states (called Efimov states) of three bosons even if the two-particle attraction is too weak to allow two bosons to form a pair. A (three-particle) Efimov state, where the (two-body) sub-systems are unbound, is often depicted symbolically by the Borromean rings. This means that if one of the particles is removed, the remaining two fall apart. In this case, the Efimov state is also called a Borromean state. Theory Pair interactions among three identical bosons will approach resonance as the binding energy of some two-body bound state approaches zero, or equivalently, the s-wave scattering length of the state becomes infinite. In this limit, Efimov predicted that the three-body spectrum exhibits an infinite sequence of bound states whose scattering lengths and binding energies each form a geometric progression: the scattering lengths of successive states scale by the universal constant e^(π/s0) ≈ 22.7, and the binding energies by its square. Here s0 ≈ 1.00624 is the order of the imaginary-order modified Bessel function of the second kind that describes the radial dependence of the wavefunction; by virtue of the resonance-determined boundary conditions, it is the unique positive solution of a transcendental equation. The geometric progression of the energy levels of Efimov states is an example of an emergent discrete scaling symmetry. This phenomenon, exhibiting a renormalization group limit cycle, is closely related to the scale invariance of the form of the quantum mechanical potential of the system. Experimental results In 2005, the research group of Rudolf Grimm and Hanns-Christoph Nägerl at the Institute for Experimental Physics at the University of Innsbruck experimentally confirmed the existence of such a state for the first time in an ultracold gas of caesium atoms. In 2006, they published their findings in the scientific journal Nature. Further experimental support for the existence of the Efimov state has been given recently by independent groups. Almost 40 years after Efimov's purely theoretical prediction, the characteristic periodic behavior of the states has been confirmed. The most accurate experimental value of the scaling factor of the states has been determined by the experimental group of Rudolf Grimm at Innsbruck University. Interest in the "universal phenomena" of cold atomic gases is still growing. The discipline of universality in cold atomic gases near the Efimov states is sometimes referred to as "Efimov physics". The experimental groups of Cheng Chin of the University of Chicago and Matthias Weidemüller of the University of Heidelberg have observed Efimov states in an ultracold mixture of lithium and caesium atoms, extending Efimov's original picture of three identical bosons. An Efimov state existing as an excited state of a helium trimer was observed in an experiment in 2015. Usage The Efimov states are independent of the underlying physical interaction and can in principle be observed in all quantum mechanical systems (i.e. molecular, atomic, and nuclear). The states are very special because of their "non-classical" nature: The size of each three-particle Efimov state is much larger than the force-range between the individual particle pairs.
This means that the state is purely quantum mechanical. Similar phenomena are observed in two-neutron halo-nuclei, such as lithium-11; these are called Borromean nuclei. (Halo nuclei could be seen as special Efimov states, depending on the subtle definitions.) See also Three-body force References External links Press release about the experimental confirmation (2006.03.16) Overwhelming proof for Efimov State that's become a hotbed for research some 40 years after it first appeared (2009.12.14) Observation of the Second Triatomic Resonance in Efimov’s Scenario (2014.05.15) Quantum mechanics
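To make the discrete scaling concrete, here is a small sketch (not from the article) of the geometric progression of binding energies at unitarity, assuming the commonly quoted universal values; the starting energy is arbitrary, since only the ratios are universal:

```python
# Geometric progression of Efimov trimer binding energies at unitarity.
# s0 ≈ 1.00624 gives the scattering-length scaling factor exp(pi/s0) ≈ 22.7;
# binding energies of successive states shrink by its square (≈ 515).
import math

s0 = 1.00624
length_ratio = math.exp(math.pi / s0)   # ≈ 22.7
energy_ratio = length_ratio ** 2        # ≈ 515

E0 = 1.0                                # arbitrary units for the lowest state
energies = [E0 / energy_ratio ** n for n in range(4)]
print(round(length_ratio, 1), [f"{E:.2e}" for E in energies])
```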
Efimov state
[ "Physics" ]
879
[ "Theoretical physics", "Quantum mechanics" ]
7,606,557
https://en.wikipedia.org/wiki/Cement%20board
A cement board is a combination of cement and reinforcing fibers formed into sheets, of varying thickness that are typically used as a tile backing board. Cement board can be nailed or screwed to wood or steel studs to create a substrate for vertical tile and attached horizontally to plywood for tile floors, kitchen counters and backsplashes. It can be used on the exterior of buildings as a base for exterior plaster (stucco) systems and sometimes as the finish system itself. Cement board adds impact resistance and strength to the wall surface as compared to water resistant gypsum boards. Cement board is also fabricated in thin sheets with polymer modified cements to allow bending for curved surfaces. Composition Cement boards are mainly cement bonded particle boards and Fibre cement. Cement bonded particle boards have treated wood flakes as reinforcement, whereas cement fibre boards have cellulose fibre, which is a plant extract as reinforcement. Cement acts as binder in both the cases. The fire resistance properties of cement bonded blue particle boards and cement fibre boards are the same. In terms of load-bearing capacity, cement-bonded particle boards have higher capacity than cement fibre boards. Cement particle boards are manufactured from thickness making it suitable for high load bearing applications. These boards are made of a homogeneous mixture and hence are formed as single layer for any thickness. Cement fibre boards are more used in decorative applications and can be manufactured from thickness. Fibre boards are made in very thin layers, making it extremely difficult to manufacture high thickness boards. Many manufacturers use additives like mica, aluminium stearate and cenospheres in order to achieve certain board qualities. Typical cement fiber board is made of approximately 40-60% of cement, 20-30% of fillers, 8-10% of cellulose, 10-15% of mica. Other additives like above mentioned aluminium stearate and PVA are normally used in quantities less than 1%. Cenospheres are used only in low density boards with quantities between 10 and 15%. The actual recipe depends on available raw materials and other local factors. Advantages As a tile backing board, cement board has better long-term performance than paper-faced gypsum core products because it will not mildew or physically break down in the continued presence of moisture or leaks. Also cement board provides a stronger bond and support with tiles than typical gypsum board. Cement board is not waterproof. It absorbs moisture as well, but it has excellent drying properties. In areas continually exposed to water spray (i.e., showers) a waterproofing material is usually recommended behind the boards (i.e., plastic barrier) or as a trowel-applied product to the face of the boards behind the finish system (i.e., liquid membrane). Disadvantages One major disadvantage of cement board is the weight per square foot. It is approximately twice that of gypsum board, making handling by one person difficult. Cutting of cement board must also be done with carbide-tipped tools and saw blades. Due to its hardness, pre-drilling of fasteners is often recommended. Finally, cement board is initially more expensive than water resistant gypsum board but may provide better long term value. Installation Cement board is hung with corrosion resistant screws or ring-shank nails. Cement board has very little movement under thermal stress, but the boards are usually installed with a slight gap at joints in shower pans, bathtubs, and each other. 
These joints are then filled with silicone sealant or the manufacturer's taping compounds before applying a finish. The filled joints are taped like conventional gypsum board, but with fiberglass tapes that provide additional water resistance. Combined with a water impermeable finish, cement board is a stable, durable backing board. Water resistance There is a class of cement board strictly constructed of a Portland cement based core with glass fiber mat reinforcing at both faces. These panels can be immersed in water without any degradation (excluding freeze thaw cycles). These panels do not require the sealing of edges and penetrations to maintain their structural integrity. These Portland cement based products are smaller in size compared with the gypsum core based products. Typically they range in size from to . They are, as one would expect, considerably heavier than the gypsum core type panels. Portland cement based panels are ideal for truly wet locations like shower surrounds and for locations where a Portland cement based thin-set material is used for bonding tile and stone surfaces to a substrate. They are also ideal for floor tile and stone installations over a structural subfloor. Cement boards may be classified as water resistant as they are not affected by water exposure; however, they do allow penetration and passage of water and water vapor. To waterproof cement boards, a liquid or membrane waterproofing material is applied over its surface. Cement boards should not be confused with gypsum core backer boards. Gypsum core backer boards are affected by water and should not be used on wet exposure areas. See also Clay panel Drywall Fiber cement siding Fibre cement Thermasave, an insulated structural panel References External links Why is Cement Backer Board So Great? (And What to Use it For! By Lee Wallender, About.com iVillage Home & Garden Network, What are pro's and con's of Wonderboard vs Hardybacker National Gypsum PERMABASE BRAND CEMENT BOARD Building materials Cement Engineered wood Wallcoverings
Cement board
[ "Physics", "Engineering" ]
1,120
[ "Building engineering", "Construction", "Materials", "Building materials", "Matter", "Architecture" ]
7,609,851
https://en.wikipedia.org/wiki/Malate%E2%80%93aspartate%20shuttle
The malate–aspartate shuttle (sometimes simply the malate shuttle) is a biochemical system for translocating electrons produced during glycolysis across the semipermeable inner membrane of the mitochondrion for oxidative phosphorylation in eukaryotes. These electrons enter the electron transport chain of the mitochondria via reduction equivalents to generate ATP. The shuttle system is required because the mitochondrial inner membrane is impermeable to NADH, the primary reducing equivalent of the electron transport chain. To circumvent this, malate carries the reducing equivalents across the membrane. Components The shuttle consists of four protein parts: malate dehydrogenase in the mitochondrial matrix and intermembrane space. aspartate aminotransferase in the mitochondrial matrix and intermembrane space. malate-alpha-ketoglutarate antiporter in the inner membrane. glutamate-aspartate antiporter in the inner membrane. Mechanism The primary enzyme in the malate–aspartate shuttle is malate dehydrogenase. Malate dehydrogenase is present in two forms in the shuttle system: mitochondrial malate dehydrogenase and cytosolic malate dehydrogenase. The two malate dehydrogenases are differentiated by their location and structure, and catalyze their reactions in opposite directions in this process. First, in the cytosol, malate dehydrogenase catalyses the reaction of oxaloacetate and NADH to produce malate and NAD+. In this process, two electrons generated from NADH, and an accompanying H+, are attached to oxaloacetate to form malate. Once malate is formed, the first antiporter (malate-alpha-ketoglutarate) imports the malate from the cytosol into the mitochondrial matrix and also exports alpha-ketoglutarate from the matrix into the cytosol simultaneously. After malate reaches the mitochondrial matrix, it is converted by mitochondrial malate dehydrogenase into oxaloacetate, during which NAD+ is reduced with two electrons to form NADH. Oxaloacetate is then transformed into aspartate (since oxaloacetate cannot be transported into the cytosol) by mitochondrial aspartate aminotransferase. Since aspartate is an amino acid, an amino radical needs to be added to the oxaloacetate. This is supplied by glutamate, which in the process is transformed into alpha-ketoglutarate by the same enzyme. The second antiporter (AGC1 or AGC2) imports glutamate from the cytosol into the matrix and exports aspartate from the matrix to the cytosol. Once in the cytosol, aspartate is converted by cytosolic aspartate aminotransferase to oxaloacetate. The net effect of the malate–aspartate shuttle is purely redox: NADH in the cytosol is oxidized to NAD+, and NAD+ in the matrix is reduced to NADH. The NAD+ in the cytosol can then be reduced again by another round of glycolysis, and the NADH in the matrix can be used to pass electrons to the electron transport chain so ATP can be synthesized. Since the malate–aspartate shuttle regenerates NADH inside the mitochondrial matrix, it is capable of maximizing the number of ATPs produced in glycolysis (3/NADH), ultimately resulting in a net gain of 38 ATP molecules per molecule of glucose metabolized. Compare this to the glycerol 3-phosphate shuttle, which reduces FAD+ to produce FADH2, donates electrons to the quinone pool in the electron transport chain, and is capable of generating only 2 ATPs per NADH generated in glycolysis (ultimately resulting in a net gain of 36 ATPs per glucose metabolized). 
(These ATP numbers are prechemiosmotic, and should be reduced in light of the work of Mitchell and many others. Each NADH produces only 2.5 ATPs, and each FADH2 produces only 1.5 ATPs. Hence, the ATPs per glucose should be reduced to 32 from 38 and 30 from 36. The extra H+ required to bring in the inorganic phosphate during oxidative-phosphorylation contributes to the 30 and 32 numbers as well). Regulation The activity of malate–aspartate shuttle is modulated by arginine methylation of malate dehydrogenase 1 (MDH1). Protein arginine N-methyltransferase CARM1 methylates and inhibits MDH1 by disrupting its dimerization, which represses malate–aspartate shuttle and inhibits mitochondria respiration of pancreatic cancer cells. Interactive pathway map See also Glycerol phosphate shuttle Mitochondrial shuttle References Biochemical reactions Cellular respiration
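A small bookkeeping sketch (not part of the article) that reproduces the ATP-per-glucose figures quoted above for the two shuttles, under both the classic (3/2) and chemiosmotic (2.5/1.5) yields per reducing equivalent; the per-step counts in the comments are the standard textbook values and are assumed here for illustration:

```python
# ATP yield per glucose, depending on which shuttle carries the two
# cytosolic NADH from glycolysis into the mitochondrion.
def atp_per_glucose(atp_per_nadh, atp_per_fadh2, shuttle):
    cytosolic_nadh = 2        # glycolysis
    mitochondrial_nadh = 8    # pyruvate dehydrogenase (2) + citric acid cycle (6)
    mitochondrial_fadh2 = 2   # citric acid cycle
    substrate_level_atp = 4   # glycolysis (2) + citric acid cycle (2 GTP)
    if shuttle == "malate-aspartate":        # cytosolic NADH re-enters as NADH
        shuttle_yield = cytosolic_nadh * atp_per_nadh
    else:                                    # glycerol 3-phosphate: enters as FADH2
        shuttle_yield = cytosolic_nadh * atp_per_fadh2
    return (substrate_level_atp
            + mitochondrial_nadh * atp_per_nadh
            + mitochondrial_fadh2 * atp_per_fadh2
            + shuttle_yield)

print(atp_per_glucose(3, 2, "malate-aspartate"))         # 38 (classic count)
print(atp_per_glucose(3, 2, "glycerol-3-phosphate"))     # 36
print(atp_per_glucose(2.5, 1.5, "malate-aspartate"))     # 32 (chemiosmotic count)
print(atp_per_glucose(2.5, 1.5, "glycerol-3-phosphate")) # 30
```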
Malate–aspartate shuttle
[ "Chemistry", "Biology" ]
1,050
[ "Biochemistry", "Cellular respiration", "Metabolism", "Biochemical reactions" ]
7,609,997
https://en.wikipedia.org/wiki/Orbit%20determination
Orbit determination is the estimation of orbits of objects such as moons, planets, and spacecraft. One major application is to allow tracking newly observed asteroids and verify that they have not been previously discovered. The basic methods were discovered in the 17th century and have been continuously refined. Observations are the raw data fed into orbit determination algorithms. Observations made by a ground-based observer typically consist of time-tagged azimuth, elevation, range, and/or range rate values. Telescopes or radar apparatus are used, because naked-eye observations are inadequate for precise orbit determination. With more or better observations, the accuracy of the orbit determination process also improves, and fewer "false alarms" result. After orbits are determined, mathematical propagation techniques can be used to predict the future positions of orbiting objects. As time goes by, the actual path of an orbiting object tends to diverge from the predicted path (especially if the object is subject to difficult-to-predict perturbations such as atmospheric drag), and a new orbit determination using new observations serves to re-calibrate knowledge of the orbit. Satellite tracking is another major application. For the United States and partner countries, to the extent that optical and radar resources allow, the Joint Space Operations Center gathers observations of all objects in Earth orbit. The observations are used in new orbit determination calculations that maintain the overall accuracy of the satellite catalog. Collision avoidance calculations may use this data to calculate the probability that one orbiting object will collide with another. A satellite's operator may decide to adjust the orbit, if the risk of collision in the present orbit is unacceptable. (It is not possible to adjust the orbit for events of very low probability; it would soon use up the propellant the satellite carries for orbital station-keeping.) Other countries, including Russia and China, have similar tracking assets. History Orbit determination has a long history, beginning with the prehistoric discovery of the planets and subsequent attempts to predict their motions. Johannes Kepler used Tycho Brahe's careful observations of Mars to deduce the elliptical shape of its orbit and its orientation in space, deriving his three laws of planetary motion in the process. The mathematical methods for orbit determination originated with the publication in 1687 of the first edition of Newton's Principia, which gave a method for finding the orbit of a body following a parabolic path from three observations. This was used by Edmund Halley to establish the orbits of various comets, including that which bears his name. Newton's method of successive approximation was formalised into an analytic method by Euler in 1744, whose work was in turn generalised to elliptical and hyperbolic orbits by Lambert in 1761–1777. Another milestone in orbit determination was Carl Friedrich Gauss's assistance in the "recovery" of the dwarf planet Ceres in 1801. Gauss's method was able to use just three observations (in the form of celestial coordinates) to find the six orbital elements that completely describe an orbit. The theory of orbit determination has subsequently been developed to the point where today it is applied in GPS receivers as well as the tracking and cataloguing of newly observed minor planets. 
Observational data In order to determine the unknown orbit of a body, some observations of its motion with time are required. In early modern astronomy, the only available observational data for celestial objects were the right ascension and declination, obtained by observing the body as it moved in its observation arc, relative to the fixed stars, using an optical telescope. This corresponds to knowing the object's relative direction in space, measured from the observer, but without knowledge of the distance of the object, i.e. the resultant measurement contains only direction information, like a unit vector. With radar, relative distance measurements (by timing of the radar echo) and relative velocity measurements (by measuring the Doppler effect of the radar echo) are possible using radio telescopes. However, the returned signal strength from radar decreases rapidly, as the inverse fourth power of the range to the object. This generally limits radar observations to objects relatively near the Earth, such as artificial satellites and Near-Earth objects. Larger apertures permit tracking of transponders on interplanetary spacecraft throughout the solar system, and radar astronomy of natural bodies. Various space agencies and commercial providers operate tracking networks to provide these observations. See :Category:Deep space networks for a partial listing. Space-based tracking of satellites is also regularly performed. See List of radio telescopes#Space-based and Space Network. Methods Orbit determination must take into account that the apparent celestial motion of the body is influenced by the observer's own motion. For instance, an observer on Earth tracking an asteroid must take into account the motion of the Earth around the Sun, the rotation of the Earth, and the observer's local latitude and longitude, as these affect the apparent position of the body. A key observation is that (to a close approximation) all objects move in orbits that are conic sections, with the attracting body (such as the Sun or the Earth) in the prime focus, and that the orbit lies in a fixed plane. Vectors drawn from the attracting body to the body at different points in time will all lie in the orbital plane. If the position and velocity relative to the observer are available (as is the case with radar observations), these observational data can be adjusted by the known position and velocity of the observer relative to the attracting body at the times of observation. This yields the position and velocity with respect to the attracting body. If two such observations are available, along with the time difference between them, the orbit can be determined using Lambert's method, invented in the 18th century. See Lambert's problem for details. Even if no distance information is available, an orbit can still be determined if three or more observations of the body's right ascension and declination have been made. Gauss's method, made famous in his 1801 "recovery" of the first lost minor planet, Ceres, has been subsequently polished. One use is in the determination of asteroid masses via the dynamic method. In this procedure Gauss's method is used twice, both before and after a close interaction between two asteroids. After both orbits have been determined the mass of one or both of the asteroids can be worked out. 
Orbit determination from a state vector The basic orbit determination task is to determine the classical orbital elements or Keplerian elements, (a, e, i, Ω, ω, ν), from the orbital state vectors [r, v] of an orbiting body with respect to the reference frame of its central body. The central bodies are the sources of the gravitational forces, like the Sun, Earth, Moon and other planets. The orbiting bodies, on the other hand, include planets around the Sun, artificial satellites around the Earth, and spacecraft around planets. Newton's laws of motion explain the trajectory of an orbiting body, known as a Keplerian orbit. The steps of orbit determination from one state vector are summarized as follows: Compute the specific angular momentum of the orbiting body from its state vector: h = r × v = h ẑ, where ẑ is the unit vector of the z-axis of the orbital plane. The specific angular momentum is a constant vector for an orbiting body, with its direction perpendicular to the orbital plane of the orbiting body. Compute the ascending node vector from n = Ẑ × h, with Ẑ representing the unit vector of the Z-axis of the reference frame, which is perpendicular to the reference plane of the central body. The ascending node vector is a vector pointing from the central body to the ascending node of the orbital plane of the orbiting body. Since the line of ascending node is the line of intersection between the orbital plane and the reference plane, it is perpendicular to both the normal vectors of the reference plane (Ẑ) and the orbital plane (ẑ or h). Therefore, the ascending node vector can be defined by the cross product of these two vectors. Compute the eccentricity vector e of the orbit. The eccentricity vector has the magnitude of the eccentricity, e, of the orbit, and points to the direction of the periapsis of the orbit. This direction is often defined as the x-axis of the orbital plane and has a unit vector x̂. According to the law of motion, it can be expressed as: e = [(v² − μ/r) r − (r · v) v] / μ, where μ = GM is the standard gravitational parameter for the central body of mass M, and G is the universal gravitational constant. Compute the semi-latus rectum of the orbit, p = h²/μ, and its semi-major axis a (if it is not a parabolic orbit, where e = 1 and a is undefined or defined as infinity): a = p/(1 − e²) (if e ≠ 1). Compute the inclination i of the orbital plane with respect to the reference plane: cos i = hZ/h, where hZ is the Z-coordinate of h when it is projected to the reference frame. Compute the longitude of ascending node Ω, which is the angle between the ascending line and the X-axis of the reference frame: cos Ω = nX/n, where nX and nY are the X- and Y- coordinates, respectively, of n in the reference frame. Notice that cos Ω = cos(−Ω), but the arccosine is defined only in [0,180] degrees. So Ω is ambiguous in that there are two angles, Ω and 360° − Ω, in [0,360] that have the same cosine, and the arccosine could actually return either angle. Therefore, we have to make the judgment based on the sign of the Y-coordinate of the vector in the plane where the angle is measured. In this case, nY can be used for such judgment: if nY < 0, then Ω = 360° − arccos(nX/n). Compute the argument of periapsis ω, which is the angle between the periapsis and the ascending line: cos ω = (n · e)/(n e), with ω = 360° − arccos[(n · e)/(n e)] if eZ < 0, where eZ is the Z-coordinate of e in the reference frame. Compute the true anomaly ν at epoch, which is the angle between the position vector and the periapsis at the particular time ('epoch') of observation: cos ν = (e · r)/(e r). The sign of r · v can be used to check the quadrant of ν and correct the angle, because it has the same sign as the fly-path angle φ. And the sign of the fly-path angle is always positive when 0° < ν < 180°, and negative when 180° < ν < 360°. Both are related by r · v = r v sin φ and tan φ = e sin ν / (1 + e cos ν).
Optionally, we may compute the argument of latitude u at epoch, which is the angle between the position vector and the ascending line at the particular time: cos u = (n · r)/(n r), with u = 360° − arccos[(n · r)/(n r)] if rZ < 0, where rZ is the Z-coordinate of r in the reference frame. References Further reading Curtis, H.; Orbital Mechanics for Engineering Students, Chapter 5; Elsevier (2005). Taff, L.; Celestial Mechanics, Chapters 7, 8; Wiley-Interscience (1985). Bate, Mueller, White; Fundamentals of Astrodynamics, Chapters 2, 5; Dover (1971). Madonna, R.; Orbital Mechanics, Chapter 3; Krieger (1997). Schutz, Tapley, Born; Statistical Orbit Determination, Academic Press. Satellite Orbit Determination, Coastal Bend College, Texas Astrometry Spaceflight technology Orbits Astrodynamics
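A compact sketch of the procedure above; it is illustrative only, assumes an elliptical, non-equatorial, non-circular orbit (no special-case handling), and the sample state vector and the use of Earth's gravitational parameter in km-based units are assumptions, not values from the article:

```python
# Classical orbital elements (a, e, i, Omega, omega, nu) from one state vector.
import numpy as np

def elements_from_state(r, v, mu=398600.4418):          # km, km/s, km^3/s^2 (Earth)
    r = np.asarray(r, float); v = np.asarray(v, float)
    rn = np.linalg.norm(r)
    h = np.cross(r, v)                                   # specific angular momentum
    n = np.cross([0.0, 0.0, 1.0], h)                     # ascending node vector
    e_vec = ((v @ v - mu / rn) * r - (r @ v) * v) / mu   # eccentricity vector
    e = np.linalg.norm(e_vec)
    p = h @ h / mu                                       # semi-latus rectum
    a = p / (1.0 - e**2)                                 # semi-major axis
    i = np.degrees(np.arccos(h[2] / np.linalg.norm(h)))
    Omega = np.degrees(np.arccos(n[0] / np.linalg.norm(n)))
    if n[1] < 0: Omega = 360.0 - Omega                   # quadrant from n_Y
    omega = np.degrees(np.arccos(n @ e_vec / (np.linalg.norm(n) * e)))
    if e_vec[2] < 0: omega = 360.0 - omega               # quadrant from e_Z
    nu = np.degrees(np.arccos(e_vec @ r / (e * rn)))
    if r @ v < 0: nu = 360.0 - nu                        # quadrant from sign of r.v
    return a, e, i, Omega, omega, nu

# Hypothetical low-Earth-orbit state vector:
print(elements_from_state([7000.0, -500.0, 300.0], [1.0, 7.3, 1.2]))
```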
Orbit determination
[ "Astronomy", "Engineering" ]
2,166
[ "Astrodynamics", "Astrometry", "Astronomical sub-disciplines", "Aerospace engineering" ]
7,611,764
https://en.wikipedia.org/wiki/Hall%20algebra
In mathematics, the Hall algebra is an associative algebra with a basis corresponding to isomorphism classes of finite abelian p-groups. It was first discussed by Ernst Steinitz but forgotten until it was rediscovered by Philip Hall, both of whom published no more than brief summaries of their work. The Hall polynomials are the structure constants of the Hall algebra. The Hall algebra plays an important role in the theory of Masaki Kashiwara and George Lusztig regarding canonical bases in quantum groups. Claus Michael Ringel generalized Hall algebras to more general categories, such as the category of representations of a quiver. Construction A finite abelian p-group M is a direct sum of cyclic p-power components C_{p^λ1} ⊕ C_{p^λ2} ⊕ …, where λ = (λ1, λ2, …) is a partition called the type of M. Let g^λ_{μν}(p) be the number of subgroups N of M such that N has type ν and the quotient M/N has type μ. Hall proved that the functions g are polynomial functions of p with integer coefficients. Thus we may replace p with an indeterminate q, which results in the Hall polynomials g^λ_{μν}(q). Hall next constructs an associative ring H over Z[q], now called the Hall algebra. This ring has a basis consisting of the symbols u_λ, and the structure constants of the multiplication in this basis are given by the Hall polynomials: u_μ u_ν = Σ_λ g^λ_{μν}(q) u_λ. It turns out that H is a commutative ring, freely generated by the elements u_{(1^n)} corresponding to the elementary p-groups. The linear map from H to the algebra of symmetric functions defined on the generators by the formula u_{(1^n)} ↦ q^{−n(n−1)/2} e_n (where e_n is the nth elementary symmetric function) uniquely extends to a ring homomorphism, and the images of the basis elements u_λ may be interpreted via the Hall–Littlewood symmetric functions. Specializing q to 0, these symmetric functions become Schur functions, which are thus closely connected with the theory of Hall polynomials. References George Lusztig, Quivers, perverse sheaves, and quantized enveloping algebras, Journal of the American Mathematical Society 4 (1991), no. 2, 365–421. Algebras Invariant theory Symmetric functions
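A small worked example, added here for illustration under the product convention sketched above: for the two partitions of 2 one finds u_(1) u_(1) = u_(2) + (q + 1) u_(1,1), since the cyclic group of order p² contains exactly one subgroup of type (1) with quotient of type (1), while the elementary group (Z/p)² contains p + 1 such subgroups.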
Hall algebra
[ "Physics", "Mathematics" ]
408
[ "Symmetry", "Mathematical structures", "Group actions", "Algebras", "Symmetric functions", "Invariant theory", "Algebraic structures", "Algebra" ]
16,674,008
https://en.wikipedia.org/wiki/Radenska
Radenska is a Slovenia-based worldwide known brand of mineral water, trademark of Radenska d.o.o. company. It is one of the oldest Slovenian brands. Brand history Development of the mineral water company started at Radenci in 1869, when Karl Henn, owner of the land, filled the first bottles of mineral water. By the end of the century, the company had sold over 1 million bottles. Three hearts Mineral water brand name Radenska Three Hearts (Radenska Tri srca) has been in use since 1936. It was designed in 1931 by the illustrator Milko Bambič. According to the author, the three hearts symbolised three former nations of the Kingdom of Yugoslavia: Serbs, Croats, and Slovenes. Sponsorship The company is the title sponsor of UCI Continental cycling team . See also Donat Mg References External links Radenska homepage Mineral water Bottled water brands Slovenian brands 1869 introductions 1869 establishments in Austria-Hungary Slovenian drinks
Radenska
[ "Chemistry" ]
198
[ "Mineral water" ]
16,677,505
https://en.wikipedia.org/wiki/Mass-analyzed%20ion-kinetic-energy%20spectrometry
Mass-analyzed ion kinetic-energy spectrometry (MIKES) is a mass spectrometry technique by which mass spectra are obtained from a sector instrument that incorporates at least one magnetic sector plus one electric sector in reverse geometry (the beam first enters the magnetic sector). The accelerating voltage V, and the magnetic field B, are set to select the precursor ions of a particular m/z. The precursor ions then dissociate or react in an electric field-free region between the two sectors. The ratio of the kinetic energy to charge of the product ions is analyzed by scanning the electric sector field E. The width of the product ion spectrum peaks is related to the kinetic energy release distribution for the dissociation process. History MIKES was developed at Purdue University in 1973 by Beynon, Cooks, J. W. Amy, W. E. Baitinger, and T. Y. Ridley. MIKES was invented because researchers at Purdue and Cornell thought that if the parent ion was mass-selected before the dissociation and mass analysis of the products by the electric sector, it would be easier to study metastable ions and collision-induced dissociation (CID). This was an achievement because it combined the utility of previous instruments such as the ion kinetic energy spectrometer with the ability to mass select precursor ions. That precursor ion is mass selected with the magnetic sector. The dissociation products are then mass analyzed using the electric sector. "The peak shapes revealed from the electric sector scan can provide information on the kinetic energy release in the course of fragmentation and on the kinetic energy uptake in the course of ionic collision processes." The dispersion of velocities due to kinetic energy release leads to the characteristic wide metastable peaks observed using MIKES techniques. Application MIKES is a powerful technique used for structural studies of organic compounds, gaseous ions, and also for direct analysis of complex mixtures without separation of the components. In other words, it is used for molecular structure studies. MIKES is well suited to molecular structure studies because of its reverse geometry. In a MIKES instrument, the ion species formed in the source first passes into the magnetic field, after which the chemistry is studied in the second field-free region (FFR) by scanning the electric sector, which defines the nature of the fragments by measuring their kinetic energy. This allows competitive unimolecular fragmentations to be observed in the MIKE spectra. Furthermore, if gas is brought into the second FFR, more dissociation will be induced by collision, which will later appear in the MIKE spectra. Tandem MS scan This scan uses reverse-geometry (BE-type) instruments. These instruments use a front-end magnetic sector that allows for exclusive mass selection of the precursor ion. The fragmentation region is in-between the two analyzers. The electric sector scan gives the product-ion spectrum. MIKES can also be used for direct measurement of kinetic-energy release values. Advantages MIKES, as the name implies, is used for kinetic energy spectrometry. This means that certain criteria are needed to accomplish it. One such feature of MIKES is that it has high kinetic energy resolution and good angular resolution. This is due to the fact that MIKES has a low accelerating voltage, around 3 kilovolts. Another feature is that it has good differential pumping between the various regions of the instrument.
In addition, MIKES has multiple systems for introducing and controlling collision gases or vapors, and the ability to vary slit height and width. This prevents bias when determining kinetic energy distributions. Although commonplace now, in the 1970s MIKES offered good computer compatibility, which made molecular structure information readily obtainable. Disadvantages A disadvantage of MIKES is that observations are made later in the ion flight path than in other methods, so a smaller number of ions will typically have decomposed; this in turn makes the sensitivity lower than that of other kinetic energy spectroscopy methods. See also Gas phase ion chemistry Unimolecular ion decomposition R. Graham Cooks References Further reading Mass spectrometry Spectroscopy
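As a brief, hedged illustration of the electric-sector scan described above: a fragment formed in the field-free region keeps essentially the velocity of its mass-selected precursor, so a singly charged product ion carries kinetic energy in proportion to its mass and is transmitted at an electric sector field scaled by m(fragment)/m(precursor). The Python sketch below is editorial and illustrative only; the function name and the m/z values are hypothetical, not taken from the article.

# Hedged sketch: predict where singly charged fragments appear in a MIKES
# electric-sector scan, relative to the precursor setting. A fragment keeps the
# precursor's velocity, so its kinetic energy scales as m_fragment / m_precursor.
def mikes_peak_positions(precursor_mz, fragment_mzs, e_precursor=1.0):
    """Electric-sector setting (fraction of the precursor setting) per fragment."""
    return {m: e_precursor * m / precursor_mz for m in fragment_mzs}

if __name__ == "__main__":
    # Hypothetical precursor at m/z 200 losing neutrals of mass 28 and 44.
    for mz, e in sorted(mikes_peak_positions(200, [172, 156]).items(), reverse=True):
        print(f"fragment m/z {mz}: transmitted at E = {e:.3f} x E(precursor)")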
Mass-analyzed ion-kinetic-energy spectrometry
[ "Physics", "Chemistry" ]
851
[ "Molecular physics", "Spectrum (physical sciences)", "Instrumental analysis", "Mass", "Mass spectrometry", "Spectroscopy", "Matter" ]
196,141
https://en.wikipedia.org/wiki/Electrician
An electrician is a tradesperson specializing in electrical wiring of buildings, transmission lines, stationary machines, and related equipment. Electricians may be employed in the installation of new electrical components or the maintenance and repair of existing electrical infrastructure. Electricians may also specialize in wiring ships, airplanes, and other mobile platforms, as well as data and cable lines. Terminology Electricians were originally people who demonstrated or studied the principles of electricity, often electrostatic generators of one form or another. In the United States, electricians are divided into two primary categories: linepersons, who work on electric utility company distribution systems at higher voltages, and wiremen, who work with the lower voltages utilized inside buildings. Wiremen are generally trained in one of five primary specialties: commercial, residential, light industrial, industrial, and low-voltage wiring, more commonly known as Voice-Data-Video, or VDV. Other sub-specialties such as control wiring and fire alarm work may be performed by specialists trained in the devices being installed, or by inside wiremen. Electricians are trained to one of three levels: Apprentice, Journeyperson, and Master Electrician. In the US and Canada, apprentices work and receive reduced compensation while learning their trade. They generally take several hundred hours of classroom instruction and are contracted to follow apprenticeship standards for a period of between three and six years, during which time they are paid as a percentage of the Journeyperson's pay. Journeymen are electricians who have completed their Apprenticeship and who have been found by the local, State, or National licensing body to be competent in the electrical trade. Master Electricians have performed well in the trade for a period of time, often seven to ten years, and have passed an exam to demonstrate superior knowledge of the National Electrical Code, or NEC. Service electricians are tasked to respond to requests for isolated repairs and upgrades. They have skills troubleshooting wiring problems, installing wiring in existing buildings, and making repairs. Construction electricians primarily focus on larger projects, such as installing an entirely new electrical system for an entire building, or upgrading an entire floor of an office building as part of a remodeling process. Other specialty areas are marine electricians, research electricians and hospital electricians. "Electrician" is also used as the name of a role in stagecraft, where electricians are tasked primarily with hanging, focusing, and operating stage lighting. In this context, the Master Electrician is the show's chief electrician. Although theater electricians routinely perform electrical work on stage lighting instruments and equipment, they are not part of the electrical trade and have a different set of skills and qualifications from the electricians who work on building wiring. In the film industry and on a television crew the head electrician is referred to as a Gaffer. Electrical contractors are businesses that employ electricians to design, install, and maintain electrical systems. Contractors are responsible for generating bids for new jobs, hiring tradespeople for the job, providing material to electricians in a timely manner, and communicating with architects, electrical and building engineers, and the customer to plan and complete the finished product. 
Training and regulation of trade Many jurisdictions have regulatory restrictions concerning electrical work for safety reasons due to the many hazards of working with electricity. Such requirements may be testing, registration or licensing. Licensing requirements vary between jurisdictions. Australia An electrician's license entitles the holder to carry out all types of electrical installation work in Australia without supervision. However, to contract, or offer to contract, to carry out electrical installation work, a licensed electrician must also be registered as an electrical contractor. Under Australian law, electrical work that involves fixed wiring is strictly regulated and must almost always be performed by a licensed electrician or electrical contractor. A local electrician can handle a range of work including air conditioning, light fittings and installation, safety switches, smoke alarm installation, inspection and certification and testing and tagging of electrical appliances. To provide data, structured cabling systems, home automation & theatre, LAN, WAN and VPN data solutions or phone points, an installer must be licensed as a Telecommunications Cable Provider under a scheme controlled by the Australian Communications and Media Authority. Electrical licensing in Australia is regulated by the individual states. In Western Australia, the Department of Commerce tracks licensees and allows the public to search for individually named/licensed electricians. Currently in Victoria the apprenticeship lasts for four years; during three of those years the apprentice attends trade school in either a block release of one week each month or one day each week. At the end of the apprenticeship the apprentice is required to pass three examinations, one of which is theory based with the other two practically based. Upon successful completion of these exams, providing all other components of the apprenticeship are satisfactory, the apprentice is granted an A Class licence on application to Energy Safe Victoria (ESV). An A Class electrician may perform work unsupervised but is unable to work for profit or gain without having the further qualifications necessary to become a Registered Electrical Contractor (REC) or being in the employment of a person holding REC status. However, some exemptions do exist. In most cases a certificate of electrical safety must be submitted to the relevant body after any electrical works are performed. Safety equipment used and worn by electricians in Australia (including insulated rubber gloves and mats) needs to be tested regularly to ensure it is still protecting the worker. Because of the high risk involved in this trade, this testing needs to be performed regularly, and regulations vary according to state. Industry best practice is the Queensland Electrical Safety Act 2002, which requires six-monthly testing. Canada Training of electricians follows an apprenticeship model, taking four or five years to progress to fully qualified journeyperson level. Typical apprenticeship programs consist of 80-90% hands-on work under the supervision of journeymen and 10-20% classroom training. Training and licensing of electricians is regulated by each province; however, professional licenses are valid throughout Canada under the Agreement on Internal Trade. An endorsement under the Red Seal Program provides additional competency assurance to industry standards. In order to become licensed electricians, individuals need to have 9,000 hours of practical, on-the-job training. 
They also need to attend school for 4 terms and pass a provincial exam. This training enables them to become journeyperson electricians. Furthermore, in British Columbia, an individual can go a step beyond that and become an "FSR", or field safety representative. This credential gives the ability to become a licensed electrical contractor and to pull permits. Notwithstanding this, some Canadian provinces only grant "permit pulling privileges" to current Master Electricians, that is, a journeyperson who has been engaged in the industry for three years and has passed the Master's examination (e.g. Alberta). The various levels of field safety representatives are A, B and C. The only difference between each class is that they are able to do increasingly higher voltage and current work. United Kingdom The two qualification awarding organisations are City and Guilds and EAL. Electrical competence is required at Level 3 to practice as a 'qualified electrician' in the UK. Once qualified and demonstrating the required level of competence, an electrician can apply to register for a Joint Industry Board Electrotechnical Certification Scheme card in order to work on building sites or other controlled areas. Although partly covered during Level 3 training, more in-depth knowledge and qualifications can be obtained covering subjects such as Design and Verification or Testing and Inspection among others. These additional qualifications can be listed on the reverse of the JIB card. Beyond this level are additional training and qualifications, such as EV charger installation, or training and working in specialist areas such as street furniture or within industry. The Electricity at Work Regulations are a statutory document that covers the use and proper maintenance of electrical equipment and installations within businesses and other organisations such as charities. Parts of the Building Regulations cover the legal requirements of the installation of electrical technical equipment, with Part P outlining most of the regulations covering dwellings. Information regarding design, selection, installation and testing of electrical installations is provided in the non-statutory publication 'Requirements for Electrical Installations, IET Wiring Regulations, Eighteenth Edition, BS 7671:2018' otherwise known as the Wiring Regulations or 'Regs'. Usual amendments are published on an ad hoc basis when minor changes occur. The first major update of the 18th Edition was published in February 2020, mainly covering the section on electric vehicle charger installations, although an addendum had been published in December 2019 correcting some minor mistakes and adding some small changes. The IET also publish a series of 'Guidance Notes' in book form that provide further in-depth knowledge. With the exception of the work covered by Part P of the Building Regulations, such as installing consumer units, new circuits or work in bathrooms, there are no laws that prevent anyone from carrying out some basic electrical work in the UK. In British English, an electrician is colloquially known as a "spark". United States The United States does not offer nationwide licensing and electrical licenses are issued by individual states. There are variations in licensing requirements; however, all states recognize three basic skill categories: apprentice-, journeyperson-, and master-level electricians. Journeyperson electricians can work unsupervised provided that they work according to a master's direction. 
Generally, states do not offer journeyperson permits, and journeyperson electricians and other apprentices can only work under permits issued to a master electrician. Apprentices may not work without direct supervision. Before electricians can work unsupervised, they are usually required to serve an apprenticeship lasting three to five years under the general supervision of a master electrician and usually the direct supervision of a journeyperson electrician. Schooling in electrical theory and electrical building codes is required to complete the apprenticeship program. Many apprenticeship programs provide a salary to the apprentice during training. A journeyperson electrician is a classification of licensing granted to those who have met the experience requirements for on the job training (usually 4,000 to 6,000 hours) and classroom hours (about 144 hours). Requirements include completion of two to six years of apprenticeship training and passing a licensing exam. Reciprocity An electrician's license is valid for work in the state where the license was issued. In addition, many states recognize licenses from other states, sometimes called interstate reciprocity participation, although there can be conditions imposed. For example, California reciprocates with Arizona, Nevada, and Utah on the condition that licenses are in good standing and have been held at the other state for five years. Nevada reciprocates with Arizona, California, and Utah. Maine reciprocates with New Hampshire and Vermont at the master level, and the state reciprocates with New Hampshire, North Dakota, Idaho, Oregon, Vermont, and Wyoming at the journeyperson level. Colorado maintains a journeyperson alliance with Alaska, Arkansas, the Dakotas, Idaho, Iowa, Minnesota, Montana, Nebraska, New Hampshire, New Mexico, Oklahoma, Utah, and Wyoming. Tools Electricians use a range of hand and power tools and instruments. Some of the more common tools are: Conduit Bender: Bender used to bend various types of Electrical Conduit. These come in many variations including hand, electrical, and hydraulic powered. Non-Contact Voltage Testers Lineperson's Pliers: Heavy-duty pliers for general use in cutting, bending, crimping and pulling wire. Diagonal Pliers (also known as side cutters or Dikes): Pliers consisting of cutting blades for use on smaller gauge wires, but sometimes also used as a gripping tool for removal of nails and staples. Needle-Nose Pliers: Pliers with a long, tapered gripping nose of various size, with or without cutters, generally smaller and for finer work (including very small tools used in electronics wiring). Wire Strippers: Plier-like tool available in many sizes and designs featuring special blades to cut and strip wire insulation while leaving the conductor wire intact and without nicks. Some wire strippers include cable strippers among their multiple functions, for removing the outer cable jacket. Cable Cutters: Highly leveraged pliers for cutting larger cable. Armored Cable Cutters: Commonly referred to by the trademark 'Roto-Split', is a tool used to cut the metal sleeve on MC (Metal Clad) cable. Multimeter: An instrument for electrical measurement with multiple functions. It is available as analog or digital display. Common features include: voltage, resistance, and current. Some models offer additional functions. 
Unibit or Step-Bit: A metal-cutting drill bit with stepped-diameter cutting edges to enable conveniently drilling holes in preset increments in stamped/rolled metal up to about 1.6 mm (1/16 inch) thick. Commonly used to create custom knock-outs in a breaker panel or junction box. Cord, Rope or Fish Tape: Used to manipulate cables and wires through cavities. The fishing tool is pushed, dropped, or shot into the installed raceway, stud-bay or joist-bay of a finished wall or in a floor or ceiling. Then the wire or cable is attached and pulled back. Crimping Tools: Used to apply terminals or splices. These may be hand or hydraulic powered. Some hand tools have ratchets to ensure proper pressure. Hydraulic units achieve cold welding, even for aluminum cable. Insulation Resistance Tester: Commonly referred to as a Megger, these testers apply several hundred to several thousand volts to cables and equipment to determine the insulation resistance value. Knockout Punch: For punching holes into boxes, panels, switchgear, etc. for inserting cable & pipe connectors. GFI/GFCI Testers: Used to test the functionality of Ground-Fault Interrupting receptacles. Voltmeter: An electrician's tool used to measure electrical potential difference between two points in an electric circuit. Other general-use tools include screwdrivers, hammers, reciprocating saws, drywall saws, flashlights, chisels, tongue and groove pliers (commonly referred to as 'Channellock' pliers, after a famous manufacturer of this tool) and drills. Safety In addition to the workplace hazards generally faced by industrial workers, electricians are also particularly exposed to injury by electricity. An electrician may experience electric shock due to direct contact with energized circuit conductors or due to stray voltage caused by faults in a system. An electric arc exposes eyes and skin to hazardous amounts of heat and light. Faulty switchgear may cause an arc flash incident with a resultant blast. Electricians are trained to work safely and take many measures to minimize the danger of injury. Lockout and tagout procedures are used to make sure that circuits are proven to be de-energized before work is done. Limits of approach to energized equipment protect against arc flash exposure; specially designed flash-resistant clothing provides additional protection; grounding (earthing) clamps and chains are used on line conductors to provide a visible assurance that a conductor is de-energized. Personal protective equipment provides electrical insulation as well as protection from mechanical impact; gloves have insulating rubber liners, and work boots and hard hats are specially rated to provide protection from shock. If a system cannot be de-energized, insulated tools are used; even high-voltage transmission lines can be repaired while energized, when necessary. Electrical workers, a category which includes electricians, accounted for 34% of total electrocutions of construction trades workers in the United States between 1992 and 2003. Working conditions Working conditions for electricians vary by specialization. Generally an electrician's work is physically demanding, involving tasks such as climbing ladders and lifting tools and supplies. Occasionally an electrician must work in a cramped space or on scaffolding, and may frequently be bending, squatting or kneeling, to make connections in awkward locations. Construction electricians may spend much of their days in outdoor or semi-outdoor loud and dirty work sites. 
Industrial electricians may be exposed to the heat, dust, and noise of an industrial plant. Power systems electricians may be called to work in all kinds of adverse weather to make emergency repairs. Trade organizations Some electricians are union members and work under their union's policies. Australia Electricians can choose to be represented by the Electrical Trade Union (ETU). Electrical Contractors can be represented by the National Electrical & Communications Association or Master Electricians Australia. North America Some electricians are union members. Some examples of electricians' unions include the International Brotherhood of Electrical Workers, Canadian Union of Public Employees, and the International Association of Machinists and Aerospace Workers. The International Brotherhood of Electrical Workers provides its own apprenticeships through its National Joint Apprenticeship and Training Committee and the National Electrical Contractors Association. Many merit shop training and apprenticeship programs also exist, including those offered by such trade associations as Associated Builders and Contractors and Independent Electrical Contractors. These organizations provide comprehensive training, in accordance with U.S. Department of Labor regulations. United Kingdom/Ireland In the United Kingdom, electricians are represented by several unions, including Unite the Union. In the Republic of Ireland there are two self-regulation/self-certification bodies: RECI (Register of Electrical Contractors of Ireland) and ECSSA. Auto electrician An auto electrician is a tradesperson specializing in electrical wiring of motor vehicles. Auto electricians may be employed in the installation of new electrical components or the maintenance and repair of existing electrical components. Auto electricians specialize in cars and commercial vehicles. The auto electrical trade is generally more difficult than the electrical trade due to the confined spaces, engineering complexity of modern automotive electrical systems, and working conditions (often roadside breakdowns, or on construction sites, mines and quarries to repair machinery, etc.). The presence of high-current DC electricity also makes burns and arc-flash injuries possible. See also Lineperson (Technician) Gaffer (Term used in film and television) International Brotherhood of Electrical Workers List of electricians, notable individuals who have worked as electricians Electronics technician References External links Occupational Outlook Handbook Electrician fault and detections issue Jeans, W. T., The Lives of Electricians: Professors Tyndall, Wheatstone, and Morse. (1887, Whittaker & Co.) Electric power Construction trades workers Electrical wiring Industrial occupations Technicians
Electrician
[ "Physics", "Engineering" ]
3,744
[ "Physical quantities", "Electrical systems", "Building engineering", "Physical systems", "Power (physics)", "Electric power", "Electrical engineering", "Electrical wiring" ]
197,129
https://en.wikipedia.org/wiki/Nanocrystalline%20silicon
Nanocrystalline silicon (nc-Si), sometimes also known as microcrystalline silicon (μc-Si), is a form of porous silicon. It is an allotropic form of silicon with a paracrystalline structure—it is similar to amorphous silicon (a-Si) in that it has an amorphous phase. Where they differ, however, is that nc-Si has small grains of crystalline silicon within the amorphous phase. This is in contrast to polycrystalline silicon (poly-Si), which consists solely of crystalline silicon grains separated by grain boundaries. The difference comes solely from the grain size of the crystalline grains. Most materials with grains in the micrometre range are actually fine-grained polysilicon, so nanocrystalline silicon is a better term. The term nanocrystalline silicon refers to a range of materials around the transition region from the amorphous to the microcrystalline phase in the silicon thin film. The crystalline volume fraction (as measured from Raman spectroscopy) is another criterion to describe the materials in this transition zone. nc-Si has many useful advantages over a-Si, one being that if grown properly it can have a higher electron mobility, due to the presence of the silicon crystallites. It also shows increased absorption in the red and infrared wavelengths, which makes it an important material for use in a-Si solar cells. One of the most important advantages of nanocrystalline silicon, however, is that it has increased stability over a-Si, one reason being its lower hydrogen concentration. Although it currently cannot attain the mobility that poly-Si can, it has the advantage over poly-Si that it is easier to fabricate, as it can be deposited using conventional low temperature a-Si deposition techniques, such as PECVD, as opposed to laser annealing or high temperature CVD processes, in the case of poly-Si. Uses The main application of this novel material is in the field of silicon thin film solar cells. As nc-Si has about the same bandgap as crystalline silicon, which is ~1.12 eV, it can be combined in thin layers with a-Si, creating a layered, multi-junction cell called a tandem cell. The top cell in a-Si absorbs the visible light and leaves the infrared part of the spectrum for the bottom cell in nanocrystalline Si. A few companies are on the verge of commercializing silicon inks based on nanocrystalline silicon or on other silicon compounds. The semiconductor industry is also investigating the potential for nanocrystalline silicon, especially in the memory area. Thin-film silicon Nanocrystalline silicon and small-grained polycrystalline silicon are considered thin-film silicon. See also Amorphous silicon Conductive ink Nanoparticle Printed electronics Protocrystalline Quantum dot References External links Thin-film silicon solar cells. Allotropes of silicon Silicon solar cells Silicon, Nanocrystalline Thin-film cells Nanomaterials
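As a rough editorial illustration of why the nc-Si bottom cell can harvest the infrared light that the a-Si top cell transmits, the Python sketch below converts a bandgap to its absorption-edge wavelength via lambda = hc/E. The ~1.12 eV figure comes from the text; the ~1.7 eV value used for a-Si is an assumed, typical literature value included only for comparison.

# Hedged sketch: bandgap energy -> absorption-edge wavelength.
H_C_EV_NM = 1239.84  # Planck constant times speed of light, in eV*nm

def absorption_edge_nm(bandgap_ev):
    """Longest wavelength (nm) whose photons still exceed the bandgap."""
    return H_C_EV_NM / bandgap_ev

if __name__ == "__main__":
    for name, gap_ev in [("nc-Si bottom cell", 1.12), ("a-Si top cell (assumed)", 1.7)]:
        print(f"{name}: {gap_ev} eV -> edge ~{absorption_edge_nm(gap_ev):.0f} nm")
    # roughly 1100 nm (infrared) for nc-Si versus roughly 730 nm for a-Si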
Nanocrystalline silicon
[ "Chemistry", "Materials_science", "Mathematics" ]
643
[ "Allotropes", "Thin-film cells", "Semiconductor materials", "Group IV semiconductors", "Allotropes of silicon", "Nanotechnology", "Planes (geometry)", "Nanomaterials", "Thin films" ]
197,767
https://en.wikipedia.org/wiki/Radioactive%20decay
Radioactive decay (also known as nuclear decay, radioactivity, radioactive disintegration, or nuclear disintegration) is the process by which an unstable atomic nucleus loses energy by radiation. A material containing unstable nuclei is considered radioactive. Three of the most common types of decay are alpha, beta, and gamma decay. The weak force is the mechanism that is responsible for beta decay, while the other two are governed by the electromagnetic and nuclear forces. Radioactive decay is a random process at the level of single atoms. According to quantum theory, it is impossible to predict when a particular atom will decay, regardless of how long the atom has existed. However, for a significant number of identical atoms, the overall decay rate can be expressed as a decay constant or as a half-life. The half-lives of radioactive atoms have a huge range: from nearly instantaneous to far longer than the age of the universe. The decaying nucleus is called the parent radionuclide (or parent radioisotope), and the process produces at least one daughter nuclide. Except for gamma decay or internal conversion from a nuclear excited state, the decay is a nuclear transmutation resulting in a daughter containing a different number of protons or neutrons (or both). When the number of protons changes, an atom of a different chemical element is created. There are 28 naturally occurring chemical elements on Earth that are radioactive, consisting of 35 radionuclides (seven elements have two different radionuclides each) that date before the time of formation of the Solar System. These 35 are known as primordial radionuclides. Well-known examples are uranium and thorium, but also included are naturally occurring long-lived radioisotopes, such as potassium-40. Each of the heavy primordial radionuclides participates in one of the four decay chains. History of discovery Henri Poincaré laid the seeds for the discovery of radioactivity through his interest in and studies of X-rays, which significantly influenced physicist Henri Becquerel. Radioactivity was discovered in 1896 by Becquerel and independently by Marie Curie, while working with phosphorescent materials. These materials glow in the dark after exposure to light, and Becquerel suspected that the glow produced in cathode-ray tubes by X-rays might be associated with phosphorescence. He wrapped a photographic plate in black paper and placed various phosphorescent salts on it. All results were negative until he used uranium salts. The uranium salts caused a blackening of the plate in spite of the plate being wrapped in black paper. These radiations were given the name "Becquerel Rays". It soon became clear that the blackening of the plate had nothing to do with phosphorescence, as the blackening was also produced by non-phosphorescent salts of uranium and by metallic uranium. It became clear from these experiments that there was a form of invisible radiation that could pass through paper and was causing the plate to react as if exposed to light. At first, it seemed as though the new radiation was similar to the then recently discovered X-rays. Further research by Becquerel, Ernest Rutherford, Paul Villard, Pierre Curie, Marie Curie, and others showed that this form of radioactivity was significantly more complicated. Rutherford was the first to realize that all such elements decay in accordance with the same mathematical exponential formula. 
Rutherford and his student Frederick Soddy were the first to realize that many decay processes resulted in the transmutation of one element to another. Subsequently, the radioactive displacement law of Fajans and Soddy was formulated to describe the products of alpha and beta decay. The early researchers also discovered that many other chemical elements, besides uranium, have radioactive isotopes. A systematic search for the total radioactivity in uranium ores also guided Pierre and Marie Curie to isolate two new elements: polonium and radium. Except for the radioactivity of radium, the chemical similarity of radium to barium made these two elements difficult to distinguish. Marie and Pierre Curie's study of radioactivity is an important factor in science and medicine. After their research on Becquerel's rays led them to the discovery of both radium and polonium, they coined the term "radioactivity" to define the emission of ionizing radiation by some heavy elements. (Later the term was generalized to all elements.) Their research on the penetrating rays in uranium and the discovery of radium launched an era of using radium for the treatment of cancer. Their exploration of radium could be seen as the first peaceful use of nuclear energy and the start of modern nuclear medicine. Early health dangers The dangers of ionizing radiation due to radioactivity and X-rays were not immediately recognized. X-rays The discovery of X‑rays by Wilhelm Röntgen in 1895 led to widespread experimentation by scientists, physicians, and inventors. Many people began recounting stories of burns, hair loss and worse in technical journals as early as 1896. In February of that year, Professor Daniel and Dr. Dudley of Vanderbilt University performed an experiment involving X-raying Dudley's head that resulted in his hair loss. A report by Dr. H.D. Hawks, of his suffering severe hand and chest burns in an X-ray demonstration, was the first of many other reports in Electrical Review. Other experimenters, including Elihu Thomson and Nikola Tesla, also reported burns. Thomson deliberately exposed a finger to an X-ray tube over a period of time and suffered pain, swelling, and blistering. Other effects, including ultraviolet rays and ozone, were sometimes blamed for the damage, and many physicians still claimed that there were no effects from X-ray exposure at all. Despite this, there were some early systematic hazard investigations, and as early as 1902 William Herbert Rollins wrote almost despairingly that his warnings about the dangers involved in the careless use of X-rays were not being heeded, either by industry or by his colleagues. By this time, Rollins had proved that X-rays could kill experimental animals, could cause a pregnant guinea pig to abort, and that they could kill a foetus. He also stressed that "animals vary in susceptibility to the external action of X-light" and warned that these differences be considered when patients were treated by means of X-rays. Radioactive substances However, the biological effects of radiation due to radioactive substances were less easy to gauge. This gave the opportunity for many physicians and corporations to market radioactive substances as patent medicines. Examples were radium enema treatments, and radium-containing waters to be drunk as tonics. Marie Curie protested against this sort of treatment, warning that "radium is dangerous in untrained hands". Curie later died from aplastic anaemia, likely caused by exposure to ionizing radiation. 
By the 1930s, after a number of cases of bone necrosis and death of radium treatment enthusiasts, radium-containing medicinal products had been largely removed from the market (radioactive quackery). Radiation protection Only a year after Röntgen's discovery of X-rays, the American engineer Wolfram Fuchs (1896) gave what is probably the first protection advice, but it was not until 1925 that the first International Congress of Radiology (ICR) was held and considered establishing international protection standards. The effects of radiation on genes, including the effect of cancer risk, were recognized much later. In 1927, Hermann Joseph Muller published research showing genetic effects and, in 1946, was awarded the Nobel Prize in Physiology or Medicine for his findings. The second ICR was held in Stockholm in 1928 and proposed the adoption of the röntgen unit, and the International X-ray and Radium Protection Committee (IXRPC) was formed. Rolf Sievert was named chairman, but a driving force was George Kaye of the British National Physical Laboratory. The committee met in 1931, 1934, and 1937. After World War II, the increased range and quantity of radioactive substances being handled as a result of military and civil nuclear programs led to large groups of occupational workers and the public being potentially exposed to harmful levels of ionising radiation. This was considered at the first post-war ICR convened in London in 1950, when the present International Commission on Radiological Protection (ICRP) was born. Since then the ICRP has developed the present international system of radiation protection, covering all aspects of radiation hazards. In 2020, Hauptmann and another 15 international researchers from eight nations (among them: Institutes of Biostatistics, Registry Research, Centers of Cancer Epidemiology, Radiation Epidemiology, and also the U.S. National Cancer Institute (NCI), International Agency for Research on Cancer (IARC) and the Radiation Effects Research Foundation of Hiroshima) studied definitively, through meta-analysis, the damage resulting from the "low doses" that have afflicted survivors of the atomic bombings of Hiroshima and Nagasaki and of numerous accidents at nuclear plants. These scientists reported, in JNCI Monographs: Epidemiological Studies of Low Dose Ionizing Radiation and Cancer Risk, that the new epidemiological studies directly support excess cancer risks from low-dose ionizing radiation. In 2021, Italian researcher Sebastiano Venturi reported the first correlations between radio-caesium and pancreatic cancer, along with the role of caesium in biology, in pancreatitis and in diabetes of pancreatic origin. Units The International System of Units (SI) unit of radioactive activity is the becquerel (Bq), named in honor of the scientist Henri Becquerel. One Bq is defined as one transformation (or decay or disintegration) per second. An older unit of radioactivity is the curie, Ci, which was originally defined as "the quantity or mass of radium emanation in equilibrium with one gram of radium (element)". Today, the curie is defined as 3.7 × 10¹⁰ disintegrations per second, so that 1 curie (Ci) = 3.7 × 10¹⁰ Bq. For radiological protection purposes, although the United States Nuclear Regulatory Commission permits the use of the unit curie alongside SI units, the European Union European units of measurement directives required that its use for "public health ... purposes" be phased out by 31 December 1985. 
The effects of ionizing radiation are often measured in units of gray for the absorbed dose or sievert for damage to tissue. Types Radioactive decay results in a reduction of summed rest mass, once the released energy (the disintegration energy) has escaped in some way. Although decay energy is sometimes defined as associated with the difference between the mass of the parent nuclide and the mass of the decay products, this is true only of rest mass measurements, where some energy has been removed from the product system. This is true because the decay energy must always carry mass with it, wherever it appears (see mass in special relativity) according to the formula E = mc². The decay energy is initially released as the energy of emitted photons plus the kinetic energy of massive emitted particles (that is, particles that have rest mass). If these particles come to thermal equilibrium with their surroundings and photons are absorbed, then the decay energy is transformed to thermal energy, which retains its mass. Decay energy, therefore, remains associated with a certain measure of the mass of the decay system, called invariant mass, which does not change during the decay, even though the energy of decay is distributed among decay particles. The energy of photons, the kinetic energy of emitted particles, and, later, the thermal energy of the surrounding matter, all contribute to the invariant mass of the system. Thus, while the sum of the rest masses of the particles is not conserved in radioactive decay, the system mass and system invariant mass (and also the system total energy) are conserved throughout any decay process. This is a restatement of the equivalent laws of conservation of energy and conservation of mass. Alpha, beta and gamma decay Early researchers found that an electric or magnetic field could split radioactive emissions into three types of beams. The rays were given the names alpha, beta, and gamma, in increasing order of their ability to penetrate matter. Alpha decay is observed only in heavier elements of atomic number 52 (tellurium) and greater, with the exception of beryllium-8 (which decays to two alpha particles). The other two types of decay are observed in all the elements. Lead, atomic number 82, is the heaviest element to have any isotopes stable (to the limit of measurement) to radioactive decay. Radioactive decay is seen in all isotopes of all elements of atomic number 83 (bismuth) or greater. Bismuth-209, however, is only very slightly radioactive, with a half-life greater than the age of the universe; radioisotopes with extremely long half-lives are considered effectively stable for practical purposes. In analyzing the nature of the decay products, it was obvious from the direction of the electromagnetic forces applied to the radiations by external magnetic and electric fields that alpha particles carried a positive charge, beta particles carried a negative charge, and gamma rays were neutral. From the magnitude of deflection, it was clear that alpha particles were much more massive than beta particles. Passing alpha particles through a very thin glass window and trapping them in a discharge tube allowed researchers to study the emission spectrum of the captured particles, and ultimately proved that alpha particles are helium nuclei. Other experiments showed that beta radiation, resulting from decay, and cathode rays were both high-speed electrons. Likewise, gamma radiation and X-rays were found to be high-energy electromagnetic radiation. 
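As a small, hedged illustration of the point that the disintegration energy follows from the mass difference via E = mc², the Python sketch below converts a mass defect in unified atomic mass units into MeV using the standard factor 1 u ≈ 931.494 MeV/c². The masses are invented, order-of-magnitude numbers, not data for any real nuclide.

# Hedged sketch: decay (disintegration) energy from a parent/products mass difference.
U_TO_MEV = 931.494  # MeV per unified atomic mass unit (with c = 1)

def decay_energy_mev(parent_mass_u, products_mass_u):
    """Energy released (MeV), shared among photons and particle kinetic energy."""
    return (parent_mass_u - products_mass_u) * U_TO_MEV

if __name__ == "__main__":
    q = decay_energy_mev(238.000, 237.995)  # hypothetical 0.005 u mass defect
    print(f"0.005 u mass defect -> Q = {q:.2f} MeV")  # a few MeV, the typical alpha-decay scale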
The relationship between the types of decays also began to be examined: for example, gamma decay was almost always found to be associated with other types of decay, and occurred at about the same time, or afterwards. Gamma decay as a separate phenomenon, with its own half-life (now termed isomeric transition), was found in natural radioactivity to be a result of the gamma decay of excited metastable nuclear isomers, which were in turn created from other types of decay. Although alpha, beta, and gamma radiations were most commonly found, other types of emission were eventually discovered. Shortly after the discovery of the positron in cosmic ray products, it was realized that the same process that operates in classical beta decay can also produce positrons (positron emission), along with neutrinos (classical beta decay produces antineutrinos). Electron capture In electron capture, some proton-rich nuclides were found to capture their own atomic electrons instead of emitting positrons, and subsequently, these nuclides emit only a neutrino and a gamma ray from the excited nucleus (and often also Auger electrons and characteristic X-rays, as a result of the re-ordering of electrons to fill the place of the missing captured electron). These types of decay involve the nuclear capture of electrons or emission of electrons or positrons, and thus act to move a nucleus toward the ratio of neutrons to protons that has the least energy for a given total number of nucleons. This consequently produces a more stable (lower energy) nucleus. A hypothetical process of positron capture, analogous to electron capture, is theoretically possible in antimatter atoms, but has not been observed, as complex antimatter atoms beyond antihelium are not experimentally available. Such a decay would require antimatter atoms at least as complex as beryllium-7, which is the lightest known isotope of normal matter to undergo decay by electron capture. Nucleon emission Shortly after the discovery of the neutron in 1932, Enrico Fermi realized that certain rare beta-decay reactions immediately yield neutrons as an additional decay particle, so-called beta-delayed neutron emission. Neutron emission usually happens from nuclei that are in an excited state, such as the excited 17O* produced from the beta decay of 17N. The neutron emission process itself is controlled by the nuclear force and therefore is extremely fast, sometimes referred to as "nearly instantaneous". Isolated proton emission was eventually observed in some elements. It was also found that some heavy elements may undergo spontaneous fission into products that vary in composition. In a phenomenon called cluster decay, specific combinations of neutrons and protons other than alpha particles (helium nuclei) were found to be spontaneously emitted from atoms. More exotic types of decay Other types of radioactive decay were found to emit previously seen particles but via different mechanisms. An example is internal conversion, which results in an initial electron emission, and then often further characteristic X-ray and Auger electron emissions, although the internal conversion process involves neither beta nor gamma decay. A neutrino is not emitted, and none of the electron(s) and photon(s) emitted originate in the nucleus, even though the energy to emit all of them does originate there. 
Internal conversion decay, like isomeric transition gamma decay and neutron emission, involves the release of energy by an excited nuclide, without the transmutation of one element into another. Rare events that involve a combination of two beta-decay-type events happening simultaneously are known (see below). Any decay process that does not violate the conservation of energy or momentum laws (and perhaps other particle conservation laws) is permitted to happen, although not all have been detected. An interesting example, discussed in a final section, is bound-state beta decay of rhenium-187. In this process, the beta electron-decay of the parent nuclide is not accompanied by beta electron emission, because the beta particle has been captured into the K-shell of the emitting atom. An antineutrino is emitted, as in all negative beta decays. If energy circumstances are favorable, a given radionuclide may undergo many competing types of decay, with some atoms decaying by one route, and others decaying by another. An example is copper-64, which has 29 protons and 35 neutrons, and which decays with a half-life of about 12.7 hours. This isotope has one unpaired proton and one unpaired neutron, so either the proton or the neutron can decay to the other particle, which has opposite isospin. This particular nuclide (though not all nuclides in this situation) is more likely to decay through beta plus decay than through electron capture. The excited energy states resulting from those decays which fail to end in a ground energy state also produce later internal conversion and gamma decay almost 0.5% of the time. List of decay modes Decay chains and multiple modes The daughter nuclide of a decay event may also be unstable (radioactive). In this case, it too will decay, producing radiation. The resulting second daughter nuclide may also be radioactive. This can lead to a sequence of several decay events called a decay chain (see this article for specific details of important natural decay chains). Eventually, a stable nuclide is produced. Any decay daughters that are the result of an alpha decay will also result in helium atoms being created. Some radionuclides may have several different paths of decay. For example, about 36% of bismuth-212 decays, through alpha-emission, to thallium-208 while about 64% of bismuth-212 decays, through beta-emission, to polonium-212. Both thallium-208 and polonium-212 are radioactive daughter products of bismuth-212, and both decay directly to stable lead-208. Occurrence and applications According to the Big Bang theory, stable isotopes of the lightest three elements (H, He, and traces of Li) were produced very shortly after the emergence of the universe, in a process called Big Bang nucleosynthesis. These lightest stable nuclides (including deuterium) survive to today, but any radioactive isotopes of the light elements produced in the Big Bang (such as tritium) have long since decayed. Isotopes of elements heavier than boron were not produced at all in the Big Bang, and these first five elements do not have any long-lived radioisotopes. All radioactive nuclei are, therefore, relatively young with respect to the birth of the universe, having formed later in various other types of nucleosynthesis in stars (in particular, supernovae), and also during ongoing interactions between stable isotopes and energetic particles. 
For example, carbon-14, a radioactive nuclide with a half-life of only 5,730 years, is constantly produced in Earth's upper atmosphere due to interactions between cosmic rays and nitrogen. Nuclides that are produced by radioactive decay are called radiogenic nuclides, whether they themselves are stable or not. There exist stable radiogenic nuclides that were formed from short-lived extinct radionuclides in the early Solar System. The extra presence of these stable radiogenic nuclides (such as xenon-129 from extinct iodine-129) against the background of primordial stable nuclides can be inferred by various means. Radioactive decay has been put to use in the technique of radioisotopic labeling, which is used to track the passage of a chemical substance through a complex system (such as a living organism). A sample of the substance is synthesized with a high concentration of unstable atoms. The presence of the substance in one or another part of the system is determined by detecting the locations of decay events. On the premise that radioactive decay is truly random (rather than merely chaotic), it has been used in hardware random-number generators. Because the process is not thought to vary significantly in mechanism over time, it is also a valuable tool in estimating the absolute ages of certain materials. For geological materials, the radioisotopes and some of their decay products become trapped when a rock solidifies, and can then later be used (subject to many well-known qualifications) to estimate the date of the solidification. These include checking the results of several simultaneous processes and their products against each other, within the same sample. In a similar fashion, and also subject to qualification, given the rate of formation of carbon-14 in various eras, the date of formation of organic matter within a certain period related to the isotope's half-life may be estimated, because the carbon-14 becomes trapped when the organic matter grows and incorporates the new carbon-14 from the air. Thereafter, the amount of carbon-14 in organic matter decreases according to decay processes that may also be independently cross-checked by other means (such as checking the carbon-14 in individual tree rings, for example). Szilard–Chalmers effect The Szilard–Chalmers effect is the breaking of a chemical bond as a result of kinetic energy imparted by radioactive decay. It operates by the absorption of neutrons by an atom and subsequent emission of gamma rays, often with significant amounts of kinetic energy. This kinetic energy, by Newton's third law, pushes back on the decaying atom, which causes it to move with enough speed to break a chemical bond. This effect can be used to separate isotopes by chemical means. The Szilard–Chalmers effect was discovered in 1934 by Leó Szilárd and Thomas A. Chalmers. They observed that after bombardment by neutrons, the breaking of a bond in liquid ethyl iodide allowed radioactive iodine to be removed. Origins of radioactive nuclides Radioactive primordial nuclides found in the Earth are residues from ancient supernova explosions that occurred before the formation of the Solar System. They are the fraction of radionuclides that survived from that time, through the formation of the primordial solar nebula, through planet accretion, and up to the present time. The naturally occurring short-lived radiogenic radionuclides found in today's rocks are the daughters of those radioactive primordial nuclides. 
Another minor source of naturally occurring radioactive nuclides is cosmogenic nuclides, which are formed by cosmic ray bombardment of material in the Earth's atmosphere or crust. The decay of the radionuclides in rocks of the Earth's mantle and crust contributes significantly to Earth's internal heat budget. Aggregate processes While the underlying process of radioactive decay is subatomic, historically and in most practical cases it is encountered in bulk materials with very large numbers of atoms. This section discusses models that connect events at the atomic level to observations in aggregate. Terminology The decay rate, or activity, of a radioactive substance is characterized by the following time-independent parameters: The half-life, t_1/2, is the time taken for the activity of a given amount of a radioactive substance to decay to half of its initial value. The decay constant, λ ("lambda"), the reciprocal of the mean lifetime (in s⁻¹), sometimes referred to as simply the decay rate. The mean lifetime, τ ("tau"), the average lifetime (1/e life) of a radioactive particle before decay. Although these are constants, they are associated with the statistical behavior of populations of atoms. In consequence, predictions using these constants are less accurate for minuscule samples of atoms. In principle a half-life, a third-life, or even a (1/√2)-life, could be used in exactly the same way as half-life; but the mean life and half-life have been adopted as standard times associated with exponential decay. Those parameters can be related to the following time-dependent parameters: Total activity (or just activity), A, is the number of decays per unit time of a radioactive sample. Number of particles, N, in the sample. Specific activity, S_A, is the number of decays per unit time per amount of substance of the sample at time set to zero (t = 0). "Amount of substance" can be the mass, volume or moles of the initial sample. These are related as follows: A = −dN/dt = λN, and the specific activity is the activity at t = 0 divided by the initial amount of the sample, where N_0 is the initial amount of active substance — substance that has the same percentage of unstable particles as when the substance was formed. Assumptions The mathematics of radioactive decay depend on a key assumption that a nucleus of a radionuclide has no "memory" or way of translating its history into its present behavior. A nucleus does not "age" with the passage of time. Thus, the probability of its breaking down does not increase with time but stays constant, no matter how long the nucleus has existed. This constant probability may differ greatly between one type of nucleus and another, leading to the many different observed decay rates. However, whatever the probability is, it does not change over time. This is in marked contrast to complex objects that do show aging, such as automobiles and humans. These aging systems do have a chance of breakdown per unit of time that increases from the moment they begin their existence. Aggregate processes, like the radioactive decay of a lump of atoms, for which the single-event probability of realization is very small but in which the number of time-slices is so large that there is nevertheless a reasonable rate of events, are modelled by the Poisson distribution, which is discrete. Radioactive decay and nuclear particle reactions are two examples of such aggregate processes. The mathematics of Poisson processes reduce to the law of exponential decay, which describes the statistical behaviour of a large number of nuclei, rather than one individual nucleus. 
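A minimal, hedged Python sketch of the time-independent parameters just defined: decay constant λ = ln(2)/t_1/2, mean lifetime τ = 1/λ, and activity A = λN, using a made-up nuclide (5-day half-life, 10²⁰ atoms) purely for illustration.

import math

# Hedged sketch: relations among half-life, decay constant, mean lifetime, activity.
def decay_constant(half_life_s):
    return math.log(2) / half_life_s

def mean_lifetime(half_life_s):
    return 1.0 / decay_constant(half_life_s)

def activity_bq(n_atoms, half_life_s):
    """Expected decays per second (becquerels) from n_atoms at this instant."""
    return decay_constant(half_life_s) * n_atoms

if __name__ == "__main__":
    half_life = 5 * 24 * 3600   # hypothetical 5-day half-life, in seconds
    n_atoms = 1e20              # hypothetical sample size
    print(f"lambda = {decay_constant(half_life):.3e} s^-1")
    print(f"tau    = {mean_lifetime(half_life) / 86400:.2f} days")
    print(f"A      = {activity_bq(n_atoms, half_life):.3e} Bq")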
In the following formalism, the number of nuclei or the nuclei population N, is of course a discrete variable (a natural number)—but for any physical sample N is so large that it can be treated as a continuous variable. Differential calculus is used to model the behaviour of nuclear decay. One-decay process Consider the case of a nuclide A that decays into another B by some process A → B (emission of other particles, like electron neutrinos and electrons e− as in beta decay, is irrelevant in what follows). The decay of an unstable nucleus is entirely random in time so it is impossible to predict when a particular atom will decay. However, it is equally likely to decay at any instant in time. Therefore, given a sample of a particular radioisotope, the number of decay events −dN expected to occur in a small interval of time dt is proportional to the number of atoms present N, that is −dN/dt ∝ N. Particular radionuclides decay at different rates, so each has its own decay constant λ. The expected decay −dN/N is proportional to an increment of time, dt: −dN/N = λ dt. The negative sign indicates that N decreases as time increases, as the decay events follow one after another. The solution to this first-order differential equation is the function N(t) = N_0 e^(−λt), where N_0 is the value of N at time t = 0, with the decay constant expressed as λ. We have for all time t: N_A + N_B = N_total = N_A0, where N_total is the constant number of particles throughout the decay process, which is equal to the initial number of A nuclides since A is the initial substance. If the number of non-decayed A nuclei is N_A = N_A0 e^(−λt), then the number of nuclei of B (i.e. the number of decayed A nuclei) is N_B = N_A0 − N_A = N_A0 (1 − e^(−λt)). The number of decays observed over a given interval obeys Poisson statistics. If the average number of decays is ⟨N⟩, the probability of a given number N of decays is P(N) = ⟨N⟩^N e^(−⟨N⟩)/N!. Chain-decay processes Chain of two decays Now consider the case of a chain of two decays: one nuclide A decaying into another B by one process, then B decaying into another C by a second process, i.e. A → B → C. The previous equation cannot be applied to the decay chain, but can be generalized as follows. Since A decays into B, then B decays into C, the activity of A adds to the total number of B nuclides in the present sample, before those B nuclides decay and reduce the number of nuclides leading to the later sample. In other words, the number of second generation nuclei B increases as a result of the first generation nuclei decay of A, and decreases as a result of its own decay into the third generation nuclei C. The sum of these two terms gives the law for a decay chain for two nuclides: dN_B/dt = λ_A N_A − λ_B N_B. The rate of change of N_B, that is dN_B/dt, is related to the changes in the amounts of A and B; N_B can increase as B is produced from A and decrease as B produces C. Re-writing using the previous results: dN_B/dt = λ_A N_A0 e^(−λ_A t) − λ_B N_B. The subscripts simply refer to the respective nuclides, i.e. N_A is the number of nuclides of type A; N_A0 is the initial number of nuclides of type A; λ_A is the decay constant for A – and similarly for nuclide B. Solving this equation for N_B gives N_B = (λ_A/(λ_B − λ_A)) N_A0 (e^(−λ_A t) − e^(−λ_B t)). In the case where B is a stable nuclide (λ_B = 0), this equation reduces to the previous solution N_B = N_A0 (1 − e^(−λ_A t)), as shown above for one decay. The solution can be found by the integration factor method, where the integrating factor is e^(λ_B t). This case is perhaps the most useful since it can derive both the one-decay equation (above) and the equation for multi-decay chains (below) more directly. Chain of any number of decays For the general case of any number of consecutive decays in a decay chain, i.e. A_1 → A_2 → ... → A_D, where D is the number of decays and i is a dummy index (i = 1, 2, 3, ..., D), each nuclide population can be found in terms of the previous population. In this case the initial conditions are taken to be N_1(0) = N_1,0 and N_2(0) = N_3(0) = ... = N_D(0) = 0, with only the parent species present at the start. 
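A hedged Python sketch of the one-decay law and the two-member chain A → B → C reconstructed above; the decay constants and initial population are arbitrary illustrative values, not taken from the article.

import math

# Hedged sketch: N_A(t) = N_A0 e^(-lambda_A t) and the two-member chain solution
# with N_B(0) = 0; when lambda_B = 0 it reduces to N_A0 (1 - e^(-lambda_A t)).
def n_parent(t, n_a0, lam_a):
    return n_a0 * math.exp(-lam_a * t)

def n_daughter(t, n_a0, lam_a, lam_b):
    if lam_b == 0:  # daughter stable
        return n_a0 * (1.0 - math.exp(-lam_a * t))
    return (lam_a / (lam_b - lam_a)) * n_a0 * (math.exp(-lam_a * t) - math.exp(-lam_b * t))

if __name__ == "__main__":
    n_a0, lam_a, lam_b = 1e6, 0.10, 0.03   # hypothetical values, in units of 1/hour
    for t in (0, 5, 10, 20, 40):
        print(f"t = {t:3d} h   N_A = {n_parent(t, n_a0, lam_a):10.0f}   "
              f"N_B = {n_daughter(t, n_a0, lam_a, lam_b):10.0f}")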
Using the above result in a recursive form: dN_i/dt = −λ_i N_i + λ_(i−1) N_(i−1). The general solution to the recursive problem is given by Bateman's equations. Multiple products In all of the above examples, the initial nuclide decays into just one product. Consider the case of one initial nuclide that can decay into either of two products, that is A → B and A → C in parallel. For example, in a sample of potassium-40, 89.3% of the nuclei decay to calcium-40 and 10.7% to argon-40. We have for all time t: N_A + N_B + N_C = N_total, which is constant, since the total number of nuclides remains constant. Differentiating with respect to time: dN_A/dt = −(λ_B + λ_C) N_A, defining the total decay constant λ in terms of the sum of the partial decay constants λ_B and λ_C: λ = λ_B + λ_C. Solving this equation for N_A: N_A = N_A0 e^(−λt), where N_A0 is the initial number of nuclide A. When measuring the production of one nuclide, one can only observe the total decay constant λ. The decay constants λ_B and λ_C determine the probability for the decay to result in products B or C as follows: N_B = (λ_B/λ) N_A0 (1 − e^(−λt)) and N_C = (λ_C/λ) N_A0 (1 − e^(−λt)), because the fraction λ_B/λ of nuclei decay into B while the fraction λ_C/λ of nuclei decay into C. Corollaries of laws The above equations can also be written using quantities related to the number of nuclide particles N in a sample: The activity: A = λN. The amount of substance: n = N/L. The mass: m = Mn = MN/L, where L = 6.022 × 10²³ mol⁻¹ is the Avogadro constant, M is the molar mass of the substance in kg/mol, and the amount of the substance n is in moles. Decay timing: definitions and relations Time constant and mean-life For the one-decay solution N = N_0 e^(−λt): the equation indicates that the decay constant λ has units of 1/time, and can thus also be represented as 1/τ, where τ is a characteristic time of the process called the time constant. In a radioactive decay process, this time constant is also the mean lifetime for decaying atoms. Each atom "lives" for a finite amount of time before it decays, and it may be shown that this mean lifetime is the arithmetic mean of all the atoms' lifetimes, and that it is τ, which again is related to the decay constant as follows: τ = 1/λ. This form is also true for two simultaneous decay processes A → B and A → C; inserting the equivalent values of decay constants (as given above) into the decay solution leads to 1/τ = λ = λ_B + λ_C. Half-life A more commonly used parameter is the half-life t_1/2. Given a sample of a particular radionuclide, the half-life is the time taken for half the radionuclide's atoms to decay. For the case of one-decay nuclear reactions, the half-life is related to the decay constant as follows: set N = N_0/2 and t = t_1/2 to obtain t_1/2 = ln(2)/λ = τ ln(2). This relationship between the half-life and the decay constant shows that highly radioactive substances are quickly spent, while those that radiate weakly endure longer. Half-lives of known radionuclides vary by almost 54 orders of magnitude, from more than 2.2 × 10²⁴ years (about 7 × 10³¹ sec) for the very nearly stable nuclide 128Te, to roughly 10⁻²³ seconds for the highly unstable nuclide 5H. The factor of ln(2) in the above relations results from the fact that the concept of "half-life" is merely a way of selecting a different base other than the natural base e for the lifetime expression. The time constant τ is the 1/e-life, the time until only 1/e remains, about 36.8%, rather than the 50% in the half-life of a radionuclide. Thus, τ is longer than t_1/2. The following equation can be shown to be valid: N(t) = N_0 e^(−t/τ) = N_0 2^(−t/t_1/2). Since radioactive decay is exponential with a constant probability, each process could as easily be described with a different constant time period that (for example) gave its "(1/3)-life" (how long until only 1/3 is left) or "(1/10)-life" (a time period until only 10% is left), and so on. Thus, the choice of τ and t_1/2 for marker-times is only for convenience, and from convention. 
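To illustrate the multiple-products case just described, the hedged Python sketch below turns branching fractions into partial decay constants, λ_B = (fraction to B) × λ. The 89.3%/10.7% branching of potassium-40 comes from the text; the ~1.25 × 10⁹-year total half-life is an assumed literature value, used only to give the partial constants a scale.

import math

# Hedged sketch: partial decay constants from a total half-life and branching ratios.
TOTAL_HALF_LIFE_YR = 1.25e9                             # assumed value for potassium-40
BRANCHES = {"calcium-40": 0.893, "argon-40": 0.107}     # branching fractions from the text

lam_total = math.log(2) / TOTAL_HALF_LIFE_YR
for product, fraction in BRANCHES.items():
    lam_partial = fraction * lam_total
    print(f"{product}: partial lambda = {lam_partial:.3e} /yr, "
          f"partial half-life ~ {math.log(2) / lam_partial:.2e} yr")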
They reflect a fundamental principle only in so much as they show that the same proportion of a given radioactive substance will decay, during any time-period that one chooses. Mathematically, the life for the above situation would be found in the same way as aboveby setting , and substituting into the decay solution to obtain Example for carbon-14 Carbon-14 has a half-life of years and a decay rate of 14 disintegrations per minute (dpm) per gram of natural carbon. If an artifact is found to have radioactivity of 4 dpm per gram of its present C, we can find the approximate age of the object using the above equation: where: Changing rates The radioactive decay modes of electron capture and internal conversion are known to be slightly sensitive to chemical and environmental effects that change the electronic structure of the atom, which in turn affects the presence of 1s and 2s electrons that participate in the decay process. A small number of nuclides are affected. For example, chemical bonds can affect the rate of electron capture to a small degree (in general, less than 1%) depending on the proximity of electrons to the nucleus. In 7Be, a difference of 0.9% has been observed between half-lives in metallic and insulating environments. This relatively large effect is because beryllium is a small atom whose valence electrons are in 2s atomic orbitals, which are subject to electron capture in 7Be because (like all s atomic orbitals in all atoms) they naturally penetrate into the nucleus. In 1992, Jung et al. of the Darmstadt Heavy-Ion Research group observed an accelerated β− decay of 163Dy66+. Although neutral 163Dy is a stable isotope, the fully ionized 163Dy66+ undergoes β− decay into the K and L shells to 163Ho66+ with a half-life of 47 days. Rhenium-187 is another spectacular example. 187Re normally undergoes beta decay to 187Os with a half-life of 41.6 × 109 years, but studies using fully ionised 187Re atoms (bare nuclei) have found that this can decrease to only 32.9 years. This is attributed to "bound-state β− decay" of the fully ionised atom – the electron is emitted into the "K-shell" (1s atomic orbital), which cannot occur for neutral atoms in which all low-lying bound states are occupied. A number of experiments have found that decay rates of other modes of artificial and naturally occurring radioisotopes are, to a high degree of precision, unaffected by external conditions such as temperature, pressure, the chemical environment, and electric, magnetic, or gravitational fields. Comparison of laboratory experiments over the last century, studies of the Oklo natural nuclear reactor (which exemplified the effects of thermal neutrons on nuclear decay), and astrophysical observations of the luminosity decays of distant supernovae (which occurred far away so the light has taken a great deal of time to reach us), for example, strongly indicate that unperturbed decay rates have been constant (at least to within the limitations of small experimental errors) as a function of time as well. Recent results suggest the possibility that decay rates might have a weak dependence on environmental factors. It has been suggested that measurements of decay rates of silicon-32, manganese-54, and radium-226 exhibit small seasonal variations (of the order of 0.1%). 
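Returning to the carbon-14 dating example earlier in this section: the half-life value is not reproduced in this copy of the text, so the commonly quoted figure of roughly 5,730 years is assumed below purely for illustration, together with the activities given in the text (14 dpm per gram of natural carbon for living material, 4 dpm per gram for the artifact). Since activity is proportional to N, the age follows from t = ln(A_0/A)/λ.

```python
import math

t_half = 5730.0          # years; commonly quoted C-14 half-life, assumed here
lam = math.log(2) / t_half

A0 = 14.0                # dpm per gram of natural carbon in living material (from the text)
A = 4.0                  # dpm per gram measured in the artifact (from the text)

# A/A0 = exp(-lam*t)  =>  t = ln(A0/A) / lam
age = math.log(A0 / A) / lam
print(f"estimated age: {age:,.0f} years")   # roughly ten thousand years
```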
However, such measurements are highly susceptible to systematic errors, and a subsequent paper has found no evidence for such correlations in seven other isotopes (22Na, 44Ti, 108Ag, 121Sn, 133Ba, 241Am, 238Pu), and sets upper limits on the size of any such effects. The decay of radon-222 was once reported to exhibit large 4% peak-to-peak seasonal variations (see plot), which were proposed to be related to either solar flare activity or the distance from the Sun, but detailed analysis of the experiment's design flaws, along with comparisons to other, much more stringent and systematically controlled, experiments refutes this claim. GSI anomaly An unexpected series of experimental results for the rate of decay of heavy, highly charged radioactive ions circulating in a storage ring has provoked theoretical activity in an effort to find a convincing explanation. The rates of weak decay of two radioactive species with half-lives of about 40 s and 200 s are found to have a significant oscillatory modulation, with a period of about 7 s. The observed phenomenon is known as the GSI anomaly, as the storage ring is a facility at the GSI Helmholtz Centre for Heavy Ion Research in Darmstadt, Germany. As the decay process produces an electron neutrino, some of the proposed explanations for the observed rate oscillation invoke neutrino properties. Initial ideas related to flavour oscillation met with skepticism. A more recent proposal involves mass differences between neutrino mass eigenstates. Nuclear processes A nuclide is considered to "exist" if it has a half-life greater than 2×10−14 s. This is an arbitrary boundary; shorter half-lives are considered resonances, such as a system undergoing a nuclear reaction. This time scale is characteristic of the strong interaction, which creates the nuclear force. Only nuclides are considered to decay and produce radioactivity. Nuclides can be stable or unstable. Unstable nuclides decay, possibly in several steps, until they become stable. There are 251 known stable nuclides. The number of unstable nuclides discovered has grown, with about 3000 known in 2006. The most common, and consequently historically the most important, forms of natural radioactive decay involve the emission of alpha particles, beta particles, and gamma rays. Each of these corresponds to a fundamental interaction predominantly responsible for the radioactivity: alpha decay -> strong interaction, beta decay -> weak interaction, gamma decay -> electromagnetism. In alpha decay, a particle containing two protons and two neutrons, equivalent to a helium-4 nucleus, breaks out of the parent nucleus. The process represents a competition between the electromagnetic repulsion between the protons in the nucleus and the attractive nuclear force, a residual of the strong interaction. The alpha particle is an especially strongly bound nucleus, helping it win the competition more often. However, some nuclei break up or fission into larger particles, and artificial nuclei decay with the emission of single protons, double protons, and other combinations. Beta decay transforms a neutron into a proton or vice versa. When a neutron inside a parent nuclide decays to a proton, an electron and an antineutrino are emitted, and a nuclide with a higher atomic number results. When a proton in a parent nuclide transforms into a neutron, a positron and a neutrino are emitted, and a nuclide with a lower atomic number results. These changes are a direct manifestation of the weak interaction. 
Gamma decay resembles other kinds of electromagnetic emission: it corresponds to transitions between an excited quantum state and a lower-energy state. Any of the particle decay mechanisms can leave the daughter nuclide in an excited state, which then decays via gamma emission. Other forms of decay include neutron emission, electron capture, internal conversion, and cluster decay. Hazard warning signs See also Actinides in the environment Background radiation Chernobyl disaster Crimes involving radioactive substances Decay chain Decay correction Fallout shelter Geiger counter Induced radioactivity Lists of nuclear disasters and radioactive incidents National Council on Radiation Protection and Measurements Nuclear engineering Nuclear pharmacy Nuclear physics Nuclear power Nuclear chain reaction Particle decay Poisson process Radiation therapy Radioactive contamination Radioactivity in biology Radiometric dating Stochastic Transient equilibrium Notes References External links The Lund/LBNL Nuclear Data Search – Contains tabulated information on radioactive decay types and energies. Nomenclature of nuclear chemistry Specific activity and related topics. The Live Chart of Nuclides – IAEA Interactive Chart of Nuclides Health Physics Society Public Education Website Annotated bibliography for radioactivity from the Alsos Digital Library for Nuclear Issues Stochastic Java applet on the decay of radioactive atoms by Wolfgang Bauer Stochastic Flash simulation on the decay of radioactive atoms by David M. Harrison "Henri Becquerel: The Discovery of Radioactivity", Becquerel's 1896 articles online and analyzed on BibNum [click 'à télécharger' for English version]. "Radioactive change", Rutherford & Soddy article (1903), online and analyzed on Bibnum [click 'à télécharger' for English version] Exponentials Poisson point processes
Radioactive decay
[ "Physics", "Chemistry", "Mathematics" ]
8,959
[ "Point (geometry)", "E (mathematical constant)", "Point processes", "Exponentials", "Nuclear physics", "Radioactivity", "Poisson point processes" ]
197,845
https://en.wikipedia.org/wiki/Nuclear%20reprocessing
Nuclear reprocessing is the chemical separation of fission products and actinides from spent nuclear fuel. Originally, reprocessing was used solely to extract plutonium for producing nuclear weapons. With commercialization of nuclear power, the reprocessed plutonium was recycled back into MOX nuclear fuel for thermal reactors. The reprocessed uranium, also known as the spent fuel material, can in principle also be re-used as fuel, but that is only economical when uranium supply is low and prices are high. Nuclear reprocessing may extend beyond fuel and include the reprocessing of other nuclear reactor material, such as Zircaloy cladding. The high radioactivity of spent nuclear material means that reprocessing must be highly controlled and carefully executed in advanced facilities by specialized personnel. Numerous processes exist, with the chemical based PUREX process dominating. Alternatives include heating to drive off volatile elements, burning via oxidation, and fluoride volatility (which uses extremely reactive Fluorine). Each process results in some form of refined nuclear product, with radioactive waste as a byproduct. Because this could allow for weapons grade nuclear material, nuclear reprocessing is a concern for nuclear proliferation and is thus tightly regulated. Relatively high cost is associated with spent fuel reprocessing compared to the once-through fuel cycle, but fuel use can be increased and waste volumes decreased. Nuclear fuel reprocessing is performed routinely in Europe, Russia, and Japan. In the United States, the Obama administration stepped back from President Bush's plans for commercial-scale reprocessing and reverted to a program focused on reprocessing-related scientific research. Not all nuclear fuel requires reprocessing; a breeder reactor is not restricted to using recycled plutonium and uranium. It can employ all the actinides, closing the nuclear fuel cycle and potentially multiplying the energy extracted from natural uranium by about 60 times. Separated components and disposition The potentially useful components dealt with in nuclear reprocessing comprise specific actinides (plutonium, uranium, and some minor actinides). The lighter elements components include fission products, activation products, and cladding. History The first large-scale nuclear reactors were built during World War II. These reactors were designed for the production of plutonium for use in nuclear weapons. The only reprocessing required, therefore, was the extraction of the plutonium (free of fission-product contamination) from the spent natural uranium fuel. In 1943, several methods were proposed for separating the relatively small quantity of plutonium from the uranium and fission products. The first method selected, a precipitation process called the bismuth phosphate process, was developed and tested at the Oak Ridge National Laboratory (ORNL) between 1943 and 1945 to produce quantities of plutonium for evaluation and use in the US weapons programs. ORNL produced the first macroscopic quantities (grams) of separated plutonium with these processes. The bismuth phosphate process was first operated on a large scale at the Hanford Site, in the later part of 1944. It was successful for plutonium separation in the emergency situation existing then, but it had a significant weakness: the inability to recover uranium. The first successful solvent extraction process for the recovery of pure uranium and plutonium was developed at ORNL in 1949. 
The PUREX process is the current method of extraction. Separation plants were also constructed at Savannah River Site and a smaller plant at West Valley Reprocessing Plant which closed by 1972 because of its inability to meet new regulatory requirements. Reprocessing of civilian fuel has long been employed at the COGEMA La Hague site in France, the Sellafield site in the United Kingdom, the Mayak Chemical Combine in Russia, and at sites such as the Tokai plant in Japan, the Tarapur plant in India, and briefly at the West Valley Reprocessing Plant in the United States. In October 1976, concern of nuclear weapons proliferation (especially after India demonstrated nuclear weapons capabilities using reprocessing technology) led President Gerald Ford to issue a Presidential directive to indefinitely suspend the commercial reprocessing and recycling of plutonium in the U.S. On 7 April 1977, President Jimmy Carter banned the reprocessing of commercial reactor spent nuclear fuel. The key issue driving this policy was the risk of nuclear weapons proliferation by diversion of plutonium from the civilian fuel cycle, and to encourage other nations to follow the US lead. After that, only countries that already had large investments in reprocessing infrastructure continued to reprocess spent nuclear fuel. President Reagan lifted the ban in 1981, but did not provide the substantial subsidy that would have been necessary to start up commercial reprocessing. In March 1999, the U.S. Department of Energy (DOE) reversed its policy and signed a contract with a consortium of Duke Energy, COGEMA, and Stone & Webster (DCS) to design and operate a mixed oxide (MOX) fuel fabrication facility. Site preparation at the Savannah River Site (South Carolina) began in October 2005. In 2011 the New York Times reported "...11 years after the government awarded a construction contract, the cost of the project has soared to nearly $5 billion. The vast concrete and steel structure is a half-finished hulk, and the government has yet to find a single customer, despite offers of lucrative subsidies." TVA (currently the most likely customer) said in April 2011 that it would delay a decision until it could see how MOX fuel performed in the nuclear accident at Fukushima Daiichi. Separation technologies Water and organic solvents PUREX PUREX, the current standard method, is an acronym standing for Plutonium and Uranium Recovery by EXtraction. The PUREX process is a liquid-liquid extraction method used to reprocess spent nuclear fuel, to extract uranium and plutonium, independent of each other, from the fission products. This is the most developed and widely used process in the industry at present. When used on fuel from commercial power reactors the plutonium extracted typically contains too much Pu-240 to be considered "weapons-grade" plutonium, ideal for use in a nuclear weapon. Nevertheless, highly reliable nuclear weapons can be built at all levels of technical sophistication using reactor-grade plutonium. Moreover, reactors that are capable of refueling frequently can be used to produce weapon-grade plutonium, which can later be recovered using PUREX. Because of this, PUREX chemicals are monitored. 
Modifications of PUREX UREX The PUREX process can be modified to make a UREX (URanium EXtraction) process which could be used to save space inside high level nuclear waste disposal sites, such as the Yucca Mountain nuclear waste repository, by removing the uranium which makes up the vast majority of the mass and volume of used fuel and recycling it as reprocessed uranium. The UREX process is a PUREX process which has been modified to prevent the plutonium from being extracted. This can be done by adding a plutonium reductant before the first metal extraction step. In the UREX process, ~99.9% of the uranium and >95% of technetium are separated from each other and the other fission products and actinides. The key is the addition of acetohydroxamic acid (AHA) to the extraction and scrub sections of the process. The addition of AHA greatly diminishes the extractability of plutonium and neptunium, providing somewhat greater proliferation resistance than with the plutonium extraction stage of the PUREX process. TRUEX Adding a second extraction agent, octyl(phenyl)-N, N-dibutyl carbamoylmethyl phosphine oxide (CMPO) in combination with tributylphosphate, (TBP), the PUREX process can be turned into the TRUEX (TRansUranic EXtraction) process. TRUEX was invented in the US by Argonne National Laboratory and is designed to remove the transuranic metals (Am/Cm) from waste. The idea is that by lowering the alpha activity of the waste, the majority of the waste can then be disposed of with greater ease. In common with PUREX this process operates by a solvation mechanism. DIAMEX As an alternative to TRUEX, an extraction process using a malondiamide has been devised. The DIAMEX (DIAMide EXtraction) process has the advantage of avoiding the formation of organic waste which contains elements other than carbon, hydrogen, nitrogen, and oxygen. Such an organic waste can be burned without the formation of acidic gases which could contribute to acid rain (although the acidic gases could be recovered by a scrubber). The DIAMEX process is being worked on in Europe by the French CEA. The process is sufficiently mature that an industrial plant could be constructed with the existing knowledge of the process. In common with PUREX this process operates by a solvation mechanism. SANEX Selective ActiNide EXtraction. As part of the management of minor actinides it has been proposed that the lanthanides and trivalent minor actinides should be removed from the PUREX raffinate by a process such as DIAMEX or TRUEX. To allow the actinides such as americium to be either reused in industrial sources or used as fuel, the lanthanides must be removed. The lanthanides have large neutron cross sections and hence they would poison a neutron driven nuclear reaction. To date the extraction system for the SANEX process has not been defined, but currently several different research groups are working towards a process. For instance the French CEA is working on a bis-triazinyl pyridine (BTP) based process. Other systems such as the dithiophosphinic acids are being worked on by some other workers. UNEX The UNiversal EXtraction process was developed in Russia and the Czech Republic; it is designed to completely remove the most troublesome radioisotopes (Sr, Cs and minor actinides) from the raffinate remaining after the extraction of uranium and plutonium from used nuclear fuel. The chemistry is based upon the interaction of caesium and strontium with polyethylene glycol and a cobalt carborane anion (known as chlorinated cobalt dicarbollide). 
The actinides are extracted by CMPO, and the diluent is a polar aromatic such as nitrobenzene. Other diluents such as meta-nitrobenzotrifluoride and phenyl trifluoromethyl sulfone have been suggested as well. Electrochemical and ion exchange methods An exotic method using electrochemistry and ion exchange in ammonium carbonate has been reported. Other methods for the extraction of uranium using ion exchange in alkaline carbonate and "fumed" lead oxide have also been reported. Obsolete methods Bismuth phosphate The bismuth phosphate process is an obsolete process that adds significant unnecessary material to the final radioactive waste. The bismuth phosphate process has been replaced by solvent extraction processes. The bismuth phosphate process was designed to extract plutonium from aluminium-clad nuclear fuel rods, containing uranium. The fuel was decladded by boiling it in caustic soda. After decladding, the uranium metal was dissolved in nitric acid. The plutonium at this point is in the +4 oxidation state. It was then precipitated out of the solution by the addition of bismuth nitrate and phosphoric acid to form the bismuth phosphate. The plutonium was coprecipitated with this. The supernatant liquid (containing many of the fission products) was separated from the solid. The precipitate was then dissolved in nitric acid before the addition of an oxidant (such as potassium permanganate) to produce PuO22+. The plutonium was maintained in the +6 oxidation state by addition of a dichromate salt. The bismuth phosphate was next re-precipitated, leaving the plutonium in solution, and an iron(II) salt (such as ferrous sulfate) was added. The plutonium was again re-precipitated using a bismuth phosphate carrier and a combination of lanthanum salts and fluoride added, forming a solid lanthanum fluoride carrier for the plutonium. Addition of an alkali produced an oxide. The combined lanthanum plutonium oxide was collected and extracted with nitric acid to form plutonium nitrate. Hexone or REDOX This is a liquid-liquid extraction process which uses methyl isobutyl ketone codenamed hexone as the extractant. The extraction is by a solvation mechanism. This process has the disadvantage of requiring the use of a salting-out reagent (aluminium nitrate) to increase the nitrate concentration in the aqueous phase to obtain a reasonable distribution ratio (D value). Also, hexone is degraded by concentrated nitric acid. This process was used in 1952-1956 on the Hanford plant T and has been replaced by the PUREX process. Butex, β,β'-dibutyoxydiethyl ether A process based on a solvation extraction process using the triether extractant named above. This process has the disadvantage of requiring the use of a salting-out reagent (aluminium nitrate) to increase the nitrate concentration in the aqueous phase to obtain a reasonable distribution ratio. This process was used at Windscale in 1951-1964. This process has been replaced by PUREX, which was shown to be a superior technology for larger scale reprocessing. Sodium acetate The sodium uranyl acetate process was used by the early Soviet nuclear industry to recover plutonium from irradiated fuel. It was never used in the West; the idea is to dissolve the fuel in nitric acid, alter the oxidation state of the plutonium, and then add acetic acid and base. This would convert the uranium and plutonium into a solid acetate salt. Explosion of the crystallized acetates-nitrates in a non-cooled waste tank caused the Kyshtym disaster in 1957. 
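The solvent-extraction processes described above (PUREX and its variants, hexone, butex) all hinge on the distribution ratio D mentioned for the hexone process. As a rough generic illustration, not a model of any particular flowsheet, the fraction of a solute left in the aqueous phase after n ideal batch contacts with fresh organic solvent is 1/(1 + D·R)^n, where R is the organic-to-aqueous volume ratio. The numbers in the sketch below are placeholders only.

```python
def fraction_remaining(D, phase_ratio, n_stages):
    """Fraction of solute left in the aqueous phase after n ideal batch contacts.

    D           -- distribution ratio (organic/aqueous concentration at equilibrium)
    phase_ratio -- organic-to-aqueous volume ratio used in each contact
    n_stages    -- number of successive contacts with fresh solvent
    """
    return 1.0 / (1.0 + D * phase_ratio) ** n_stages

# Placeholder numbers only: a solute with D = 20, contacted at equal phase volumes.
for n in (1, 2, 3):
    print(f"{n} stage(s): fraction left in aqueous phase = "
          f"{fraction_remaining(D=20.0, phase_ratio=1.0, n_stages=n):.2e}")
```

The rapid fall-off with the number of stages is why even modest distribution ratios give near-quantitative recovery in multi-stage contactors.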
Alternatives to PUREX As there are some downsides to the PUREX process, there have been efforts to develop alternatives to the process, some of them compatible with PUREX (i.e. the residue from one process could be used as feedstock for the other) and others wholly incompatible. None of these have (as of the 2020s) reached widespread commercial use, but some have seen large scale tests or firm commitments towards their future larger scale implementation. Pyroprocessing Pyroprocessing is a generic term for high-temperature methods. Solvents are molten salts (e.g. LiCl + KCl or LiF + CaF2) and molten metals (e.g. cadmium, bismuth, magnesium) rather than water and organic compounds. Electrorefining, distillation, and solvent-solvent extraction are common steps. These processes are not currently in significant use worldwide, but they have been pioneered at Argonne National Laboratory with current research also taking place at CRIEPI in Japan, the Nuclear Research Institute of Řež in Czech Republic, Indira Gandhi Centre for Atomic Research in India and KAERI in South Korea. Advantages of pyroprocessing The principles behind it are well understood, and no significant technical barriers exist to their adoption. Readily applied to high-burnup spent fuel and requires little cooling time, since the operating temperatures are high already. Does not use solvents containing hydrogen and carbon, which are neutron moderators creating risk of criticality accidents and can absorb the fission product tritium and the activation product carbon-14 in dilute solutions that cannot be separated later. Alternatively, voloxidation (see below) can remove 99% of the tritium from used fuel and recover it in the form of a strong solution suitable for use as a supply of tritium. More compact than aqueous methods, allowing on-site reprocessing at the reactor site, which avoids transportation of spent fuel and its security issues, instead storing a much smaller volume of fission products on site as high-level waste until decommissioning. For example, the Integral Fast Reactor and Molten Salt Reactor fuel cycles are based on on-site pyroprocessing. It can separate many or even all actinides at once and produce highly radioactive fuel which is harder to manipulate for theft or making nuclear weapons. (However, the difficulty has been questioned.) In contrast the PUREX process was designed to separate plutonium only for weapons, and it also leaves the minor actinides (americium and curium) behind, producing waste with more long-lived radioactivity. Most of the radioactivity in roughly 102 to 105 years after the use of the nuclear fuel is produced by the actinides, since there are no fission products with half-lives in this range. These actinides can fuel fast reactors, so extracting and reusing (fissioning) them increases energy production per kg of fuel, as well as reducing the long-term radioactivity of the wastes. Fluoride volatility (see below) produces salts that can readily be used in molten salt reprocessing such as pyroprocessing The ability to process "fresh" spent fuel reduces the needs for spent fuel pools (even if the recovered short lived radionuclides are "only" sent to storage, that still requires less space as the bulk of the mass, uranium, can be stored separately from them). Uranium – even higher specific activity reprocessed uranium – does not need cooling for safe storage. 
Short lived radionuclides can be recovered from "fresh" spent fuel allowing either their direct use in industry science or medicine or the recovery of their decay products without contamination by other isotopes (for example: ruthenium in spent fuel decays to rhodium all isotopes of which other than further decay to stable isotopes of palladium. Palladium derived from the decay of fission ruthenium and rhodium will be nonradioactive, but fission Palladium contains significant contamination with long-lived . Ruthenium-107 and rhodium-107 both have half lives on the order of minutes and decay to palladium-107 before reprocessing under most circumstances) Possible fuels for radioisotope thermoelectric generators (RTGs) that are mostly decayed in spent fuel, that has significantly aged, can be recovered in sufficient quantities to make their use worthwhile. Examples include materials with half lives around two years such as , , . While those would perhaps not be suitable for lengthy space missions, they can be used to replace diesel generators in off-grid locations where refueling is possible once a year. Antimony would be particularly interesting because it forms a stable alloy with lead and can thus be transformed relatively easily into a partially self-shielding and chemically inert form. Shorter lived RTG fuels present the further benefit of reducing the risk of orphan sources as the activity will decline relatively quickly if no refueling is undertaken. Disadvantages of pyroprocessing Reprocessing as a whole is not currently (2005) in favor, and places that do reprocess already have PUREX plants constructed. Consequently, there is little demand for new pyrometallurgical systems, although there could be if the Generation IV reactor programs become reality. The used salt from pyroprocessing is less suitable for conversion into glass than the waste materials produced by the PUREX process. If the goal is to reduce the longevity of spent nuclear fuel in burner reactors, then better recovery rates of the minor actinides need to be achieved. Working with "fresh" spent fuel requires more shielding and better ways to deal with heat production than working with "aged" spent fuel does. If the facilities are built in such a way as to require high specific activity material, they cannot handle older "legacy waste" except blended with fresh spent fuel Electrolysis The electrolysis methods are based on the difference in the standard potentials of uranium, plutonium and minor actinides in a molten salt. The standard potential of uranium is the lowest, therefore when a potential is applied, the uranium will be reduced at the cathode out of the molten salt solution before the other elements. PYRO-A and -B for IFR These processes were developed by Argonne National Laboratory and used in the Integral Fast Reactor project. PYRO-A is a means of separating actinides (elements within the actinide family, generally heavier than U-235) from non-actinides. The spent fuel is placed in an anode basket which is immersed in a molten salt electrolyte. An electric current is applied, causing the uranium metal (or sometimes oxide, depending on the spent fuel) to plate out on a solid metal cathode while the other actinides (and the rare earths) can be absorbed into a liquid cadmium cathode. Many of the fission products (such as caesium, zirconium and strontium) remain in the salt. As alternatives to the molten cadmium electrode it is possible to use a molten bismuth cathode, or a solid aluminium cathode. 
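To put a number on the "refuelling once a year" remark in the last point above: the thermal output of a radioisotope source falls as 2^(−t/t_half), so a nuclide with a roughly two-year half-life, the class of candidate RTG fuels mentioned in the text, still delivers about 70% of its initial power after one year. A minimal sketch, with only the two-year figure taken from the text:

```python
def power_fraction(t_years, t_half_years):
    """Fraction of initial thermal power remaining after t_years of decay."""
    return 0.5 ** (t_years / t_half_years)

t_half = 2.0   # years; the approximate half-life class discussed in the text
for t in (0.5, 1.0, 2.0):
    print(f"after {t:>3} y: {power_fraction(t, t_half):.2f} of initial power")
```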
As an alternative to electrowinning, the wanted metal can be isolated by using a molten alloy of an electropositive metal and a less reactive metal. Since the majority of the long term radioactivity, and volume, of spent fuel comes from actinides, removing the actinides produces waste that is more compact, and not nearly as dangerous over the long term. The radioactivity of this waste will then drop to the level of various naturally occurring minerals and ores within a few hundred, rather than thousands of, years. The mixed actinides produced by pyrometallic processing can be used again as nuclear fuel, as they are virtually all either fissile, or fertile, though many of these materials would require a fast breeder reactor to be burned efficiently. In a thermal neutron spectrum, the concentrations of several heavy actinides (curium-242 and plutonium-240) can become quite high, creating fuel that is substantially different from the usual uranium or mixed uranium-plutonium oxides (MOX) that most current reactors were designed to use. Another pyrochemical process, the PYRO-B process, has been developed for the processing and recycling of fuel from a transmuter reactor ( a fast breeder reactor designed to convert transuranic nuclear waste into fission products ). A typical transmuter fuel is free from uranium and contains recovered transuranics in an inert matrix such as metallic zirconium. In the PYRO-B processing of such fuel, an electrorefining step is used to separate the residual transuranic elements from the fission products and recycle the transuranics to the reactor for fissioning. Newly generated technetium and iodine are extracted for incorporation into transmutation targets, and the other fission products are sent to waste. Voloxidation Voloxidation (for volumetric oxidation) involves heating oxide fuel with oxygen, sometimes with alternating oxidation and reduction, or alternating oxidation by ozone to uranium trioxide with decomposition by heating back to triuranium octoxide. A major purpose is to capture tritium as tritiated water vapor before further processing where it would be difficult to retain the tritium. Tritium is a difficult contaminant to remove from aqueous solution, as it cannot be separated from water except by isotope separation. However, tritium is also a valuable product used in industry science and nuclear weapons, so recovery of a stream of hydrogen or water with a high tritium content can make targeted recovery economically worthwhile. Other volatile elements leave the fuel and must be recovered, especially iodine, technetium, and carbon-14. Voloxidation also breaks up the fuel or increases its surface area to enhance penetration of reagents in following reprocessing steps. Advantages The process is simple and requires no complex machinery or chemicals above and beyond that required in all reprocessing (hot cells, remote handling equipment) Products such as krypton-85 or tritium, as well as xenon (whose isotope are either stable, very nearly stable, or quickly decay), can be recovered and sold for use in industry, science or medicine Driving off volatile fission products allows for safer storage in interim storage or deep geological repository Nuclear proliferation risks are low as no separation of plutonium occurs Radioactive material is not chemically mobilized beyond what should be accounted for in long-term storage anyway. 
Substances that are inert as native elements or oxides remain so The product can be used as fuel in a CANDU reactor or even downblended with similarly treated spent CANDU fuel if too much fissile material is left in the spent fuel. The resulting product can be further processed by any of the other processes mentioned above and below. Removal of volatile fission products means that transportation becomes slightly easier compared to spent fuel with damaged or removed cladding All volatile products of concern (while helium will be present in the spent fuel, there won't be any radioactive isotopes of helium) can in principle be recovered in a cold trap cooled by liquid nitrogen (temperature: or lower). However, this requires significant amounts of cooling to counteract the effect of decay heat from radioactive volatiles like krypton-85. Tritium will be present in the form of tritiated water, which is a solid at the temperature of liquid nitrogen. Technetium heptoxide can be removed as a gas by heating above its boiling point of which reduces the issues presented by Technetium contamination in processes like fluoride volatility or PUREX; ruthenium tetroxide (gaseous above ) can likewise be removed from the spent fuel and recovered for sale or disposal Disadvantages Further processing is needed if the resulting product is to be used for re-enrichment or fabrication of MOX-fuel If volatile fission products escape to the environment this presents a radiation hazard, mostly due to , Tritium and . Their safe recovery and storage requires further equipment. An oxidizing agent / reducing agent has to be used for reduction/oxidation steps whose recovery can be difficult, energy consuming or both Volatilization in isolation Simply heating spent oxide fuel in an inert atmosphere or vacuum at a temperature between and as a first reprocessing step can remove several volatile elements, including caesium whose isotope caesium-137 emits about half of the heat produced by the spent fuel over the following 100 years of cooling (however, most of the other half is from strontium-90, which has a similar half-life). The estimated overall mass balance for 20,000 g of processed fuel with 2,000 g of cladding is: Advantages Requires no chemical processes at all Can in theory be done "self heating" via the decay heat of sufficiently "fresh" spent fuel Caesium-137 has uses in food irradiation and can be used to power radioisotope thermoelectric generators. However, its contamination with stable and long lived reduces efficiency of such uses while contamination with in relatively fresh spent fuel makes the curve of overall radiation and heat output much steeper until most of the has decayed Can potentially recover elements like ruthenium whose ruthenate ion is particularly troublesome in PUREX and which has no isotopes significantly longer lived than a year, allowing possible recovery of the metal for use A "third phase recovery" can be added to the process if substances that melt but don't vaporize at the temperatures involved are drained to a container for liquid effluents and allowed to re-solidify. To avoid contamination with low-boiling products which melt at low temperatures, a melt plug could be used to open the container for liquid effluents only once a certain temperature is reached by the liquid phase. Strontium, which is present in the form of the particularly troublesome mid-lived fission product is liquid above . 
However, strontium oxide remains solid below and if strontium oxide is to be recovered with other liquid effluents, it has to be reduced to the native metal before the heating step. Both strontium metal and strontium oxide react with water to form soluble strontium hydroxide (the metal also releasing hydrogen), which can be used to separate them from non-soluble parts of the spent fuel. As there is little to no chemical change in the spent fuel, any chemical reprocessing method can be used following this process. Disadvantages At temperatures above the native metal form of several actinides, including neptunium (melting point: ) and plutonium (melting point: ), are molten. This could be used to recover a liquid phase, raising proliferation concerns, given that uranium metal remains a solid until . While neptunium and plutonium cannot be easily separated from each other by different melting points, their differing solubility in water can be used to separate them. If "nuclear self heating" is employed, the spent fuel will have much higher specific activity, heat production and radiation release. If an external heat source is used, significant amounts of external power are needed, which mostly go to heat the uranium. Heating and cooling the vacuum chamber and/or the piping and vessels to collect volatile effluents induces thermal stress. This combines with radiation damage to materials and possibly neutron embrittlement if neutron sources such as californium-252 are present to a significant extent. In the commonly used oxide fuel, some elements will be present both as oxides and as native elements. Depending on their chemical state, they may end up in either the volatilized stream or in the residue stream. If an element is present in both states to a significant degree, separation of that element may be impossible without converting it all to one chemical state or the other. The temperatures involved are much higher than the melting point of lead (), which can present issues with radiation shielding if lead is employed as a shielding material. If filters are used to recover volatile fission products, those become low- to intermediate-level waste. Fluoride volatility In the fluoride volatility process, fluorine is reacted with the fuel. Fluorine is so much more reactive than even oxygen that small particles of ground oxide fuel will burst into flame when dropped into a chamber full of fluorine. This is known as flame fluorination; the heat produced helps the reaction proceed. Most of the uranium, which makes up the bulk of the fuel, is converted to uranium hexafluoride, the form of uranium used in uranium enrichment, which has a very low boiling point. Technetium, the main long-lived fission product, is also efficiently converted to its volatile hexafluoride. A few other elements also form similarly volatile hexafluorides, pentafluorides, or heptafluorides. The volatile fluorides can be separated from excess fluorine by condensation, then separated from each other by fractional distillation or selective reduction. Uranium hexafluoride and technetium hexafluoride have very similar boiling points and vapor pressures, which makes complete separation more difficult. Many of the fission products volatilized are the same ones volatilized in non-fluorinated, higher-temperature volatilization, such as iodine, tellurium and molybdenum; notable differences are that technetium is volatilized, but caesium is not. 
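One practical reason caesium (and strontium) removal matters, as noted in the volatilization sections above, is decay heat: caesium-137 is said to account for roughly half of the heat output of spent fuel over the following century, with strontium-90 of similar half-life contributing most of the rest. The sketch below shows how those contributions die away; the half-life values of about 30.2 years for 137Cs and 28.8 years for 90Sr are commonly quoted figures assumed here, not taken from the text.

```python
import math

def remaining(t_years, t_half):
    """Fraction of a nuclide (and hence of its decay heat) remaining after t_years."""
    return math.exp(-math.log(2) * t_years / t_half)

t_half_cs137 = 30.2   # years, commonly quoted value (assumed)
t_half_sr90 = 28.8    # years, commonly quoted value (assumed)

for t in (10, 30, 100):
    cs = remaining(t, t_half_cs137)
    sr = remaining(t, t_half_sr90)
    print(f"t = {t:>3} y: Cs-137 fraction left {cs:.2f}, Sr-90 fraction left {sr:.2f}")
```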
Some transuranium elements such as plutonium, neptunium and americium can form volatile fluorides, but these compounds are not stable when the fluorine partial pressure is decreased. Most of the plutonium and some of the uranium will initially remain in ash which drops to the bottom of the flame fluorinator. The plutonium-uranium ratio in the ash may even approximate the composition needed for fast neutron reactor fuel. Further fluorination of the ash can remove all the uranium, neptunium, and plutonium as volatile fluorides; however, some other minor actinides may not form volatile fluorides and instead remain with the alkaline fission products. Some noble metals may not form fluorides at all, but remain in metallic form; however ruthenium hexafluoride is relatively stable and volatile. Distillation of the residue at higher temperatures can separate lower-boiling transition metal fluorides and alkali metal (Cs, Rb) fluorides from higher-boiling lanthanide and alkaline earth metal (Sr, Ba) and yttrium fluorides. The temperatures involved are much higher, but can be lowered somewhat by distilling in a vacuum. If a carrier salt like lithium fluoride or sodium fluoride is being used as a solvent, high-temperature distillation is a way to separate the carrier salt for reuse. Molten salt reactor designs carry out fluoride volatility reprocessing continuously or at frequent intervals. The goal is to return actinides to the molten fuel mixture for eventual fission, while removing fission products that are neutron poisons, or that can be more securely stored outside the reactor core while awaiting eventual transfer to permanent storage. Chloride volatility and solubility Many of the elements that form volatile high-valence fluorides will also form volatile high-valence chlorides. Chlorination and distillation is another possible method for separation. The sequence of separation may differ usefully from the sequence for fluorides; for example, zirconium tetrachloride and tin tetrachloride have relatively low boiling points of and . Chlorination has even been proposed as a method for removing zirconium fuel cladding, instead of mechanical decladding. Chlorides are likely to be easier than fluorides to later convert back to other compounds, such as oxides. Chlorides remaining after volatilization may also be separated by solubility in water. Chlorides of alkaline elements like americium, curium, lanthanides, strontium, caesium are more soluble than those of uranium, neptunium, plutonium, and zirconium. Advantages of halogen volatility Chlorine (and to a lesser extent fluorine) is a readily available industrial chemical that is produced in mass quantity Fractional distillation allows many elements to be separated from each other in a single step or iterative repetition of the same step Uranium will be produced directly as Uranium hexafluoride, the form used in enrichment Many volatile fluorides and chlorides are volatile at relatively moderate temperatures reducing thermal stress. This is especially important as the boiling point of uranium hexafluoride is below that of water, allowing to conserve energy in the separation of high boiling fission products (or their fluorides) from one another as this can take place in the absence of uranium, which makes up the bulk of the mass Some fluorides and chlorides melt at relatively low temperatures allowing a "liquid phase separation" if desired. Those low melting salts could be further processed by molten salt electrolysis. 
Fluorides and chlorides differ in water solubility depending on the cation. This can be used to separate them by aqueous solution. However, some fluorides react violently with water, which has to be taken into account. Disadvantages of halogen volatility Many compounds of fluorine or chlorine, as well as the native elements themselves, are toxic and corrosive and react violently with air, water or both. Uranium hexafluoride and technetium hexafluoride have very similar boiling points ( and respectively), making it hard to completely separate them from one another by distillation. Fractional distillation as used in petroleum refining requires large facilities and huge amounts of energy. Processing thousands of tons of uranium would require smaller facilities than processing billions of tons of petroleum; however, unlike petroleum refineries, the entire process would have to take place inside radiation shielding, and provisions would have to be made to prevent leaks of volatile, poisonous and radioactive fluorides. Plutonium hexafluoride boils at , which means that any facility capable of separating uranium hexafluoride from technetium hexafluoride is also capable of separating plutonium hexafluoride from either, raising proliferation concerns. The presence of alpha emitters induces some (α,n) reactions in fluorine, producing both radioactive by-products and neutrons. This effect can be reduced by separating alpha emitters and fluorine as fast as feasible. Interactions between chlorine's two stable isotopes (35Cl and 37Cl) on the one hand and alpha particles on the other are of lesser concern, as they do not have as high a cross section and do not produce neutrons or long-lived radionuclides. If carbon is present in the spent fuel, it will form halogenated hydrocarbons, which are extremely potent greenhouse gases and hard to chemically decompose. Some of those are toxic as well. Radioanalytical separations To determine the distribution of radioactive metals for analytical purposes, solvent impregnated resins (SIRs) can be used. SIRs are porous particles which contain an extractant inside their pores. This approach avoids the liquid-liquid separation step required in conventional liquid-liquid extraction. For the preparation of SIRs for radioanalytical separations, organic Amberlite XAD-4 or XAD-7 can be used. Possible extractants are, e.g., trihexyltetradecylphosphonium chloride (CYPHOS IL-101) or N,N′-dialkyl-N,N′-diphenylpyridine-2,6-dicarboxyamides (R-PDA; R = butyl, octyl, decyl, dodecyl). Economics The relative economics of reprocessing-waste disposal versus interim storage-direct disposal was the focus of much debate over the first decade of the 2000s. Studies have modeled the total fuel cycle costs of a reprocessing-recycling system based on one-time recycling of plutonium in existing thermal reactors (as opposed to the proposed breeder reactor cycle) and compared this to the total costs of an open fuel cycle with direct disposal. The range of results produced by these studies is very wide, but all agreed that under then-current economic conditions the reprocessing-recycle option is the more costly one. While the uranium market - particularly its short-term fluctuations - has only a minor impact on the cost of electricity from nuclear power, long-term trends in the uranium market do significantly affect the economics of nuclear reprocessing. If uranium prices were to rise and remain consistently high, "stretching the fuel supply" via MOX fuel, breeder reactors or even the thorium fuel cycle could become more attractive. 
However, if uranium prices remain low, reprocessing will remain less attractive. If reprocessing is undertaken only to reduce the radioactivity level of spent fuel it should be taken into account that spent nuclear fuel becomes less radioactive over time. After 40 years its radioactivity drops by 99.9%, though it still takes over a thousand years for the level of radioactivity to approach that of natural uranium. However the level of transuranic elements, including plutonium-239, remains high for over 100,000 years, so if not reused as nuclear fuel, then those elements need secure disposal because of nuclear proliferation reasons as well as radiation hazard. On 25 October 2011 a commission of the Japanese Atomic Energy Commission revealed during a meeting calculations about the costs of recycling nuclear fuel for power generation. These costs could be twice the costs of direct geological disposal of spent fuel: the cost of extracting plutonium and handling spent fuel was estimated at 1.98 to 2.14 yen per kilowatt-hour of electricity generated. Discarding the spent fuel as waste would cost only 1 to 1.35 yen per kilowatt-hour. In July 2004 Japanese newspapers reported that the Japanese Government had estimated the costs of disposing radioactive waste, contradicting claims four months earlier that no such estimates had been made. The cost of non-reprocessing options was estimated to be between a quarter and a third ($5.5–7.9 billion) of the cost of reprocessing ($24.7 billion). At the end of the year 2011 it became clear that Masaya Yasui, who had been director of the Nuclear Power Policy Planning Division in 2004, had instructed his subordinate in April 2004 to conceal the data. The fact that the data were deliberately concealed obliged the ministry to re-investigate the case and to reconsider whether to punish the officials involved. List of sites See also Nuclear fuel cycle Breeder reactor Nuclear fusion-fission hybrid Spent nuclear fuel shipping cask Taylor Wilson's nuclear waste-fired small reactor Global Nuclear Energy Partnership announced February 2006 References Notes Further reading OECD Nuclear Energy Agency, The Economics of the Nuclear Fuel Cycle, Paris, 1994 I. Hensing and W Schultz, Economic Comparison of Nuclear Fuel Cycle Options, Energiewirtschaftlichen Instituts, Cologne, 1995. Cogema, Reprocessing-Recycling: the Industrial Stakes, presentation to the Konrad-Adenauer-Stiftung, Bonn, 9 May 1995. OECD Nuclear Energy Agency, Plutonium Fuel: An Assessment, Paris, 1989. National Research Council, "Nuclear Wastes: Technologies for Separation and Transmutation", National Academy Press, Washington D.C. 1996. External links Processing of Used Nuclear Fuel, World Nuclear Association PUREX Process, European Nuclear Society Mixed Oxide Fuel (MOX) – World Nuclear Association Disposal Options for Surplus Weapons-Usable Plutonium – Congressional Research Service Report for Congress Brief History of Fuel Reprocessing Annotated bibliography for reprocessing spent nuclear fuel from the Alsos Digital Library for Nuclear Issues Radioactive waste
Nuclear reprocessing
[ "Chemistry", "Technology" ]
8,729
[ "Environmental impact of nuclear power", "Radioactive waste", "Hazardous waste", "Radioactivity" ]
198,204
https://en.wikipedia.org/wiki/Tetragonal%20crystal%20system
In crystallography, the tetragonal crystal system is one of the 7 crystal systems. Tetragonal crystal lattices result from stretching a cubic lattice along one of its lattice vectors, so that the cube becomes a rectangular prism with a square base (a by a) and height (c, which is different from a). Bravais lattices There are two tetragonal Bravais lattices: the primitive tetragonal and the body-centered tetragonal. The body-centered tetragonal lattice is equivalent to the primitive tetragonal lattice with a smaller unit cell, while the face-centered tetragonal lattice is equivalent to the body-centered tetragonal lattice with a smaller unit cell. Crystal classes The point groups that fall under this crystal system are listed below, followed by their representations in international notation, Schoenflies notation, orbifold notation, Coxeter notation and mineral examples. In two dimensions There is only one tetragonal Bravais lattice in two dimensions: the square lattice. See also Bravais lattices Crystal system Crystal structure Point groups References External links Crystal systems System
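As a quick numerical illustration of the stretching just described (a cubic cell elongated along one lattice vector into an a × a × c cell), the sketch below builds one common choice of primitive vectors for the primitive and body-centred tetragonal lattices and compares cell volumes; the lattice parameters are arbitrary example values.

```python
import numpy as np

a, c = 1.0, 1.5   # example lattice parameters with c != a (tetragonal)

# Primitive tetragonal cell: three orthogonal vectors, square base a x a, height c.
primitive = np.array([[a, 0, 0],
                      [0, a, 0],
                      [0, 0, c]], dtype=float)

# Body-centred tetragonal: one common choice of primitive vectors for the centred lattice.
body_centred = np.array([[-a/2,  a/2,  c/2],
                         [ a/2, -a/2,  c/2],
                         [ a/2,  a/2, -c/2]], dtype=float)

vol_P = abs(np.linalg.det(primitive))        # = a*a*c
vol_I = abs(np.linalg.det(body_centred))     # = a*a*c/2, half the conventional cell
print(f"primitive cell volume: {vol_P:.3f}")
print(f"body-centred primitive cell volume: {vol_I:.3f}")
```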
Tetragonal crystal system
[ "Chemistry", "Materials_science" ]
231
[ "Crystallography", "Crystal systems" ]
198,319
https://en.wikipedia.org/wiki/Hamiltonian%20mechanics
In physics, Hamiltonian mechanics is a reformulation of Lagrangian mechanics that emerged in 1833. Introduced by Sir William Rowan Hamilton, Hamiltonian mechanics replaces (generalized) velocities used in Lagrangian mechanics with (generalized) momenta. Both theories provide interpretations of classical mechanics and describe the same physical phenomena. Hamiltonian mechanics has a close relationship with geometry (notably, symplectic geometry and Poisson structures) and serves as a link between classical and quantum mechanics. Overview Phase space coordinates (p, q) and Hamiltonian H Let be a mechanical system with configuration space and smooth Lagrangian Select a standard coordinate system on The quantities are called momenta. (Also generalized momenta, conjugate momenta, and canonical momenta). For a time instant the Legendre transformation of is defined as the map which is assumed to have a smooth inverse For a system with degrees of freedom, the Lagrangian mechanics defines the energy function The Legendre transform of turns into a function known as the . The Hamiltonian satisfies which implies that where the velocities are found from the (-dimensional) equation which, by assumption, is uniquely solvable for . The (-dimensional) pair is called phase space coordinates. (Also canonical coordinates). From Euler–Lagrange equation to Hamilton's equations In phase space coordinates , the (-dimensional) Euler–Lagrange equation becomes Hamilton's equations in dimensions From stationary action principle to Hamilton's equations Let be the set of smooth paths for which and The action functional is defined via where , and (see above). A path is a stationary point of (and hence is an equation of motion) if and only if the path in phase space coordinates obeys the Hamilton's equations. Basic physical interpretation A simple interpretation of Hamiltonian mechanics comes from its application on a one-dimensional system consisting of one nonrelativistic particle of mass . The value of the Hamiltonian is the total energy of the system, in this case the sum of kinetic and potential energy, traditionally denoted and , respectively. Here is the momentum and is the space coordinate. Then is a function of alone, while is a function of alone (i.e., and are scleronomic). In this example, the time derivative of is the velocity, and so the first Hamilton equation means that the particle's velocity equals the derivative of its kinetic energy with respect to its momentum. The time derivative of the momentum equals the Newtonian force, and so the second Hamilton equation means that the force equals the negative gradient of potential energy. Example A spherical pendulum consists of a mass m moving without friction on the surface of a sphere. The only forces acting on the mass are the reaction from the sphere and gravity. Spherical coordinates are used to describe the position of the mass in terms of , where is fixed, . The Lagrangian for this system is Thus the Hamiltonian is where and In terms of coordinates and momenta, the Hamiltonian reads Hamilton's equations give the time evolution of coordinates and conjugate momenta in four first-order differential equations, Momentum , which corresponds to the vertical component of angular momentum , is a constant of motion. That is a consequence of the rotational symmetry of the system around the vertical axis. Being absent from the Hamiltonian, azimuth is a cyclic coordinate, which implies conservation of its conjugate momentum. 
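The one-dimensional interpretation given above (velocity = ∂H/∂p, force = −∂H/∂q) can be made concrete with a short numerical integration of Hamilton's equations for a particle in a quadratic potential. The mass, spring constant and step size below are arbitrary, and the semi-implicit (symplectic) Euler update is just one convenient choice of integrator, used here because it keeps the energy error bounded.

```python
m, k = 1.0, 1.0          # illustrative mass and spring constant
dt, steps = 1.0e-3, 10_000

def dH_dp(p):            # velocity, from kinetic energy T = p**2 / (2*m)
    return p / m

def dH_dq(q):            # minus the force, from potential energy V = k*q**2 / 2
    return k * q

q, p = 1.0, 0.0          # initial conditions
E0 = p**2 / (2*m) + 0.5 * k * q**2
for _ in range(steps):
    p -= dt * dH_dq(q)   # dp/dt = -dH/dq
    q += dt * dH_dp(p)   # dq/dt = +dH/dp  (semi-implicit: uses the updated p)
E = p**2 / (2*m) + 0.5 * k * q**2
print(f"relative energy drift after {steps} steps: {abs(E - E0) / E0:.2e}")
```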
Deriving Hamilton's equations Hamilton's equations can be derived by a calculation with the Lagrangian , generalized positions , and generalized velocities , where . Here we work off-shell, meaning , , are independent coordinates in phase space, not constrained to follow any equations of motion (in particular, is not a derivative of ). The total differential of the Lagrangian is: The generalized momentum coordinates were defined as , so we may rewrite the equation as: After rearranging, one obtains: The term in parentheses on the left-hand side is just the Hamiltonian defined previously, therefore: One may also calculate the total differential of the Hamiltonian with respect to coordinates , , instead of , , , yielding: One may now equate these two expressions for , one in terms of , the other in terms of : Since these calculations are off-shell, one can equate the respective coefficients of , , on the two sides: On-shell, one substitutes parametric functions which define a trajectory in phase space with velocities , obeying Lagrange's equations: Rearranging and writing in terms of the on-shell gives: Thus Lagrange's equations are equivalent to Hamilton's equations: In the case of time-independent and , i.e. , Hamilton's equations consist of first-order differential equations, while Lagrange's equations consist of second-order equations. Hamilton's equations usually do not reduce the difficulty of finding explicit solutions, but important theoretical results can be derived from them, because coordinates and momenta are independent variables with nearly symmetric roles. Hamilton's equations have another advantage over Lagrange's equations: if a system has a symmetry, so that some coordinate does not occur in the Hamiltonian (i.e. a cyclic coordinate), the corresponding momentum coordinate is conserved along each trajectory, and that coordinate can be reduced to a constant in the other equations of the set. This effectively reduces the problem from coordinates to coordinates: this is the basis of symplectic reduction in geometry. In the Lagrangian framework, the conservation of momentum also follows immediately, however all the generalized velocities still occur in the Lagrangian, and a system of equations in coordinates still has to be solved. The Lagrangian and Hamiltonian approaches provide the groundwork for deeper results in classical mechanics, and suggest analogous formulations in quantum mechanics: the path integral formulation and the Schrödinger equation. Properties of the Hamiltonian The value of the Hamiltonian is the total energy of the system if and only if the energy function has the same property. (See definition of ). when , form a solution of Hamilton's equations. Indeed, and everything but the final term cancels out. does not change under point transformations, i.e. smooth changes of space coordinates. (Follows from the invariance of the energy function under point transformations. The invariance of can be established directly). (See ). . (Compare Hamilton's and Euler-Lagrange equations or see ). if and only if .A coordinate for which the last equation holds is called cyclic (or ignorable). Every cyclic coordinate reduces the number of degrees of freedom by , causes the corresponding momentum to be conserved, and makes Hamilton's equations easier to solve. Hamiltonian as the total system energy In its application to a given system, the Hamiltonian is often taken to be where is the kinetic energy and is the potential energy. 
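The Legendre-transform step in the derivation above can be checked directly for the simplest case L(q, q̇) = ½ m q̇² − V(q): the conjugate momentum is p = ∂L/∂q̇ = m q̇, and H = p q̇ − L evaluates to p²/2m + V(q), i.e. T + V. A small sketch under exactly those assumptions (the potential chosen is arbitrary):

```python
m = 2.0

def V(q):                       # arbitrary example potential
    return q**4 - q**2

def lagrangian(q, qdot):
    return 0.5 * m * qdot**2 - V(q)

def hamiltonian_from_L(q, qdot):
    p = m * qdot                # p = dL/d(qdot) for this Lagrangian
    return p * qdot - lagrangian(q, qdot), p

for q, qdot in [(0.3, 1.2), (-1.0, 0.5)]:
    H, p = hamiltonian_from_L(q, qdot)
    expected = p**2 / (2 * m) + V(q)
    assert abs(H - expected) < 1e-12
    print(f"q={q:+.2f}, qdot={qdot:+.2f}: H = {H:.6f} = p^2/2m + V(q)")
```

The check fails (H no longer equals T + V) as soon as the Lagrangian picks up velocity-dependent potentials or explicit time dependence, which is exactly the point made in the following conditions.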
Using this relation can be simpler than first calculating the Lagrangian, and then deriving the Hamiltonian from the Lagrangian. However, the relation is not true for all systems. The relation holds true for nonrelativistic systems when all of the following conditions are satisfied where is time, is the number of degrees of freedom of the system, and each is an arbitrary scalar function of . In words, this means that the relation holds true if does not contain time as an explicit variable (it is scleronomic), does not contain generalised velocity as an explicit variable, and each term of is quadratic in generalised velocity. Proof Preliminary to this proof, it is important to address an ambiguity in the related mathematical notation. While a change of variables can be used to equate , it is important to note that . In this case, the right hand side always evaluates to 0. To perform a change of variables inside of a partial derivative, the multivariable chain rule should be used. Hence, to avoid ambiguity, the function arguments of any term inside of a partial derivative should be stated. Additionally, this proof uses the notation to imply that . Application to systems of point masses For a system of point masses, the requirement for to be quadratic in generalised velocity is always satisfied for the case where , which is a requirement for anyway. Conservation of energy If the conditions for are satisfied, then conservation of the Hamiltonian implies conservation of energy. This requires the additional condition that does not contain time as an explicit variable. With respect to the extended Euler-Lagrange formulation (See ), the Rayleigh dissipation function represents energy dissipation by nature. Therefore, energy is not conserved when . This is similar to the velocity dependent potential. In summary, the requirements for to be satisfied for a nonrelativistic system are is a homogeneous quadratic function in Hamiltonian of a charged particle in an electromagnetic field A sufficient illustration of Hamiltonian mechanics is given by the Hamiltonian of a charged particle in an electromagnetic field. In Cartesian coordinates the Lagrangian of a non-relativistic classical particle in an electromagnetic field is (in SI Units): where is the electric charge of the particle, is the electric scalar potential, and the are the components of the magnetic vector potential that may all explicitly depend on and . This Lagrangian, combined with Euler–Lagrange equation, produces the Lorentz force law and is called minimal coupling. The canonical momenta are given by: The Hamiltonian, as the Legendre transformation of the Lagrangian, is therefore: This equation is used frequently in quantum mechanics. Under gauge transformation: where is any scalar function of space and time. The aforementioned Lagrangian, the canonical momenta, and the Hamiltonian transform like: which still produces the same Hamilton's equation: In quantum mechanics, the wave function will also undergo a local U(1) group transformation during the Gauge Transformation, which implies that all physical results must be invariant under local U(1) transformations. Relativistic charged particle in an electromagnetic field The relativistic Lagrangian for a particle (rest mass and charge ) is given by: Thus the particle's canonical momentum is that is, the sum of the kinetic momentum and the potential momentum. 
Solving for the velocity, we get So the Hamiltonian is This results in the force equation (equivalent to the Euler–Lagrange equation) from which one can derive The above derivation makes use of the vector calculus identity: An equivalent expression for the Hamiltonian as function of the relativistic (kinetic) momentum, , is This has the advantage that kinetic momentum can be measured experimentally whereas canonical momentum cannot. Notice that the Hamiltonian (total energy) can be viewed as the sum of the relativistic energy (kinetic+rest), , plus the potential energy, . From symplectic geometry to Hamilton's equations Geometry of Hamiltonian systems The Hamiltonian can induce a symplectic structure on a smooth even-dimensional manifold in several equivalent ways, the best known being the following: As a closed nondegenerate symplectic 2-form ω. According to the Darboux's theorem, in a small neighbourhood around any point on there exist suitable local coordinates (canonical or symplectic coordinates) in which the symplectic form becomes: The form induces a natural isomorphism of the tangent space with the cotangent space: . This is done by mapping a vector to the 1-form , where for all . Due to the bilinearity and non-degeneracy of , and the fact that , the mapping is indeed a linear isomorphism. This isomorphism is natural in that it does not change with change of coordinates on Repeating over all , we end up with an isomorphism between the infinite-dimensional space of smooth vector fields and that of smooth 1-forms. For every and , (In algebraic terms, one would say that the -modules and are isomorphic). If , then, for every fixed , , and . is known as a Hamiltonian vector field. The respective differential equation on is called . Here and is the (time-dependent) value of the vector field at . A Hamiltonian system may be understood as a fiber bundle over time , with the fiber being the position space at time . The Lagrangian is thus a function on the jet bundle over ; taking the fiberwise Legendre transform of the Lagrangian produces a function on the dual bundle over time whose fiber at is the cotangent space , which comes equipped with a natural symplectic form, and this latter function is the Hamiltonian. The correspondence between Lagrangian and Hamiltonian mechanics is achieved with the tautological one-form. Any smooth real-valued function on a symplectic manifold can be used to define a Hamiltonian system. The function is known as "the Hamiltonian" or "the energy function." The symplectic manifold is then called the phase space. The Hamiltonian induces a special vector field on the symplectic manifold, known as the Hamiltonian vector field. The Hamiltonian vector field induces a Hamiltonian flow on the manifold. This is a one-parameter family of transformations of the manifold (the parameter of the curves is commonly called "the time"); in other words, an isotopy of symplectomorphisms, starting with the identity. By Liouville's theorem, each symplectomorphism preserves the volume form on the phase space. The collection of symplectomorphisms induced by the Hamiltonian flow is commonly called "the Hamiltonian mechanics" of the Hamiltonian system. The symplectic structure induces a Poisson bracket. The Poisson bracket gives the space of functions on the manifold the structure of a Lie algebra. If and are smooth functions on then the smooth function is properly defined; it is called a Poisson bracket of functions and and is denoted . 
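In canonical coordinates the Poisson bracket takes the familiar form {f, g} = Σ_i (∂f/∂q_i ∂g/∂p_i − ∂f/∂p_i ∂g/∂q_i), which can be explored symbolically. The sketch below is an illustrative example only; the harmonic-oscillator Hamiltonian and the use of SymPy are assumptions, not taken from the text. It checks the canonical relation {q, p} = 1 and recovers Hamilton's equation dq/dt = {q, H} for one degree of freedom:

```python
import sympy as sp

q, p = sp.symbols('q p')

def poisson_bracket(f, g):
    """Canonical Poisson bracket {f, g} for a single degree of freedom."""
    return sp.diff(f, q) * sp.diff(g, p) - sp.diff(f, p) * sp.diff(g, q)

m, k = sp.symbols('m k', positive=True)
H = p**2 / (2 * m) + k * q**2 / 2          # assumed example: harmonic oscillator

print(poisson_bracket(q, p))               # 1      -- canonical relation {q, p} = 1
print(sp.simplify(poisson_bracket(H, H)))  # 0      -- antisymmetry, so H is conserved
print(poisson_bracket(q, H))               # p/m    -- equals dq/dt, Hamilton's equation
```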
The Poisson bracket has the following properties: bilinearity antisymmetry Leibniz rule: Jacobi identity: non-degeneracy: if the point on is not critical for then a smooth function exists such that . Given a function if there is a probability distribution , then (since the phase space velocity has zero divergence and probability is conserved) its convective derivative can be shown to be zero and so This is called Liouville's theorem. Every smooth function over the symplectic manifold generates a one-parameter family of symplectomorphisms and if , then is conserved and the symplectomorphisms are symmetry transformations. A Hamiltonian may have multiple conserved quantities . If the symplectic manifold has dimension and there are functionally independent conserved quantities which are in involution (i.e., ), then the Hamiltonian is Liouville integrable. The Liouville–Arnold theorem says that, locally, any Liouville integrable Hamiltonian can be transformed via a symplectomorphism into a new Hamiltonian with the conserved quantities as coordinates; the new coordinates are called action–angle coordinates. The transformed Hamiltonian depends only on the , and hence the equations of motion have the simple form for some function . There is an entire field focusing on small deviations from integrable systems governed by the KAM theorem. The integrability of Hamiltonian vector fields is an open question. In general, Hamiltonian systems are chaotic; concepts of measure, completeness, integrability and stability are poorly defined. Riemannian manifolds An important special case consists of those Hamiltonians that are quadratic forms, that is, Hamiltonians that can be written as where is a smoothly varying inner product on the fibers , the cotangent space to the point in the configuration space, sometimes called a cometric. This Hamiltonian consists entirely of the kinetic term. If one considers a Riemannian manifold or a pseudo-Riemannian manifold, the Riemannian metric induces a linear isomorphism between the tangent and cotangent bundles. (See Musical isomorphism). Using this isomorphism, one can define a cometric. (In coordinates, the matrix defining the cometric is the inverse of the matrix defining the metric.) The solutions to the Hamilton–Jacobi equations for this Hamiltonian are then the same as the geodesics on the manifold. In particular, the Hamiltonian flow in this case is the same thing as the geodesic flow. The existence of such solutions, and the completeness of the set of solutions, are discussed in detail in the article on geodesics. See also Geodesics as Hamiltonian flows. Sub-Riemannian manifolds When the cometric is degenerate, then it is not invertible. In this case, one does not have a Riemannian manifold, as one does not have a metric. However, the Hamiltonian still exists. In the case where the cometric is degenerate at every point of the configuration space manifold , so that the rank of the cometric is less than the dimension of the manifold , one has a sub-Riemannian manifold. The Hamiltonian in this case is known as a sub-Riemannian Hamiltonian. Every such Hamiltonian uniquely determines the cometric, and vice versa. This implies that every sub-Riemannian manifold is uniquely determined by its sub-Riemannian Hamiltonian, and that the converse is true: every sub-Riemannian manifold has a unique sub-Riemannian Hamiltonian. The existence of sub-Riemannian geodesics is given by the Chow–Rashevskii theorem. 
The continuous, real-valued Heisenberg group provides a simple example of a sub-Riemannian manifold. For the Heisenberg group, the Hamiltonian is given by is not involved in the Hamiltonian. Poisson algebras Hamiltonian systems can be generalized in various ways. Instead of simply looking at the algebra of smooth functions over a symplectic manifold, Hamiltonian mechanics can be formulated on general commutative unital real Poisson algebras. A state is a continuous linear functional on the Poisson algebra (equipped with some suitable topology) such that for any element of the algebra, maps to a nonnegative real number. A further generalization is given by Nambu dynamics. Generalization to quantum mechanics through Poisson bracket Hamilton's equations above work well for classical mechanics, but not for quantum mechanics, since the differential equations discussed assume that one can specify the exact position and momentum of the particle simultaneously at any point in time. However, the equations can be further generalized to then be extended to apply to quantum mechanics as well as to classical mechanics, through the deformation of the Poisson algebra over and to the algebra of Moyal brackets. Specifically, the more general form of the Hamilton's equation reads where is some function of and , and is the Hamiltonian. To find out the rules for evaluating a Poisson bracket without resorting to differential equations, see Lie algebra; a Poisson bracket is the name for the Lie bracket in a Poisson algebra. These Poisson brackets can then be extended to Moyal brackets comporting to an inequivalent Lie algebra, as proven by Hilbrand J. Groenewold, and thereby describe quantum mechanical diffusion in phase space (See Phase space formulation and Wigner–Weyl transform). This more algebraic approach not only permits ultimately extending probability distributions in phase space to Wigner quasi-probability distributions, but, at the mere Poisson bracket classical setting, also provides more power in helping analyze the relevant conserved quantities in a system. See also Canonical transformation Classical field theory Hamiltonian field theory Hamilton's optico-mechanical analogy Covariant Hamiltonian field theory Classical mechanics Dynamical systems theory Hamiltonian system Hamilton–Jacobi equation Hamilton–Jacobi–Einstein equation Lagrangian mechanics Maxwell's equations Hamiltonian (quantum mechanics) Quantum Hamilton's equations Quantum field theory Hamiltonian optics De Donder–Weyl theory Geometric mechanics Routhian mechanics Nambu mechanics Hamiltonian fluid mechanics Hamiltonian vector field References Further reading External links Classical mechanics Dynamical systems Mathematical physics
Hamiltonian mechanics
[ "Physics", "Mathematics" ]
4,167
[ "Applied mathematics", "Theoretical physics", "Classical mechanics", "Hamiltonian mechanics", "Mechanics", "Mathematical physics", "Dynamical systems" ]
198,579
https://en.wikipedia.org/wiki/Non-Proliferation%20Trust
The Non-Proliferation Trust (NPT) is a U.S. nonprofit organization that, at the beginning of the 21st century, advocated storing 10,000 tons of U.S. nuclear waste in Russia for a fee of $15 billion paid to the Russian government and $250 million paid to a fund for Russian orphans. The group was headed by Admiral Daniel Murphy. This proposal was endorsed by the Russian atomic energy ministry, MinAtom, which estimated that the proposal could eventually generate $150 billion in revenue for Russia. See also Halter Marine Federal Agency on Atomic Energy (Russia) References Energy economics Non-profit organizations based in the United States Radioactive waste Environment of Russia
Non-Proliferation Trust
[ "Physics", "Chemistry", "Technology", "Environmental_science" ]
137
[ "Energy economics", "Environmental social science stubs", "Nuclear chemistry stubs", "Nuclear and atomic physics stubs", "Hazardous waste", "Radioactivity", "Nuclear physics", "Environmental impact of nuclear power", "Environmental social science", "Radioactive waste" ]
198,608
https://en.wikipedia.org/wiki/Molecular%20dynamics
Molecular dynamics (MD) is a computer simulation method for analyzing the physical movements of atoms and molecules. The atoms and molecules are allowed to interact for a fixed period of time, giving a view of the dynamic "evolution" of the system. In the most common version, the trajectories of atoms and molecules are determined by numerically solving Newton's equations of motion for a system of interacting particles, where forces between the particles and their potential energies are often calculated using interatomic potentials or molecular mechanical force fields. The method is applied mostly in chemical physics, materials science, and biophysics. Because molecular systems typically consist of a vast number of particles, it is impossible to determine the properties of such complex systems analytically; MD simulation circumvents this problem by using numerical methods. However, long MD simulations are mathematically ill-conditioned, generating cumulative errors in numerical integration that can be minimized with proper selection of algorithms and parameters, but not eliminated. For systems that obey the ergodic hypothesis, the evolution of one molecular dynamics simulation may be used to determine the macroscopic thermodynamic properties of the system: the time averages of an ergodic system correspond to microcanonical ensemble averages. MD has also been termed "statistical mechanics by numbers" and "Laplace's vision of Newtonian mechanics" of predicting the future by animating nature's forces and allowing insight into molecular motion on an atomic scale. History MD was originally developed in the early 1950s, following earlier successes with Monte Carlo simulationswhich themselves date back to the eighteenth century, in the Buffon's needle problem for examplebut was popularized for statistical mechanics at Los Alamos National Laboratory by Marshall Rosenbluth and Nicholas Metropolis in what is known today as the Metropolis–Hastings algorithm. Interest in the time evolution of N-body systems dates much earlier to the seventeenth century, beginning with Isaac Newton, and continued into the following century largely with a focus on celestial mechanics and issues such as the stability of the solar system. Many of the numerical methods used today were developed during this time period, which predates the use of computers; for example, the most common integration algorithm used today, the Verlet integration algorithm, was used as early as 1791 by Jean Baptiste Joseph Delambre. Numerical calculations with these algorithms can be considered to be MD done "by hand". As early as 1941, integration of the many-body equations of motion was carried out with analog computers. Some undertook the labor-intensive work of modeling atomic motion by constructing physical models, e.g., using macroscopic spheres. The aim was to arrange them in such a way as to replicate the structure of a liquid and use this to examine its behavior. J.D. Bernal describes this process in 1962, writing:... I took a number of rubber balls and stuck them together with rods of a selection of different lengths ranging from 2.75 to 4 inches. I tried to do this in the first place as casually as possible, working in my own office, being interrupted every five minutes or so and not remembering what I had done before the interruption.Following the discovery of microscopic particles and the development of computers, interest expanded beyond the proving ground of gravitational systems to the statistical properties of matter. 
In an attempt to understand the origin of irreversibility, Enrico Fermi proposed in 1953, and published in 1955, the use of the early computer MANIAC I, also at Los Alamos National Laboratory, to solve the time evolution of the equations of motion for a many-body system subject to several choices of force laws. Today, this seminal work is known as the Fermi–Pasta–Ulam–Tsingou problem. The time evolution of the energy from the original work is shown in the figure to the right. In 1957, Berni Alder and Thomas Wainwright used an IBM 704 computer to simulate perfectly elastic collisions between hard spheres. In 1960, in perhaps the first realistic simulation of matter, J.B. Gibson et al. simulated radiation damage of solid copper by using a Born–Mayer type of repulsive interaction along with a cohesive surface force. In 1964, Aneesur Rahman published simulations of liquid argon that used a Lennard-Jones potential; calculations of system properties, such as the coefficient of self-diffusion, compared well with experimental data. Today, the Lennard-Jones potential is still one of the most frequently used intermolecular potentials. It is used for describing simple substances (a.k.a. Lennard-Jonesium) for conceptual and model studies and as a building block in many force fields of real substances. Areas of application and limits First used in theoretical physics, the molecular dynamics method gained popularity in materials science soon afterward, and since the 1970s it has also been commonly used in biochemistry and biophysics. MD is frequently used to refine 3-dimensional structures of proteins and other macromolecules based on experimental constraints from X-ray crystallography or NMR spectroscopy. In physics, MD is used to examine the dynamics of atomic-level phenomena that cannot be observed directly, such as thin film growth and ion subplantation, and to examine the physical properties of nanotechnological devices that have not or cannot yet be created. In biophysics and structural biology, the method is frequently applied to study the motions of macromolecules such as proteins and nucleic acids, which can be useful for interpreting the results of certain biophysical experiments and for modeling interactions with other molecules, as in ligand docking. In principle, MD can be used for ab initio prediction of protein structure by simulating folding of the polypeptide chain from a random coil. The results of MD simulations can be tested through comparison to experiments that measure molecular dynamics, of which a popular method is NMR spectroscopy. MD-derived structure predictions can be tested through community-wide experiments in Critical Assessment of Protein Structure Prediction (CASP), although the method has historically had limited success in this area. Michael Levitt, who shared the Nobel Prize partly for the application of MD to proteins, wrote in 1999 that CASP participants usually did not use the method due to "... a central embarrassment of molecular mechanics, namely that energy minimization or molecular dynamics generally leads to a model that is less like the experimental structure". Improvements in computational resources permitting more and longer MD trajectories, combined with modern improvements in the quality of force field parameters, have yielded some improvements in both structure prediction and homology model refinement, without reaching the point of practical utility in these areas; many identify force field parameters as a key area for further development. 
MD simulation has been reported for pharmacophore development and drug design. For example, Pinto et al. implemented MD simulations of Bcl-xL complexes to calculate average positions of critical amino acids involved in ligand binding. Carlson et al. implemented molecular dynamics simulations to identify compounds that complement a receptor while causing minimal disruption to the conformation and flexibility of the active site. Snapshots of the protein at constant time intervals during the simulation were overlaid to identify conserved binding regions (conserved in at least three out of eleven frames) for pharmacophore development. Spyrakis et al. relied on a workflow of MD simulations, fingerprints for ligands and proteins (FLAP) and linear discriminant analysis (LDA) to identify the best ligand-protein conformations to act as pharmacophore templates based on retrospective ROC analysis of the resulting pharmacophores. In an attempt to ameliorate structure-based drug discovery modeling, vis-à-vis the need for many modeled compounds, Hatmal et al. proposed a combination of MD simulation and ligand-receptor intermolecular contacts analysis to discern critical intermolecular contacts (binding interactions) from redundant ones in a single ligand–protein complex. Critical contacts can then be converted into pharmacophore models that can be used for virtual screening. An important factor is intramolecular hydrogen bonds, which are not explicitly included in modern force fields, but described as Coulomb interactions of atomic point charges. This is a crude approximation because hydrogen bonds have a partially quantum mechanical and chemical nature. Furthermore, electrostatic interactions are usually calculated using the dielectric constant of a vacuum, even though the surrounding aqueous solution has a much higher dielectric constant. Thus, using the macroscopic dielectric constant at short interatomic distances is questionable. Finally, van der Waals interactions in MD are usually described by Lennard-Jones potentials based on the Fritz London theory that is only applicable in a vacuum. However, all types of van der Waals forces are ultimately of electrostatic origin and therefore depend on dielectric properties of the environment. The direct measurement of attraction forces between different materials (as Hamaker constant) shows that "the interaction between hydrocarbons across water is about 10% of that across vacuum". The environment-dependence of van der Waals forces is neglected in standard simulations, but can be included by developing polarizable force fields. Design constraints The design of a molecular dynamics simulation should account for the available computational power. Simulation size (n = number of particles), timestep, and total time duration must be selected so that the calculation can finish within a reasonable time period. However, the simulations should be long enough to be relevant to the time scales of the natural processes being studied. To make statistically valid conclusions from the simulations, the time span simulated should match the kinetics of the natural process. Otherwise, it is analogous to making conclusions about how a human walks when only looking at less than one footstep. Most scientific publications about the dynamics of proteins and DNA use data from simulations spanning nanoseconds (10−9 s) to microseconds (10−6 s). To obtain these simulations, several CPU-days to CPU-years are needed. 
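A rough order-of-magnitude estimate shows where that cost comes from (the numbers are illustrative assumptions, using the femtosecond-scale timestep discussed below rather than figures quoted here):

```latex
N_{\text{steps}} \;=\; \frac{1\,\mu\mathrm{s}}{2\,\mathrm{fs}}
               \;=\; \frac{10^{-6}\,\mathrm{s}}{2\times 10^{-15}\,\mathrm{s}}
               \;=\; 5\times 10^{8}\ \text{force evaluations}.
```

Even at roughly a millisecond of CPU time per force evaluation (again an assumed figure), such a trajectory costs on the order of 5×10^5 seconds, i.e. days of CPU time; slower per-step costs push this into CPU-years.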
Parallel algorithms allow the load to be distributed among CPUs; an example is the spatial or force decomposition algorithm. During a classical MD simulation, the most CPU intensive task is the evaluation of the potential as a function of the particles' internal coordinates. Within that energy evaluation, the most expensive one is the non-bonded or non-covalent part. In big O notation, common molecular dynamics simulations scale by if all pair-wise electrostatic and van der Waals interactions must be accounted for explicitly. This computational cost can be reduced by employing electrostatics methods such as particle mesh Ewald summation ( ), particle-particle-particle mesh (P3M), or good spherical cutoff methods ( ). Another factor that impacts total CPU time needed by a simulation is the size of the integration timestep. This is the time length between evaluations of the potential. The timestep must be chosen small enough to avoid discretization errors (i.e., smaller than the period related to fastest vibrational frequency in the system). Typical timesteps for classical MD are on the order of 1 femtosecond (10−15 s). This value may be extended by using algorithms such as the SHAKE constraint algorithm, which fix the vibrations of the fastest atoms (e.g., hydrogens) into place. Multiple time scale methods have also been developed, which allow extended times between updates of slower long-range forces. For simulating molecules in a solvent, a choice should be made between an explicit and implicit solvent. Explicit solvent particles (such as the TIP3P, SPC/E and SPC-f water models) must be calculated expensively by the force field, while implicit solvents use a mean-field approach. Using an explicit solvent is computationally expensive, requiring inclusion of roughly ten times more particles in the simulation. But the granularity and viscosity of explicit solvent is essential to reproduce certain properties of the solute molecules. This is especially important to reproduce chemical kinetics. In all kinds of molecular dynamics simulations, the simulation box size must be large enough to avoid boundary condition artifacts. Boundary conditions are often treated by choosing fixed values at the edges (which may cause artifacts), or by employing periodic boundary conditions in which one side of the simulation loops back to the opposite side, mimicking a bulk phase (which may cause artifacts too). Microcanonical ensemble (NVE) In the microcanonical ensemble, the system is isolated from changes in moles (N), volume (V), and energy (E). It corresponds to an adiabatic process with no heat exchange. A microcanonical molecular dynamics trajectory may be seen as an exchange of potential and kinetic energy, with total energy being conserved. For a system of N particles with coordinates and velocities , the following pair of first order differential equations may be written in Newton's notation as The potential energy function of the system is a function of the particle coordinates . It is referred to simply as the potential in physics, or the force field in chemistry. The first equation comes from Newton's laws of motion; the force acting on each particle in the system can be calculated as the negative gradient of . For every time step, each particle's position and velocity may be integrated with a symplectic integrator method such as Verlet integration. The time evolution of and is called a trajectory. 
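To make the integration loop concrete, here is a minimal Python sketch of the velocity-Verlet scheme driving a toy Lennard-Jones system. The function names, reduced units, particle positions and timestep are assumptions made for the illustration; there is no cutoff, neighbour list or periodic box, so it is not representative of production MD codes.

```python
import numpy as np

def lennard_jones_forces(x, epsilon=1.0, sigma=1.0):
    """Pairwise Lennard-Jones forces, F_i = -grad_i U, for positions x of shape (N, 3).

    U(r) = 4*epsilon*((sigma/r)**12 - (sigma/r)**6).
    Plain O(N^2) double loop; no cutoff or periodic boundaries (illustration only).
    """
    forces = np.zeros_like(x)
    for i in range(len(x)):
        for j in range(i + 1, len(x)):
            rij = x[i] - x[j]
            r2 = np.dot(rij, rij)
            sr6 = (sigma**2 / r2) ** 3
            # pair force divided by r: 24*eps*(2*(sigma/r)^12 - (sigma/r)^6)/r^2
            fij = 24.0 * epsilon * (2.0 * sr6**2 - sr6) / r2 * rij
            forces[i] += fij
            forces[j] -= fij
    return forces

def velocity_verlet(x, v, masses, dt, n_steps, force=lennard_jones_forces):
    """Integrate Newton's equations with the (symplectic) velocity-Verlet scheme."""
    a = force(x) / masses[:, None]
    for _ in range(n_steps):
        x = x + v * dt + 0.5 * a * dt**2          # positions at t + dt
        a_new = force(x) / masses[:, None]        # forces at the new positions
        v = v + 0.5 * (a + a_new) * dt            # velocities at t + dt
        a = a_new
    return x, v

# Tiny assumed example: three argon-like atoms in reduced (Lennard-Jones) units
x0 = np.array([[0.0, 0.0, 0.0], [1.1, 0.0, 0.0], [0.0, 1.1, 0.0]])
v0 = np.zeros_like(x0)
x, v = velocity_verlet(x0, v0, masses=np.ones(3), dt=1e-3, n_steps=1000)
```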
Given the initial positions (e.g., from theoretical knowledge) and velocities (e.g., randomized Gaussian), we can calculate all future (or past) positions and velocities. One frequent source of confusion is the meaning of temperature in MD. Commonly we have experience with macroscopic temperatures, which involve a huge number of particles, but temperature is a statistical quantity. If there is a large enough number of atoms, statistical temperature can be estimated from the instantaneous temperature, which is found by equating the kinetic energy of the system to nkBT/2, where n is the number of degrees of freedom of the system. A temperature-related phenomenon arises due to the small number of atoms that are used in MD simulations. For example, consider simulating the growth of a copper film starting with a substrate containing 500 atoms and a deposition energy of 100 eV. In the real world, the 100 eV from the deposited atom would rapidly be transported through and shared among a large number of atoms ( or more) with no big change in temperature. When there are only 500 atoms, however, the substrate is almost immediately vaporized by the deposition. Something similar happens in biophysical simulations. The temperature of the system in NVE is naturally raised when macromolecules such as proteins undergo exothermic conformational changes and binding. Canonical ensemble (NVT) In the canonical ensemble, amount of substance (N), volume (V) and temperature (T) are conserved. It is also sometimes called constant temperature molecular dynamics (CTMD). In NVT, the energy of endothermic and exothermic processes is exchanged with a thermostat. A variety of thermostat algorithms are available to add and remove energy from the boundaries of an MD simulation in a more or less realistic way, approximating the canonical ensemble. Popular methods to control temperature include velocity rescaling, the Nosé–Hoover thermostat, Nosé–Hoover chains, the Berendsen thermostat, the Andersen thermostat and Langevin dynamics. The Berendsen thermostat might introduce the flying ice cube effect, which leads to unphysical translations and rotations of the simulated system. It is not trivial to obtain a canonical ensemble distribution of conformations and velocities using these algorithms. How this depends on system size, thermostat choice, thermostat parameters, time step and integrator is the subject of many articles in the field. Isothermal–isobaric (NPT) ensemble In the isothermal–isobaric ensemble, amount of substance (N), pressure (P) and temperature (T) are conserved. In addition to a thermostat, a barostat is needed. It corresponds most closely to laboratory conditions with a flask open to ambient temperature and pressure. In the simulation of biological membranes, isotropic pressure control is not appropriate. For lipid bilayers, pressure control occurs under constant membrane area (NPAT) or constant surface tension "gamma" (NPγT). Generalized ensembles The replica exchange method is a generalized ensemble. It was originally created to deal with the slow dynamics of disordered spin systems. It is also called parallel tempering. The replica exchange MD (REMD) formulation tries to overcome the multiple-minima problem by exchanging the temperature of non-interacting replicas of the system running at several temperatures. Potentials in MD simulations A molecular dynamics simulation requires the definition of a potential function, or a description of the terms by which the particles in the simulation will interact. 
In chemistry and biology this is usually referred to as a force field and in materials physics as an interatomic potential. Potentials may be defined at many levels of physical accuracy; those most commonly used in chemistry are based on molecular mechanics and embody a classical mechanics treatment of particle-particle interactions that can reproduce structural and conformational changes but usually cannot reproduce chemical reactions. The reduction from a fully quantum description to a classical potential entails two main approximations. The first one is the Born–Oppenheimer approximation, which states that the dynamics of electrons are so fast that they can be considered to react instantaneously to the motion of their nuclei. As a consequence, they may be treated separately. The second one treats the nuclei, which are much heavier than electrons, as point particles that follow classical Newtonian dynamics. In classical molecular dynamics, the effect of the electrons is approximated as one potential energy surface, usually representing the ground state. When finer levels of detail are needed, potentials based on quantum mechanics are used; some methods attempt to create hybrid classical/quantum potentials where the bulk of the system is treated classically but a small region is treated as a quantum system, usually undergoing a chemical transformation. Empirical potentials Empirical potentials used in chemistry are frequently called force fields, while those used in materials physics are called interatomic potentials. Most force fields in chemistry are empirical and consist of a summation of bonded forces associated with chemical bonds, bond angles, and bond dihedrals, and non-bonded forces associated with van der Waals forces and electrostatic charge. Empirical potentials represent quantum-mechanical effects in a limited way through ad hoc functional approximations. These potentials contain free parameters such as atomic charge, van der Waals parameters reflecting estimates of atomic radius, and equilibrium bond length, angle, and dihedral; these are obtained by fitting against detailed electronic calculations (quantum chemical simulations) or experimental physical properties such as elastic constants, lattice parameters and spectroscopic measurements. Because of the non-local nature of non-bonded interactions, they involve at least weak interactions between all particles in the system. Its calculation is normally the bottleneck in the speed of MD simulations. To lower the computational cost, force fields employ numerical approximations such as shifted cutoff radii, reaction field algorithms, particle mesh Ewald summation, or the newer particle–particle-particle–mesh (P3M). Chemistry force fields commonly employ preset bonding arrangements (an exception being ab initio dynamics), and thus are unable to model the process of chemical bond breaking and reactions explicitly. On the other hand, many of the potentials used in physics, such as those based on the bond order formalism can describe several different coordinations of a system and bond breaking. Examples of such potentials include the Brenner potential for hydrocarbons and its further developments for the C-Si-H and C-O-H systems. The ReaxFF potential can be considered a fully reactive hybrid between bond order potentials and chemistry force fields. Pair potentials versus many-body potentials The potential functions representing the non-bonded energy are formulated as a sum over interactions between the particles of the system. 
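Schematically, and in the additive case described next, this sum has the form shown below; the Lennard-Jones and Coulomb terms are the standard textbook choices rather than a quotation of any particular force field:

```latex
U_{\text{nonbonded}} \;=\; \sum_{i<j} u(r_{ij}),
\qquad
u(r_{ij}) \;=\; 4\varepsilon_{ij}\!\left[\left(\frac{\sigma_{ij}}{r_{ij}}\right)^{12}
            - \left(\frac{\sigma_{ij}}{r_{ij}}\right)^{6}\right]
            \;+\; \frac{q_i q_j}{4\pi\varepsilon_0\, r_{ij}} .
```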
The simplest choice, employed in many popular force fields, is the "pair potential", in which the total potential energy can be calculated from the sum of energy contributions between pairs of atoms. Therefore, these force fields are also called "additive force fields". An example of such a pair potential is the non-bonded Lennard-Jones potential (also termed the 6–12 potential), used for calculating van der Waals forces. Another example is the Born (ionic) model of the ionic lattice. The first term in the next equation is Coulomb's law for a pair of ions, the second term is the short-range repulsion explained by Pauli's exclusion principle and the final term is the dispersion interaction term. Usually, a simulation only includes the dipolar term, although sometimes the quadrupolar term is also included. When nl = 6, this potential is also called the Coulomb–Buckingham potential. In many-body potentials, the potential energy includes the effects of three or more particles interacting with each other. In simulations with pairwise potentials, global interactions in the system also exist, but they occur only through pairwise terms. In many-body potentials, the potential energy cannot be found by a sum over pairs of atoms, as these interactions are calculated explicitly as a combination of higher-order terms. In the statistical view, the dependency between the variables cannot in general be expressed using only pairwise products of the degrees of freedom. For example, the Tersoff potential, which was originally used to simulate carbon, silicon, and germanium, and has since been used for a wide range of other materials, involves a sum over groups of three atoms, with the angles between the atoms being an important factor in the potential. Other examples are the embedded-atom method (EAM), the EDIP, and the Tight-Binding Second Moment Approximation (TBSMA) potentials, where the electron density of states in the region of an atom is calculated from a sum of contributions from surrounding atoms, and the potential energy contribution is then a function of this sum. Semi-empirical potentials Semi-empirical potentials make use of the matrix representation from quantum mechanics. However, the values of the matrix elements are found through empirical formulae that estimate the degree of overlap of specific atomic orbitals. The matrix is then diagonalized to determine the occupancy of the different atomic orbitals, and empirical formulae are used once again to determine the energy contributions of the orbitals. There are a wide variety of semi-empirical potentials, termed tight-binding potentials, which vary according to the atoms being modeled. Polarizable potentials Most classical force fields implicitly include the effect of polarizability, e.g., by scaling up the partial charges obtained from quantum chemical calculations. These partial charges are stationary with respect to the mass of the atom. But molecular dynamics simulations can explicitly model polarizability with the introduction of induced dipoles through different methods, such as Drude particles or fluctuating charges. This allows for a dynamic redistribution of charge between atoms which responds to the local chemical environment. For many years, polarizable MD simulations have been touted as the next generation. For homogenous liquids such as water, increased accuracy has been achieved through the inclusion of polarizability. Some promising results have also been achieved for proteins. 
However, it is still uncertain how to best approximate polarizability in a simulation. The point becomes more important when a particle experiences different environments during its simulation trajectory, e.g. translocation of a drug through a cell membrane. Potentials in ab initio methods In classical molecular dynamics, one potential energy surface (usually the ground state) is represented in the force field. This is a consequence of the Born–Oppenheimer approximation. In excited states, chemical reactions or when a more accurate representation is needed, electronic behavior can be obtained from first principles using a quantum mechanical method, such as density functional theory. This is named Ab Initio Molecular Dynamics (AIMD). Due to the cost of treating the electronic degrees of freedom, the computational burden of these simulations is far higher than classical molecular dynamics. For this reason, AIMD is typically limited to smaller systems and shorter times. Ab initio quantum mechanical and chemical methods may be used to calculate the potential energy of a system on the fly, as needed for conformations in a trajectory. This calculation is usually made in the close neighborhood of the reaction coordinate. Although various approximations may be used, these are based on theoretical considerations, not on empirical fitting. Ab initio calculations produce a vast amount of information that is not available from empirical methods, such as density of electronic states or other electronic properties. A significant advantage of using ab initio methods is the ability to study reactions that involve breaking or formation of covalent bonds, which correspond to multiple electronic states. Moreover, ab initio methods also allow recovering effects beyond the Born–Oppenheimer approximation using approaches like mixed quantum-classical dynamics. Hybrid QM/MM QM (quantum-mechanical) methods are very powerful. However, they are computationally expensive, while the MM (classical or molecular mechanics) methods are fast but suffer from several limits (require extensive parameterization; energy estimates obtained are not very accurate; cannot be used to simulate reactions where covalent bonds are broken/formed; and are limited in their abilities for providing accurate details regarding the chemical environment). A new class of method has emerged that combines the good points of QM (accuracy) and MM (speed) calculations. These methods are termed mixed or hybrid quantum-mechanical and molecular mechanics methods (hybrid QM/MM). The most important advantage of hybrid QM/MM method is the speed. The cost of doing classical molecular dynamics (MM) in the most straightforward case scales O(n2), where n is the number of atoms in the system. This is mainly due to electrostatic interactions term (every particle interacts with every other particle). However, use of cutoff radius, periodic pair-list updates and more recently the variations of the particle-mesh Ewald's (PME) method has reduced this to between O(n) to O(n2). In other words, if a system with twice as many atoms is simulated then it would take between two and four times as much computing power. On the other hand, the simplest ab initio calculations typically scale O(n3) or worse (restricted Hartree–Fock calculations have been suggested to scale ~O(n2.7)). To overcome the limit, a small part of the system is treated quantum-mechanically (typically active-site of an enzyme) and the remaining system is treated classically. 
In more sophisticated implementations, QM/MM methods exist to treat both light nuclei susceptible to quantum effects (such as hydrogens) and electronic states. This allows generating hydrogen wave-functions (similar to electronic wave-functions). This methodology has been useful in investigating phenomena such as hydrogen tunneling. One example where QM/MM methods have provided new discoveries is the calculation of hydride transfer in the enzyme liver alcohol dehydrogenase. In this case, quantum tunneling is important for the hydrogen, as it determines the reaction rate. Coarse-graining and reduced representations At the other end of the detail scale are coarse-grained and lattice models. Instead of explicitly representing every atom of the system, one uses "pseudo-atoms" to represent groups of atoms. MD simulations on very large systems may require such large computer resources that they cannot easily be studied by traditional all-atom methods. Similarly, simulations of processes on long timescales (beyond about 1 microsecond) are prohibitively expensive, because they require so many time steps. In these cases, one can sometimes tackle the problem by using reduced representations, which are also called coarse-grained models. Examples for coarse graining (CG) methods are discontinuous molecular dynamics (CG-DMD) and Go-models. Coarse-graining is done sometimes taking larger pseudo-atoms. Such united atom approximations have been used in MD simulations of biological membranes. Implementation of such approach on systems where electrical properties are of interest can be challenging owing to the difficulty of using a proper charge distribution on the pseudo-atoms. The aliphatic tails of lipids are represented by a few pseudo-atoms by gathering 2 to 4 methylene groups into each pseudo-atom. The parameterization of these very coarse-grained models must be done empirically, by matching the behavior of the model to appropriate experimental data or all-atom simulations. Ideally, these parameters should account for both enthalpic and entropic contributions to free energy in an implicit way. When coarse-graining is done at higher levels, the accuracy of the dynamic description may be less reliable. But very coarse-grained models have been used successfully to examine a wide range of questions in structural biology, liquid crystal organization, and polymer glasses. Examples of applications of coarse-graining: protein folding and protein structure prediction studies are often carried out using one, or a few, pseudo-atoms per amino acid; liquid crystal phase transitions have been examined in confined geometries and/or during flow using the Gay-Berne potential, which describes anisotropic species; Polymer glasses during deformation have been studied using simple harmonic or FENE springs to connect spheres described by the Lennard-Jones potential; DNA supercoiling has been investigated using 1–3 pseudo-atoms per basepair, and at even lower resolution; Packaging of double-helical DNA into bacteriophage has been investigated with models where one pseudo-atom represents one turn (about 10 basepairs) of the double helix; RNA structure in the ribosome and other large systems has been modeled with one pseudo-atom per nucleotide. The simplest form of coarse-graining is the united atom (sometimes called extended atom) and was used in most early MD simulations of proteins, lipids, and nucleic acids. 
For example, instead of treating all four atoms of a CH3 methyl group explicitly (or all three atoms of CH2 methylene group), one represents the whole group with one pseudo-atom. It must, of course, be properly parameterized so that its van der Waals interactions with other groups have the proper distance-dependence. Similar considerations apply to the bonds, angles, and torsions in which the pseudo-atom participates. In this kind of united atom representation, one typically eliminates all explicit hydrogen atoms except those that have the capability to participate in hydrogen bonds (polar hydrogens). An example of this is the CHARMM 19 force-field. The polar hydrogens are usually retained in the model, because proper treatment of hydrogen bonds requires a reasonably accurate description of the directionality and the electrostatic interactions between the donor and acceptor groups. A hydroxyl group, for example, can be both a hydrogen bond donor, and a hydrogen bond acceptor, and it would be impossible to treat this with one OH pseudo-atom. About half the atoms in a protein or nucleic acid are non-polar hydrogens, so the use of united atoms can provide a substantial savings in computer time. Machine Learning Force Fields Machine Learning Force Fields] (MLFFs) represent one approach to modeling interatomic interactions in molecular dynamics simulations. MLFFs can achieve accuracy close to that of ab initio methods. Once trained, MLFFs are much faster than direct quantum mechanical calculations. MLFFs address the limitations of traditional force fields by learning complex potential energy surfaces directly from high-level quantum mechanical data. Several software packages now support MLFFs, including VASP and open-source libraries like DeePMD-kit and SchNetPack. Incorporating solvent effects In many simulations of a solute-solvent system the main focus is on the behavior of the solute with little interest of the solvent behavior particularly in those solvent molecules residing in regions far from the solute molecule. Solvents may influence the dynamic behavior of solutes via random collisions and by imposing a frictional drag on the motion of the solute through the solvent. The use of non-rectangular periodic boundary conditions, stochastic boundaries and solvent shells can all help reduce the number of solvent molecules required and enable a larger proportion of the computing time to be spent instead on simulating the solute. It is also possible to incorporate the effects of a solvent without needing any explicit solvent molecules present. One example of this approach is to use a potential mean force (PMF) which describes how the free energy changes as a particular coordinate is varied. The free energy change described by PMF contains the averaged effects of the solvent. Without incorporating the effects of solvent simulations of macromolecules (such as proteins) may yield unrealistic behavior and even small molecules may adopt more compact conformations due to favourable van der Waals forces and electrostatic interactions which would be dampened in the presence of a solvent. Long-range forces A long range interaction is an interaction in which the spatial interaction falls off no faster than where is the dimensionality of the system. Examples include charge-charge interactions between ions and dipole-dipole interactions between molecules. 
Modelling these forces presents quite a challenge as they are significant over a distance which may be larger than half the box length with simulations of many thousands of particles. Though one solution would be to significantly increase the size of the box length, this brute force approach is less than ideal as the simulation would become computationally very expensive. Spherically truncating the potential is also out of the question as unrealistic behaviour may be observed when the distance is close to the cut off distance. Steered molecular dynamics (SMD) Steered molecular dynamics (SMD) simulations, or force probe simulations, apply forces to a protein in order to manipulate its structure by pulling it along desired degrees of freedom. These experiments can be used to reveal structural changes in a protein at the atomic level. SMD is often used to simulate events such as mechanical unfolding or stretching. There are two typical protocols of SMD: one in which pulling velocity is held constant, and one in which applied force is constant. Typically, part of the studied system (e.g., an atom in a protein) is restrained by a harmonic potential. Forces are then applied to specific atoms at either a constant velocity or a constant force. Umbrella sampling is used to move the system along the desired reaction coordinate by varying, for example, the forces, distances, and angles manipulated in the simulation. Through umbrella sampling, all of the system's configurations—both high-energy and low-energy—are adequately sampled. Then, each configuration's change in free energy can be calculated as the potential of mean force. A popular method of computing PMF is through the weighted histogram analysis method (WHAM), which analyzes a series of umbrella sampling simulations. A lot of important applications of SMD are in the field of drug discovery and biomolecular sciences. For e.g. SMD was used to investigate the stability of Alzheimer's protofibrils, to study the protein ligand interaction in cyclin-dependent kinase 5 and even to show the effect of electric field on thrombin (protein) and aptamer (nucleotide) complex among many other interesting studies. Examples of applications Molecular dynamics is used in many fields of science. First MD simulation of a simplified biological folding process was published in 1975. Its simulation published in Nature paved the way for the vast area of modern computational protein-folding. First MD simulation of a biological process was published in 1976. Its simulation published in Nature paved the way for understanding protein motion as essential in function and not just accessory. MD is the standard method to treat collision cascades in the heat spike regime, i.e., the effects that energetic neutron and ion irradiation have on solids and solid surfaces. The following biophysical examples illustrate notable efforts to produce simulations of a systems of very large size (a complete virus) or very long simulation times (up to 1.112 milliseconds): MD simulation of the full satellite tobacco mosaic virus (STMV) (2006, Size: 1 million atoms, Simulation time: 50 ns, program: NAMD) This virus is a small, icosahedral plant virus that worsens the symptoms of infection by Tobacco Mosaic Virus (TMV). Molecular dynamics simulations were used to probe the mechanisms of viral assembly. The entire STMV particle consists of 60 identical copies of one protein that make up the viral capsid (coating), and a 1063 nucleotide single stranded RNA genome. 
One key finding is that the capsid is very unstable when there is no RNA inside. The simulation would take one 2006 desktop computer around 35 years to complete. It was thus done in many processors in parallel with continuous communication between them. Folding simulations of the Villin Headpiece in all-atom detail (2006, Size: 20,000 atoms; Simulation time: 500 μs= 500,000 ns, Program: Folding@home) This simulation was run in 200,000 CPU's of participating personal computers around the world. These computers had the Folding@home program installed, a large-scale distributed computing effort coordinated by Vijay Pande at Stanford University. The kinetic properties of the Villin Headpiece protein were probed by using many independent, short trajectories run by CPU's without continuous real-time communication. One method employed was the Pfold value analysis, which measures the probability of folding before unfolding of a specific starting conformation. Pfold gives information about transition state structures and an ordering of conformations along the folding pathway. Each trajectory in a Pfold calculation can be relatively short, but many independent trajectories are needed. Long continuous-trajectory simulations have been performed on Anton, a massively parallel supercomputer designed and built around custom application-specific integrated circuits (ASICs) and interconnects by D. E. Shaw Research. The longest published result of a simulation performed using Anton is a 1.112-millisecond simulation of NTL9 at 355 K; a second, independent 1.073-millisecond simulation of this configuration was also performed (and many other simulations of over 250 μs continuous chemical time). In How Fast-Folding Proteins Fold, researchers Kresten Lindorff-Larsen, Stefano Piana, Ron O. Dror, and David E. Shaw discuss "the results of atomic-level molecular dynamics simulations, over periods ranging between 100 μs and 1 ms, that reveal a set of common principles underlying the folding of 12 structurally diverse proteins." Examination of these diverse long trajectories, enabled by specialized, custom hardware, allow them to conclude that "In most cases, folding follows a single dominant route in which elements of the native structure appear in an order highly correlated with their propensity to form in the unfolded state." In a separate study, Anton was used to conduct a 1.013-millisecond simulation of the native-state dynamics of bovine pancreatic trypsin inhibitor (BPTI) at 300 K. Another important application of MD method benefits from its ability of 3-dimensional characterization and analysis of microstructural evolution at atomic scale. MD simulations are used in characterization of grain size evolution, for example, when describing wear and friction of nanocrystalline Al and Al(Zr) materials. Dislocations evolution and grain size evolution are analyzed during the friction process in this simulation. Since MD method provided the full information of the microstructure, the grain size evolution was calculated in 3D using the Polyhedral Template Matching, Grain Segmentation, and Graph clustering methods. In such simulation, MD method provided an accurate measurement of grain size. Making use of these information, the actual grain structures were extracted, measured, and presented. Compared to the traditional method of using SEM with a single 2-dimensional slice of the material, MD provides a 3-dimensional and accurate way to characterize the microstructural evolution at atomic scale. 
Molecular dynamics algorithms Screened Coulomb potentials implicit solvent model Integrators Symplectic integrator Verlet–Stoermer integration Runge–Kutta integration Beeman's algorithm Constraint algorithms (for constrained systems) Short-range interaction algorithms Cell lists Verlet list Bonded interactions Long-range interaction algorithms Ewald summation Particle mesh Ewald summation (PME) Particle–particle-particle–mesh (P3M) Shifted force method Parallelization strategies Domain decomposition method (Distribution of system data for parallel computing) Ab-initio molecular dynamics Car–Parrinello molecular dynamics Specialized hardware for MD simulations Anton – A specialized, massively parallel supercomputer designed to execute MD simulations MDGRAPE – A special purpose system built for molecular dynamics simulations, especially protein structure prediction Graphics card as a hardware for MD simulations See also Molecular modeling Computational chemistry Force field (chemistry) Comparison of force field implementations Monte Carlo method Molecular design software Molecular mechanics Multiscale Green's function Car–Parrinello method Comparison of software for molecular mechanics modeling Quantum chemistry Discrete element method Comparison of nucleic acid simulation software Molecule editor Mixed quantum-classical dynamics References General references External links The GPUGRID.net Project (GPUGRID.net) The Blue Gene Project (IBM) JawBreakers.org Materials modelling and computer simulation codes A few tips on molecular dynamics Movie of MD simulation of water (YouTube) Computational chemistry Molecular modelling Simulation
Molecular dynamics
[ "Physics", "Chemistry" ]
8,579
[ "Molecular physics", "Computational physics", "Molecular dynamics", "Molecular modelling", "Theoretical chemistry", "Computational chemistry" ]
17,863,612
https://en.wikipedia.org/wiki/Shoelace%20formula
The shoelace formula, also known as Gauss's area formula and the surveyor's formula, is a mathematical algorithm to determine the area of a simple polygon whose vertices are described by their Cartesian coordinates in the plane. It is called the shoelace formula because of the constant cross-multiplying for the coordinates making up the polygon, like threading shoelaces. It has applications in surveying and forestry, among other areas.

The formula was described by Albrecht Ludwig Friedrich Meister (1724–1788) in 1769 and is based on the trapezoid formula which was described by Carl Friedrich Gauss and C.G.J. Jacobi. The triangle form of the area formula can be considered to be a special case of Green's theorem.

The area formula can also be applied to self-overlapping polygons since the meaning of area is still clear even though self-overlapping polygons are not generally simple. Furthermore, a self-overlapping polygon can have multiple "interpretations", but the shoelace formula can be used to show that the polygon's area is the same regardless of the interpretation.

The polygon area formulas
Given: a planar simple polygon with a positively oriented (counterclockwise) sequence of points P_i = (x_i, y_i), i = 1, ..., n, in a Cartesian coordinate system. For the simplicity of the formulas below it is convenient to set P_0 = P_n and P_{n+1} = P_1.

The formulas: The area A of the given polygon can be expressed by a variety of formulas, which are connected by simple operations (see below). If the polygon is negatively oriented, then the result of the formulas is negative. In any case |A| is the sought area of the polygon.

Trapezoid formula
The trapezoid formula sums up a sequence of oriented trapezoid areas, each with an edge \overline{P_i P_{i+1}} of the polygon as one of its four edges (see below):

A = \frac{1}{2} \sum_{i=1}^{n} (y_i + y_{i+1})(x_i - x_{i+1})

Triangle formula
The triangle formula sums up the oriented areas A_i of the triangles O P_i P_{i+1}, where O is the origin:

A = \frac{1}{2} \sum_{i=1}^{n} (x_i y_{i+1} - x_{i+1} y_i) = \frac{1}{2} \sum_{i=1}^{n} \begin{vmatrix} x_i & x_{i+1} \\ y_i & y_{i+1} \end{vmatrix}

Shoelace formula
The triangle formula is the basis of the popular shoelace formula, which is a scheme that optimizes the calculation of the sum of the 2×2 determinants by hand:

2A = \begin{vmatrix} x_1 & x_2 \\ y_1 & y_2 \end{vmatrix} + \begin{vmatrix} x_2 & x_3 \\ y_2 & y_3 \end{vmatrix} + \cdots + \begin{vmatrix} x_n & x_1 \\ y_n & y_1 \end{vmatrix}

Sometimes this determinant is transposed (written vertically, in two columns), as shown in the diagram.

Other formulas
A particularly concise statement of the formula can be given in terms of the exterior algebra. If v_1, ..., v_n are the consecutive vertices of the polygon (regarded as vectors in the Cartesian plane) then

A = \frac{1}{2} \left| \sum_{i=1}^{n} v_i \wedge v_{i+1} \right|

Example
For a pentagon with five given vertices, the shoelace scheme yields the area directly. The advantage of the shoelace form: only 6 columns have to be written down for calculating the 5 determinants, which together involve 10 columns.

Deriving the formulas

Trapezoid formula
The edge \overline{P_i P_{i+1}} determines the trapezoid with vertices (x_i, y_i), (x_{i+1}, y_{i+1}), (x_{i+1}, 0), (x_i, 0) and the oriented area

A_i = \frac{1}{2} (y_i + y_{i+1})(x_i - x_{i+1}).

In case of x_i < x_{i+1} (with the polygon lying above the x-axis, as in the diagram) the number A_i is negative, otherwise positive, or A_i = 0 if x_i = x_{i+1}. In the diagram the orientation of an edge is shown by an arrow. The color shows the sign of A_i: red means A_i < 0, green indicates A_i > 0. In the first case the trapezoid is called negative, in the second case positive. The negative trapezoids delete those parts of positive trapezoids which are outside the polygon. In the case of a convex polygon (in the diagram the upper example) this is obvious: the polygon area is the sum of the areas of the positive trapezoids (green edges) minus the areas of the negative trapezoids (red edges). In the non-convex case one has to consider the situation more carefully (see diagram). In any case the result is

A = \sum_{i=1}^{n} A_i = \frac{1}{2} \sum_{i=1}^{n} (y_i + y_{i+1})(x_i - x_{i+1}).

Triangle form, determinant form
Eliminating the brackets and using \sum_{i=1}^{n} (x_i y_i - x_{i+1} y_{i+1}) = 0 (see convention above), one gets the determinant form of the area formula:

A = \frac{1}{2} \sum_{i=1}^{n} (x_i y_{i+1} - x_{i+1} y_i) = \frac{1}{2} \sum_{i=1}^{n} \begin{vmatrix} x_i & x_{i+1} \\ y_i & y_{i+1} \end{vmatrix}

Because one half of the i-th determinant is the oriented area of the triangle O P_i P_{i+1}, this version of the area formula is called the triangle form.
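As a concrete illustration of the shoelace scheme derived above, the following sketch computes the signed area of a polygon from an ordered list of vertices. The pentagon coordinates at the end are made-up illustrative values, not the vertices of the article's worked example.

```python
# Minimal sketch of the shoelace formula for the signed area of a simple polygon.
# Vertices must be given in order; a counterclockwise ordering gives a positive result.

def shoelace_area(vertices):
    """Signed area of a polygon given as a list of (x, y) tuples."""
    n = len(vertices)
    total = 0.0
    for i in range(n):
        x_i, y_i = vertices[i]
        x_next, y_next = vertices[(i + 1) % n]  # wrap around: P_{n+1} = P_1
        total += x_i * y_next - x_next * y_i    # 2x2 determinant of consecutive vertices
    return total / 2.0

# Illustrative pentagon (coordinates chosen only for this example); prints 16.0
pentagon = [(0, 0), (4, 0), (4, 3), (2, 5), (0, 3)]
print(shoelace_area(pentagon))
```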
Other formulas
Starting from the triangle form and using \sum_{i=1}^{n} x_{i+1} y_i = \sum_{i=1}^{n} x_i y_{i-1} (see convention above) one gets

2A = \sum_{i=1}^{n} x_i y_{i+1} - \sum_{i=1}^{n} x_i y_{i-1}.

Combining both sums and factoring out x_i leads to

A = \frac{1}{2} \sum_{i=1}^{n} x_i (y_{i+1} - y_{i-1}).

With the identity \sum_{i=1}^{n} x_i y_{i+1} = \sum_{i=1}^{n} x_{i-1} y_i one gets

A = \frac{1}{2} \sum_{i=1}^{n} y_i (x_{i-1} - x_{i+1}).

Alternatively, this is a special case of Green's theorem with one function set to 0 and the other set to x, such that the area is the integral of x dy along the boundary.

Manipulations of a polygon
A(P_1, ..., P_n) indicates the oriented area of the simple polygon P_1, ..., P_n (see above). A is positive/negative if the orientation of the polygon is positive/negative. From the triangle form of the area formula or the diagram below one observes the following for 1 < i < n (in case of i ∈ {1, n} one should first shift the indices). Hence:
Moving P_i affects only the terms A_{i-1} and A_i and leaves the others unchanged. There is no change of the area if P_i is moved parallel to \overline{P_{i-1} P_{i+1}}.
Purging P_i changes the total area by -A(P_{i-1}, P_i, P_{i+1}), which can be positive or negative.
Inserting a point Q between P_i and P_{i+1} changes the total area by A(P_i, Q, P_{i+1}), which can be positive or negative.

Example: With the above notation of the shoelace scheme one gets for the oriented area of
the blue polygon:
the green triangle:
the red triangle:
the blue polygon minus a point:
the blue polygon plus a point inserted between two vertices:
One checks that the following equations hold:

Generalization
In higher dimensions the area of a polygon can be calculated from its vertices using the exterior algebra form of the shoelace formula (e.g. in 3d, the sum of successive cross products):

\vec{A} = \frac{1}{2} \sum_{i=1}^{n} \vec{v}_i \times \vec{v}_{i+1}

(when the vertices are not coplanar this computes the vector area enclosed by the loop, i.e. the projected area or "shadow" in the plane in which it is greatest).

This formulation can also be generalized to calculate the volume of an n-dimensional polytope from the coordinates of its vertices, or more accurately, from its hypersurface mesh. For example, the volume of a 3-dimensional polyhedron can be found by triangulating its surface mesh and summing the signed volumes of the tetrahedra formed by each surface triangle and the origin:

V = \frac{1}{6} \left| \sum_{F} (\vec{a}_F \times \vec{b}_F) \cdot \vec{c}_F \right|

where the sum is over the faces F with vertices \vec{a}_F, \vec{b}_F, \vec{c}_F, and care has to be taken to order the vertices consistently (all clockwise or anticlockwise viewed from outside the polyhedron). Alternatively, an expression in terms of the face areas and surface normals may be derived using the divergence theorem (see Polyhedron § Volume).

See also
Planimeter
Polygon area
Pick's theorem
Heron's formula

External links
Mathologer video about Gauss's shoelace formula

References

Area Geometric algorithms Surveying
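As a sketch of the three-dimensional generalization described in the Generalization section above, the following computes the volume of a closed, triangulated polyhedron by summing the signed volumes of tetrahedra formed by each surface triangle and the origin. The tetrahedron mesh at the end is a made-up test case (expected volume 1/6), not data from the article.

```python
# Minimal sketch: volume of a closed, triangulated polyhedron via signed tetrahedra.
# Each face (a, b, c) must be oriented consistently (counterclockwise seen from outside).

def polyhedron_volume(vertices, faces):
    """vertices: list of (x, y, z) tuples; faces: list of (i, j, k) index triples."""
    volume = 0.0
    for i, j, k in faces:
        ax, ay, az = vertices[i]
        bx, by, bz = vertices[j]
        cx, cy, cz = vertices[k]
        # Signed volume of the tetrahedron (origin, a, b, c): det[a; b; c] / 6
        volume += (ax * (by * cz - bz * cy)
                   - ay * (bx * cz - bz * cx)
                   + az * (bx * cy - by * cx)) / 6.0
    return abs(volume)

# Illustrative test: a unit right tetrahedron, expected volume 1/6 ≈ 0.1667
verts = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1)]
faces = [(0, 2, 1), (0, 1, 3), (0, 3, 2), (1, 2, 3)]
print(polyhedron_volume(verts, faces))
```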
Shoelace formula
[ "Physics", "Mathematics", "Engineering" ]
1,246
[ "Scalar physical quantities", "Physical quantities", "Quantity", "Size", "Surveying", "Civil engineering", "Wikipedia categories named after physical quantities", "Area" ]
17,870,536
https://en.wikipedia.org/wiki/Pure%20type%20system
In the branches of mathematical logic known as proof theory and type theory, a pure type system (PTS), previously known as a generalized type system (GTS), is a form of typed lambda calculus that allows an arbitrary number of sorts and dependencies between any of these. The framework can be seen as a generalisation of Barendregt's lambda cube, in the sense that all corners of the cube can be represented as instances of a PTS with just two sorts. In fact, Barendregt (1991) framed his cube in this setting. Pure type systems may obscure the distinction between types and terms and collapse the type hierarchy, as is the case with the calculus of constructions, but this is not generally the case; e.g. the simply typed lambda calculus allows only terms to depend on terms.

Pure type systems were independently introduced by Stefano Berardi (1988) and Jan Terlouw (1989). Barendregt discussed them at length in his subsequent papers. In his PhD thesis, Berardi defined a cube of constructive logics akin to the lambda cube (these specifications are non-dependent). A modification of this cube was later called the L-cube by Herman Geuvers, who in his PhD thesis extended the Curry–Howard correspondence to this setting. Based on these ideas, G. Barthe and others defined classical pure type systems (CPTS) by adding a double negation operator. Similarly, in 1998, Tijn Borghuis introduced modal pure type systems (MPTS). Roorda has discussed the application of pure type systems to functional programming, and Roorda and Jeuring have proposed a programming language based on pure type systems.

The systems from the lambda cube are all known to be strongly normalizing. Pure type systems in general need not be; for example, System U from Girard's paradox is not. (Roughly speaking, Girard found pure type systems in which one can express the sentence "the types form a type".) Furthermore, all known examples of pure type systems that are not strongly normalizing are not even (weakly) normalizing: they contain expressions that do not have normal forms, just like the untyped lambda calculus. It is a major open problem in the field whether this is always the case, i.e. whether a (weakly) normalizing PTS always has the strong normalization property. This is known as the Barendregt–Geuvers–Klop conjecture (named after Henk Barendregt, Herman Geuvers, and Jan Willem Klop).

Definition
A pure type system is defined by a triple (S, A, R), where S is the set of sorts, A ⊆ S × S is the set of axioms, and R ⊆ S × S × S is the set of rules. Typing in pure type systems is determined by the following rules, where s, s₁, s₂, s₃ range over sorts and x is assumed fresh where it is added to the context:

Axiom:        ⊢ s₁ : s₂   if (s₁, s₂) ∈ A
Start:        if Γ ⊢ A : s, then Γ, x:A ⊢ x : A
Weakening:    if Γ ⊢ A : B and Γ ⊢ C : s, then Γ, x:C ⊢ A : B
Product:      if Γ ⊢ A : s₁ and Γ, x:A ⊢ B : s₂, then Γ ⊢ (Πx:A. B) : s₃   if (s₁, s₂, s₃) ∈ R
Application:  if Γ ⊢ F : (Πx:A. B) and Γ ⊢ a : A, then Γ ⊢ F a : B[x := a]
Abstraction:  if Γ, x:A ⊢ b : B and Γ ⊢ (Πx:A. B) : s, then Γ ⊢ (λx:A. b) : (Πx:A. B)
Conversion:   if Γ ⊢ A : B and Γ ⊢ B′ : s and B =β B′, then Γ ⊢ A : B′

Implementations
The following programming languages have pure type systems:
SAGE
Yarrow
Henk 2000

See also
System U – an example of an inconsistent PTS
λμ-calculus – uses a different approach to control than CPTS

Notes

References

Further reading

External links

Proof theory Type theory Lambda calculus
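To make the triple (S, A, R) concrete, the following is a small, hypothetical Python sketch of a PTS specification, instantiated with the standard specifications of the simply typed lambda calculus and the calculus of constructions (two corners of the lambda cube). The class and variable names are illustrative and do not come from any of the implementations listed above.

```python
# Hypothetical sketch: representing a pure type system specification (S, A, R).
from dataclasses import dataclass

@dataclass(frozen=True)
class PTS:
    sorts: frozenset    # S: the sorts
    axioms: frozenset   # A ⊆ S × S: pairs (s1, s2) meaning  ⊢ s1 : s2
    rules: frozenset    # R ⊆ S × S × S: triples governing product formation

STAR, BOX = "*", "□"

# Simply typed lambda calculus (λ→): only terms may depend on terms.
simply_typed = PTS(
    sorts=frozenset({STAR, BOX}),
    axioms=frozenset({(STAR, BOX)}),
    rules=frozenset({(STAR, STAR, STAR)}),
)

# Calculus of constructions: all four dependencies of the lambda cube.
calculus_of_constructions = PTS(
    sorts=frozenset({STAR, BOX}),
    axioms=frozenset({(STAR, BOX)}),
    rules=frozenset({
        (STAR, STAR, STAR),  # terms depending on terms
        (BOX, STAR, STAR),   # terms depending on types (polymorphism)
        (STAR, BOX, BOX),    # types depending on terms (dependent types)
        (BOX, BOX, BOX),     # types depending on types (type operators)
    }),
)
```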
Pure type system
[ "Mathematics" ]
630
[ "Mathematical structures", "Proof theory", "Mathematical logic", "Mathematical objects", "Type theory" ]