Dataset columns:
id: int64 (values 39 – 79M)
url: string (lengths 32 – 168)
text: string (lengths 7 – 145k)
source: string (lengths 2 – 105)
categories: list (lengths 1 – 6)
token_count: int64 (values 3 – 32.2k)
subcategories: list (lengths 0 – 27)
23,921,022
https://en.wikipedia.org/wiki/C5H5NS
The molecular formula C5H5NS (molar mass: 111.16 g/mol, exact mass: 111.0143 u) may refer to: 2-Mercaptopyridine Thiazepines 1,2-Thiazepine 1,3-Thiazepine 1,4-Thiazepine Molecular formulas
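The molar and exact masses quoted above follow directly from standard atomic-mass values; below is a minimal Python sketch of that arithmetic (the atomic-mass constants and the helper function are illustrative, not taken from the article):

```python
# Recompute the quoted masses for C5H5NS from standard atomic-mass values.
# The numeric constants below are standard reference values, not figures from the article.
average_mass = {"C": 12.011, "H": 1.008, "N": 14.007, "S": 32.06}        # g/mol
monoisotopic = {"C": 12.0, "H": 1.00783, "N": 14.00307, "S": 31.97207}   # u

def formula_mass(composition, masses):
    """Sum element masses weighted by the atom counts in `composition`."""
    return sum(count * masses[element] for element, count in composition.items())

c5h5ns = {"C": 5, "H": 5, "N": 1, "S": 1}
print(round(formula_mass(c5h5ns, average_mass), 2))   # ~111.16 g/mol (molar mass)
print(round(formula_mass(c5h5ns, monoisotopic), 4))   # ~111.0143 u (exact mass)
```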
C5H5NS
[ "Physics", "Chemistry" ]
88
[ "Molecules", "Set index articles on molecular formulas", "Isomerism", "Molecular formulas", "Matter" ]
23,922,168
https://en.wikipedia.org/wiki/Telepresence%20technology
Telepresence technology is a term used by the National Oceanic and Atmospheric Administration (NOAA) to refer to the combination of satellite technology with the Internet to broadcast information, including real-time video from cameras on its remotely operated vehicle (ROV) on Okeanos Explorer. The ROV operates in a deep-sea environment. Data from the ROV is transmitted to a land-based hub, which then sends it to scientists and to the public. This effort of the Okeanos Explorer has been compared to the lunar landing. The telepresence technology used by NOAA includes the following: deep water mapping, to a depth of 6,000 m science-oriented ROV operations real-time satellite transmission of data. The Okeanos Explorer is designed as an educational tool that can be followed on Twitter. Notes Educational technology Oceanography Telepresence
Telepresence technology
[ "Physics", "Environmental_science" ]
183
[ "Oceanography", "Hydrology", "Applied and interdisciplinary physics" ]
23,922,461
https://en.wikipedia.org/wiki/Materials%20Science%20Laboratory
The Materials Science Laboratory (MSL) of the European Space Agency is a payload on board the International Space Station for materials science experiments in low gravity. It is installed in NASA's first Materials Science Research Rack which is placed in the Destiny laboratory on board the ISS. Its purpose is to process material samples in different ways: directional solidification of metals and alloys, crystal growth of semi-conducting materials, thermo-physical properties and diffusion experiments of alloys and glass-forming materials, and investigations on polymers and ceramics at the liquid-solid phase transition. MSL was built for ESA by EADS Astrium in Friedrichshafen, Germany. It is operated and monitored by the Microgravity User Support Center (MUSC) of the German Aerospace Center (DLR) in Cologne, Germany. Mission summary MSL was launched with Space Shuttle Discovery on its STS-128 mission at the end of August 2009. It was transferred from the Multi-Purpose Logistics Module to the Destiny Laboratory shortly after the shuttle docked at the International Space Station some two days after launch. After that the commissioning activities started to check out first the functionality of the Materials Science Research Rack and MSL inside MSRR. The commissioning included the processing of the first two samples which took place at the beginning of November. After bringing those two samples back to ground for analysis by the scientists the rest of the samples from batch 1 will be processed in early 2010. Core facility The Materials Science Laboratory (MSL) facility is the contribution of the European Space Agency to NASA's MSRR-1. It occupies one half of an International Standard Payload Rack. The MSL consists of a Core Facility, together with associated support sub-systems. The Core Facility consists mainly of a vacuum-tight stainless steel cylinder (Process Chamber) capable of accommodating different individual Furnace Inserts (FIs), within which sample processing is carried out. The processing chamber provides an accurately controlled processing environment and measurement of microgravity levels. It can house several different Furnace Inserts. During the first batch of experiments the Low Gradient Furnace (LGF) is installed. Another furnace, the Solidification and Quenching Furnace (SQF) is already produced and waiting on ground for future operations. The FI can be moved with a dedicated drive mechanism, to process each sample according to requirements from the scientists. Processing normally takes place under vacuum. The Core Facility supports FIs with up to eight heating elements, and provides the mechanical, thermal and electrical infrastructure necessary to handle the FIs, the Sample Cartridge Assembly (SCA), together with any associated experiment-dedicated electronics that may be required. A FI is an arrangement of heating elements, isolating zones and cooling zones contained in a thermal insulation assembly. On the outer envelope of this assembly is a water-cooled metal jacket forming the mechanical interface to the Core Facility. The major characteristics of the two produced Furnace Inserts are: Low Gradient Furnace (LGF) The LGF is designed mainly for Bridgman crystal growth of semiconductor materials. It consists of two heated cavities separated by an adiabatic zone. This assembly can establish low and precisely controlled gradients between two very stable temperature levels. 
Solidification and Quenching Furnace (SQF) The SQF is designed mainly for metallurgical research, with the option of quenching the solidification interface at the end of processing by quickly displacing the cooling zone. It consists of a heated cavity and a water-cooled cooling zone separated by an adiabatic zone. It can establish medium to steep temperature gradients along the experiment sample. For creating large gradients, a Liquid Metal Ring enhances the thermal coupling between the SCA and the cooling zone. Sample cartridge assembly The samples to be processed are contained in experiment cartridges, the SCAs, that consist of a leak-tight tube, crucible, sensors for process control, sample probe and cartridge foot (i.e. the mechanical and electrical interface to the process chamber). The MSL safety concept requires that experiment samples containing toxic compounds are contained in SCAs that support the detection of potential leaks. The volume between the experiment sample and the cartridge tube is filled with a pre-defined quantity of krypton, allowing leak detection by mass spectrometry. However the first batch of experiments does not contain any toxic substances. Up to 12 scientific thermocouples provide the sample's temperature profile and allow differential thermal analysis. Experiments Materials Science Laboratory - Columnar-to-Equiaxed Transition in Solidification Processing (CETSOL) and Microstructure Formation in Casting of Technical Alloys under Diffusive and Magnetically Controlled Convective Conditions (MICAST) are two investigations which will examine different growth patterns and evolution of microstructures during crystallization of metallic alloys in microgravity. MICAST studies microstructure formation during casting of technical alloys under diffusive and magnetically controlled convective conditions. The experimental results together with parametric studies using numerical simulations, will be used to optimize industrial casting processes. MICAST identifies and controls experimentally the fluid-flow patterns that affect microstructure evolution during casting processes, and to develop analytical and advanced numerical models. The microgravity environment of the International Space Station is of special importance to this project because only there are all gravity-induced convections eliminated and well-defined conditions for solidification prevail that can be disturbed by artificial fluid flow being under full control of the experimenters. Design solutions that make it possible to improve casting processes and especially aluminium alloys with well-defined properties will be provided. MICAST studies the influence of pure diffusive and convective conditions on aluminium-silicon (AlSi) and aluminium-silicon-iron (AlSiFe) cast alloys on the microstructure evolution during directional solidification with and without rotating magnetic field. The major objective of CETSOL is to improve and validate the modelling of Columnar-Equiaxed Transition (CET) and of the grain microstructure in solidification processing. This aims to give industry confidence in the reliability of the numerical tools introduced in their integrated numerical models of casting, and their relationship. To achieve this goal, intensive deepening of the quantitative characterization of the basic physical phenomena that, from the microscopic to the macroscopic scales, govern microstructure formation and CET will be pursued. 
CET occurs during columnar growth when new grains grow ahead of the columnar front in the undercooled liquid. Under certain conditions, these grains can stop the columnar growth and then the solidification microstructure becomes equiaxed. Experiments have to take place on the ISS due to the long-duration required to solidify samples with the objective to study the CET. Indeed, the length scale of the grain structure when columnar growth takes place is of the order of the casting scale rather than the microstructure scale. This is due to the fact that, to a first approximation, it is the heat flow that controls the transition rather than the solute flow. Experimental programs are being carried out on aluminium-nickel and aluminium-silicon alloys. Related publications Schaefer D, Henderson R. Concept for Materials Science Research Facility. 38th Aerospace Sciences Meeting and Exhibit. Reno, NV. Jan 12 -15, ;AIAA-1998-259. 1998 Cobb SD, Higgins DB, Kitchens L. First Materials Science Research Facility Rack Capabilities and Design Features. IAF abstracts, 34th COSPAR Scientific Assembly, The Second World Space Congress. ;J-6-07. 2002 Carswell W, Kroeger F, Hammond M. QMI: a furnace for metals and alloys processing on the International Space Station. Proceedings of the 2003 IEEE Aerospace Conference. ;1:1-74. 2003 Pettigrew PJ, Kitchen L, Darby C, Cobb SD, Lehoczky S. Design features and capabilities of the First Materials Science Research Rack (MSRR-1). Proceedings of the 2003 IEEE Aerospace Conference. ;1:55-63. 2003 See also Scientific research on the ISS Gallery References External links MSL Website of DLR ESA's MSL website Science facilities on the International Space Station Destiny (ISS module) Materials science
Materials Science Laboratory
[ "Physics", "Materials_science", "Engineering" ]
1,687
[ "Applied and interdisciplinary physics", "Materials science", "nan" ]
23,922,614
https://en.wikipedia.org/wiki/Primordial%20black%20hole
In cosmology, primordial black holes (PBHs) are hypothetical black holes that formed soon after the Big Bang. In the inflationary era and early radiation-dominated universe, extremely dense pockets of subatomic matter may have been tightly packed to the point of gravitational collapse, creating primordial black holes without the supernova compression typically needed to make black holes today. Because the creation of primordial black holes would pre-date the first stars, they are not limited to the narrow mass range of stellar black holes. In 1966, Yakov Zeldovich and Igor Novikov first proposed the existence of such black holes, while the first in-depth study was conducted by Stephen Hawking in 1971. However, their existence remains hypothetical. In September 2022, primordial black holes were proposed by some researchers to explain the unexpected very large early galaxies discovered by the James Webb Space Telescope (JWST). PBHs have long been considered possibly important if not nearly exclusive components of dark matter, the latter perspective having been strengthened by both LIGO/Virgo interferometer gravitational wave and JWST observations. Early constraints on PBHs as dark matter usually assumed most black holes would have similar or identical ("monochromatic") mass, which was disproven by LIGO/Virgo results, and further suggestions that the actual black hole mass distribution is broadly platykurtic were evident from JWST observations of early large galaxies. Recent analyses agree, suggesting a broad mass distribution with a mode around one solar mass. Many PBHs may have the mass of an asteroid but the size of a hydrogen atom and be travelling at enormous speeds, with one likely being within the solar system at any given time. Most likely, such PBHs would pass right through a star "like a bullet", without any significant effects on the star. However, the ones traveling slowly would have a chance of being captured by the star. Stephen Hawking proposed that our Sun may harbor such a PBH. History Depending on the model, primordial black holes could have initial masses ranging from (the so-called Planck relics) to more than thousands of solar masses. However, primordial black holes originally having masses lower than would not have survived to the present due to Hawking radiation, which causes complete evaporation in a time much shorter than the age of the Universe. Primordial black holes are non-baryonic, and as such are plausible dark matter candidates. Primordial black holes are also good candidates for being the seeds of the supermassive black holes at the center of massive galaxies, as well as of intermediate-mass black holes. Primordial black holes belong to the class of massive compact halo objects (MACHOs). They are naturally a good dark matter candidate: they are (nearly) collision-less and stable (if sufficiently massive), they have non-relativistic velocities, and they form very early in the history of the Universe (typically less than one second after the Big Bang). Nevertheless, critics maintain that tight limits on their abundance have been set up from various astrophysical and cosmological observations, which would exclude that they contribute significantly to dark matter over most of the plausible mass range. However, new research has provided for the possibility again, whereby these black holes would sit in clusters with a 30-solar-mass primordial black hole at the center. 
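The "mass of an asteroid but the size of a hydrogen atom" comparison above follows from the Schwarzschild radius formula r_s = 2GM/c²; here is a minimal Python sketch of that check (the example mass of 5×10^16 kg is an illustrative asteroid-scale value, not a figure from the article):

```python
# Schwarzschild radius r_s = 2*G*M/c^2 for an asteroid-mass primordial black hole.
# The chosen mass is an illustrative value, not one quoted in the article.
G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8     # speed of light, m/s

def schwarzschild_radius(mass_kg):
    """Return the Schwarzschild radius in metres for the given mass."""
    return 2.0 * G * mass_kg / c**2

m_asteroid = 5e16                      # kg, roughly the mass of a ~30 km rocky asteroid
r_s = schwarzschild_radius(m_asteroid)
print(f"r_s = {r_s:.2e} m")            # ~7e-11 m, comparable to a hydrogen atom (~1e-10 m)
```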
In March 2016, one month after the announcement of the detection by Advanced LIGO/VIRGO of gravitational waves emitted by the merging of two 30-solar-mass black holes (about ), three groups of researchers proposed independently that the detected black holes had a primordial origin. Two of the groups found that the merging rates inferred by LIGO are consistent with a scenario in which all the dark matter is made of primordial black holes, if a non-negligible fraction of them are somehow clustered within halos such as faint dwarf galaxies or globular clusters, as expected by the standard theory of cosmic structure formation. The third group claimed that these merging rates are incompatible with an all-dark-matter scenario and that primordial black holes could only contribute less than one percent of the total dark matter. The unexpectedly large mass of the black holes detected by LIGO has strongly revived interest in primordial black holes with masses in the range of 1 to 100 solar masses. It is still debated whether this range is excluded or not by other observations, such as the absence of micro-lensing of stars, the cosmic microwave background anisotropies, the size of faint dwarf galaxies, and the absence of correlation between X-ray and radio sources toward the galactic center. In May 2016, Alexander Kashlinsky suggested that the observed spatial correlations in the unresolved gamma-ray and X-ray background radiations could be due to primordial black holes with similar masses, if their abundance is comparable to that of dark matter. In August 2019, a study was published opening up the possibility of making up all dark matter with asteroid-mass primordial black holes (3.5 × 10⁻¹⁷ – 4 × 10⁻¹² solar masses, or 7 × 10¹³ – 8 × 10¹⁸ kg). In September 2019, a report by James Unwin and Jakub Scholtz proposed the possibility of a primordial black hole (PBH) with a mass of 5–15 Earth masses, about the diameter of a tennis ball, existing in the extended Kuiper Belt to explain the orbital anomalies that are theorized to be the result of a ninth planet in the solar system. In October 2019, Derek Inman and Yacine Ali-Haïmoud published an article in which they discovered that the nonlinear velocities that arise from structure formation are too small to significantly affect the constraints that arise from CMB anisotropies. In September 2021, the NANOGrav collaboration announced that they had found a low-frequency signal that could be attributed to gravitational waves and potentially could be associated with PBHs. In September 2022, primordial black holes were used to explain the unexpected very large early (high redshift) galaxies discovered by the James Webb Space Telescope. On 26 November 2023, evidence, for the first time, of an overmassive black hole galaxy (O.B.G.), the result of "heavy black hole seed formation from direct collapse", an alternative way of producing a black hole other than the collapse of a dead star, was reported. This discovery was found in studies of UHZ1, a very early galaxy containing a quasar, by the Chandra X-ray Observatory and James Webb Space Telescope. In 2024, a review by Bernard Carr and colleagues concluded that PBHs formed in the quantum chromodynamics (QCD) epoch prior to 10⁻⁵ seconds after the Big Bang, resulting in a broadly platykurtic mass distribution today, "with a number of distinct bumps, the most prominent one being at around one solar mass." 
Formation Primordial black holes could have formed in the very early Universe (less than one second after the Big Bang) during the inflationary era, or in the very early radiation-dominated era. The essential ingredient for the formation of a primordial black hole is a fluctuation in the density of the Universe, inducing its gravitational collapse. One typically requires density contrasts δρ/ρ (where ρ is the density of the Universe) to form a black hole. Production mechanisms There are several mechanisms able to produce such inhomogeneities in the context of cosmic inflation (in hybrid inflation models). Some examples include: Axion inflation Axion inflation is a theoretical model in which the axion acts as an inflaton field. Because of the period at which it is created, the field oscillates about the minimum of its potential. These oscillations are responsible for the energy density fluctuations in the early universe. For a review of production mechanisms arising from various models of inflation, see Ref. Reheating Reheating is the transitory process between the inflationary and the hot, dense, radiation-dominated period. During this time the inflaton field decays into other particles. These particles begin to interact in order to reach thermal equilibrium. However, if this process is incomplete it creates density fluctuations, and if these are big enough they could be responsible for the formation of PBHs. Cosmological phase transitions Cosmological phase transitions may cause inhomogeneities in different ways depending on the specific details of each transition. For example, one mechanism is concerned with the collapse of overdense regions that arise from these phase transitions, while another mechanism involves highly energetic particles that are produced in these phase transitions and then go through gravitational collapse, forming PBHs. Implications Dark matter problem The dark matter problem, proposed in 1933 by Swiss-American astronomer Fritz Zwicky, refers to the fact that scientists still do not know what form dark matter takes. PBHs could address this in several ways. First, if PBHs accounted for all or a significant amount of the dark matter in the universe, this could explain the gravitational effects seen in galaxies and galactic clusters. Secondly, PBHs have different proposed production mechanisms. Unlike WIMPs, they can emit gravitational waves that interact with regular matter. Finally, the discovery of PBHs could explain some of the observed gravitational lensing effects that couldn't arise from ordinary matter. While evidence that primordial black holes may constitute dark matter is inconclusive as of 2023, researchers such as Bernard Carr and others are strong proponents. Galaxy formation Since primordial black holes do not necessarily have to be small (they can have any size), they may have contributed to the formation of galaxies, such as those that formed earlier than expected. Cosmological domain wall problem The cosmological domain wall problem, proposed in 1974 by Soviet physicist Yakov Zeldovich, discussed the formation of domain walls during phase transitions of the early universe and what could arise from their large energy densities. PBHs could serve as a solution to this problem in various ways. One explanation could be that PBHs can prevent the formation of domain walls because they exert gravitational forces on the surrounding matter, making it clump and theoretically preventing the formation of said walls. 
Another explanation could be that PBHs could cause domain walls to decay; if these were formed in the early universe before PBHs, then due to gravitational interactions they could eventually collapse into PBHs. Finally, a third explanation could be that PBHs do not violate the observational constraints; if PBHs in the 10¹²–10¹³ kg mass range were to be detected then these would have the right density to make up all dark matter in the universe without violating constraints, thus the domain wall problem wouldn't arise. Cosmological monopole problem The cosmological monopole problem, also proposed by Yakov Zeldovich in the late 1970s, concerns the apparent absence of magnetic monopoles today. PBHs can also serve as a solution to this problem. To start, if magnetic monopoles did exist in the early universe these could have gravitationally interacted with PBHs and been absorbed, thus explaining their absence. Another explanation due to PBHs could be that PBHs would have exerted gravitational forces on matter, causing it to clump and diluting the density of magnetic monopoles. String theory General relativity predicts the smallest primordial black holes would have evaporated by now, but if there were a fourth spatial dimension – as predicted by string theory – it would affect how gravity acts on small scales and "slow down the evaporation quite substantially". In essence, the energy stored in the fourth spatial dimension as a stationary wave would bestow a significant rest mass to the object when regarded in the conventional four-dimensional space-time. This could mean there are several thousand primordial black holes in our galaxy. To test this theory, scientists will use the Fermi Gamma-ray Space Telescope, which was put into orbit by NASA on June 11, 2008. If they observe specific small interference patterns within gamma-ray bursts, it could be the first indirect evidence for primordial black holes and string theory. Observational limits and detection strategies A variety of observations have been interpreted to place limits on the abundance and mass of primordial black holes: Lifetime, Hawking radiation and gamma-rays: One way to detect primordial black holes, or to constrain their mass and abundance, is by their Hawking radiation. Stephen Hawking theorized in 1974 that large numbers of such smaller primordial black holes might exist in the Milky Way's halo region. All black holes are theorized to emit Hawking radiation at a rate inversely proportional to their mass. Since this emission further decreases their mass, black holes with very small mass would experience runaway evaporation, creating a burst of radiation at the final phase, equivalent to a hydrogen bomb yielding millions of megatons of explosive force. A regular black hole (of about 3 solar masses) cannot lose all of its mass within the current age of the universe (it would take about 10⁶⁹ years to do so, even without any matter falling in). However, since primordial black holes are not formed by stellar core collapse, they may be of any size. A black hole with a mass of about 10¹¹ kg would have a lifetime about equal to the age of the universe. If such low-mass black holes were created in sufficient number in the Big Bang, we should be able to observe explosions from some of those that are relatively nearby in our own Milky Way galaxy. NASA's Fermi Gamma-ray Space Telescope satellite, launched in June 2008, was designed in part to search for such evaporating primordial black holes. 
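The lifetime figures above (a ~10¹¹ kg black hole surviving roughly the age of the universe, a ~3-solar-mass hole lasting ~10⁶⁹ years) can be reproduced with the standard semiclassical evaporation-time scaling t ≈ 5120 π G² M³ / (ħ c⁴); the following rough Python sketch uses that textbook formula, which is an assumption on my part and not quoted in the article:

```python
import math

# Semiclassical Hawking evaporation time, t = 5120*pi*G^2*M^3 / (hbar*c^4).
# This scaling and the constants are standard textbook values, not figures from the article.
G, c, hbar = 6.674e-11, 2.998e8, 1.055e-34
YEAR = 3.156e7  # seconds per year

def evaporation_time_years(mass_kg):
    """Return the approximate Hawking evaporation time in years."""
    t_seconds = 5120 * math.pi * G**2 * mass_kg**3 / (hbar * c**4)
    return t_seconds / YEAR

print(f"{evaporation_time_years(2e11):.2e} yr")   # ~2e10 yr, of order the age of the universe
print(f"{evaporation_time_years(6e30):.2e} yr")   # ~3 solar masses: ~6e68 yr, i.e. ~1e69 yr
```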
Fermi data set the limit that less than one percent of dark matter could be made of primordial black holes with masses up to 10¹³ kg. Evaporating primordial black holes would also have had an impact on Big Bang nucleosynthesis, changing the abundances of light elements in the Universe. However, if theoretical Hawking radiation does not actually exist, such primordial black holes would be extremely difficult, if not impossible, to detect in space due to their small size and lack of large gravitational influence. Temperature anisotropies in the cosmic microwave background: Accretion of matter onto primordial black holes in the early Universe should lead to energy injection in the medium that affects the recombination history of the Universe. This effect induces signatures in the statistical distribution of the cosmic microwave background (CMB) anisotropies. The Planck observations of the CMB exclude that primordial black holes with masses in the range 100–10⁴ solar masses contribute importantly to the dark matter, at least in the simplest conservative model. It is still debated whether the constraints are stronger or weaker in more realistic or complex scenarios. Gamma-ray signatures from annihilating dark matter: If the dark matter in the Universe is in the form of weakly interacting massive particles or WIMPs, primordial black holes would accrete a halo of WIMPs around them in the early universe. The annihilation of WIMPs in the halo leads to a signal in the gamma-ray spectrum which is potentially detectable by dedicated instruments such as the Fermi Gamma-ray Space Telescope. In the future, new limits will be set by various observations: The Square Kilometre Array (SKA) radio telescope will probe the effects of primordial black holes on the reionization history of the Universe, due to energy injection into the intergalactic medium, induced by matter accretion onto primordial black holes. LIGO, VIRGO and future gravitational wave detectors will detect new black hole merging events, from which one could reconstruct the mass distribution of primordial black holes. These detectors could allow distinguishing unambiguously between primordial and stellar origins if merging events involving black holes with a mass lower than 1.4 solar masses are detected. Another way would be to measure the large orbital eccentricity of primordial black hole binaries. Gravitational wave detectors, such as the Laser Interferometer Space Antenna (LISA) and pulsar timing arrays, will also probe the stochastic background of gravitational waves emitted by primordial black hole binaries when they are still orbiting relatively far from each other. New detections of faint dwarf galaxies, and observations of their central star clusters, could be used to test the hypothesis that these dark matter-dominated structures contain primordial black holes in abundance. Monitoring star positions and velocities within the Milky Way could be used to detect the influence of a nearby primordial black hole. It has been suggested that a small black hole passing through the Earth would produce a detectable acoustic signal. Because of its tiny diameter and large mass as compared to a nucleon, as well as its relatively high speed, a primordial black hole would simply transit Earth virtually unimpeded with only a few impacts on nucleons, exiting the planet with no ill effects. Another way to detect primordial black holes could be by watching for ripples on the surfaces of stars. 
If the black hole passed through a star, its density would cause observable vibrations. Monitoring quasars at microwave wavelengths and detecting the wave-optics features of gravitational microlensing by primordial black holes. Observing deviations in the motion of Solar System objects, such as the planet Mars, which could arise from the rapid transit of a primordial black hole through the inner Solar System. Facilities able to provide PBH measurement None of these facilities is focused on the direct detection of PBHs, since they remain a theoretical phenomenon, but the information collected in each respective experiment provides secondary data which can help provide insight and constraints on the nature of PBHs. GW-detectors LIGO/VIRGO – These detectors already place important constraints on PBHs. However, they are always searching for new, unexpected signals; if they detect a black hole in a mass range that does not correspond to stellar evolution theory, it could serve as evidence for PBHs. Cosmic Explorer/Einstein Telescope – Both of these projects serve as the next generation of LIGO/VIRGO; they would increase sensitivity around the 10–100 Hz band and would allow probing PBH information at higher redshifts. NANOGrav – This collaboration detected a stochastic signal but it is not yet a certified gravitational wave signal since quadrupolar correlations have not been detected. But, should this be confirmed, it could serve as evidence for sub-solar mass PBHs. Laser Interferometer Space Antenna (LISA) – Like any GW detector, LISA has great potential to detect PBHs. The uniqueness of LISA lies in its ability to detect extreme mass ratio inspirals when low mass black holes merge with massive objects. Due to its sensitivity it will also allow for the detection and confirmation of the stochastic NANOGrav signal. AEDGE Atomic Experiment for Dark Matter and Gravity Exploration in Space – This proposed mid-range gravitational wave experiment is unique in its ability to detect intermediate mass ratio mergers like those theorized during early supermassive black hole assembly; detecting these would serve as evidence for PBHs. Space telescopes Nancy Grace Roman Space Telescope (WFIRST) – As a space telescope, WFIRST will have the capacity to detect, or at least place constraints on, PBHs through different types of lensing, one of which is astrometric lensing. When an object passes in front of a known light source, such as a star, it slightly shifts its position (on the order of microarcseconds), and this is known as astrometric lensing. Sky Surveys Vera C. Rubin Observatory (LSST) – This will provide the capability of directly measuring the mass function of compact objects by microlensing. It will be able to observe both low- and high-mass objects, thus placing constraints on both sides of the spectrum. LSST will also have the ability to detect kilonovae that lack gravitational wave signals, which is related to the existence of PBHs. Very Large Arrays ngVLA – the next generation Very Large Array will be able to improve GW bounds by an order of magnitude over the current constraints placed by NANOGrav. This increased sensitivity will be able to confirm the nature of the GW signal from NANOGrav. It will also be able to discriminate a PBH explanation from other sources. 
Fast Radio Burst observatories Experiments like these obtain large numbers of fast radio bursts and enable statistical measures of their lensing, which would allow searches for PBH lenses. Some examples include: Canadian Hydrogen Intensity Mapping Experiment (CHIME) Hydrogen Intensity and Real-time Analysis eXperiment (HIRAX) CHORD (Successor to CHIME) MeV Gamma-Ray Telescopes Since the MeV gamma-ray band has yet to be explored, proposed experiments could place tighter constraints on the abundance of PBHs in the asteroid-mass range. Some examples of the proposed telescopes include: AdEPT AMEGO All-Sky ASTROGAM GECCO GRAMS MAST PANGU GeV and TeV Gamma-Ray Observatories Wide field of view survey observatory PBHs in the mass range of ~5×10⁻¹⁹ solar masses would produce TeV gamma-rays due to evaporation. Since these would occur in isotropic bursts across the sky, wide-field survey observatories would be ideal for searching for these; a few examples could be: FGST’s LAT High Altitude Water Cherenkov Experiment (HAWC) Southern Wide-field Gamma-ray Observatory (SWGO) Atmospheric Cherenkov telescopes Although these have a narrow field of view, they have great sensitivity for TeV cosmic-rays and thus can provide upper limits on the burst rate density. Some of these telescopes are: Very Energetic Radiation Imaging Telescope Array System (VERITAS) Major Atmospheric Gamma Imaging Cherenkov Telescopes (MAGIC) High Energy Stereoscopic System (HESS) Cherenkov Telescope Array (CTA) Difference from direct collapse black holes A direct collapse black hole is the result of the collapse of unusually dense and large regions of gas, after the radiation-dominated era, while primordial black holes would have resulted from the direct collapse of energy, ionized matter, or both, during the inflationary or radiation-dominated eras. See also Primordial gravitational wave Primordial fluctuations References Stephen W. Hawking, Commun. Math. Phys. 43 (1975) 199: Original article proposing existence of radiation D. Page, Phys. Rev. D13 (1976) 198: First detailed studies of the evaporation mechanism B. J. Carr & S. W. Hawking, Mon. Not. Roy. Astron. Soc. 168 (1974) 399: Describes links between primordial black holes and the early universe A. Barrau et al., Astron. Astrophys. 388 (2002) 676, Astron. Astrophys. 398 (2003) 403, Astrophys. J. 630 (2005) 1015: Experimental searches for primordial black holes due to the emitted antimatter A. Barrau & G. Boudoul, Review talk given at the International Conference on Theoretical Physics TH2002: Cosmology with primordial black holes A. Barrau & J. Grain, Phys. Lett. B 584 (2004) 114: Searches for new physics (quantum gravity) with primordial black holes P. Kanti, Int. J. Mod. Phys. A19 (2004) 4899: Evaporating black holes and extra-dimensions Bird, Simeon; Albert, Andrea; Dawson, Will; Ali-Haïmoud, Yacine; Coogan, Adam; Drlica-Wagner, Alex; Feng, Qi; Inman, Derek; Inomata, Keisuke; Kovetz, Ely; Kusenko, Alexander; Lehmann, Benjamin V.; Muñoz, Julian B.; Singh, Rajeev; Takhistov, Volodymyr; Tsai, Yu-Dai. 2022. Snowmass2021 Cosmic Frontier White Paper: Primordial Black Hole Dark Matter Inman, Derek; Ali-Haïmoud, Yacine. 2019. Early structure formation in primordial black hole cosmologies. Lincoln, Don. 2022. Is dark matter real? Astronomy’s multi-decade mystery Vaskonen, Ville; Veermäe, Hardi. 2021. Did NANOGrav See a Signal from Primordial Black Hole Formation? Caputo, Andrea. 2019. Radiative axion inflation. 
Allahverdi, Rouzbeh; Brandenberger, Robert; Cyr-Racine, Francis-Yan; Mazumdar, Anupam. 2010. Reheating in Inflationary Cosmology: Theory and Applications. Mazumdar, Anupam; White Graham. 2019. Review of cosmic phase transitions: their significance and experimental signatures. External link Black holes Dark matter
Primordial black hole
[ "Physics", "Astronomy" ]
5,220
[ "Dark matter", "Physical phenomena", "Black holes", "Unsolved problems in astronomy", "Physical quantities", "Concepts in astronomy", "Unsolved problems in physics", "Astrophysics", "Density", "Exotic matter", "Stellar phenomena", "Astronomical objects", "Physics beyond the Standard Model", ...
23,923,990
https://en.wikipedia.org/wiki/Multiple-scale%20analysis
In mathematics and physics, multiple-scale analysis (also called the method of multiple scales) comprises techniques used to construct uniformly valid approximations to the solutions of perturbation problems, both for small as well as large values of the independent variables. This is done by introducing fast-scale and slow-scale variables for an independent variable, and subsequently treating these variables, fast and slow, as if they are independent. In the solution process of the perturbation problem thereafter, the resulting additional freedom – introduced by the new independent variables – is used to remove (unwanted) secular terms. The latter puts constraints on the approximate solution, which are called solvability conditions. Mathematics research from about the 1980s proposes that coordinate transforms and invariant manifolds provide a sounder support for multiscale modelling (for example, see center manifold and slow manifold). Example: undamped Duffing equation Differential equation and energy conservation As an example for the method of multiple-scale analysis, consider the undamped and unforced Duffing equation: d²y/dt² + y + εy³ = 0, with y(0) = 1 and dy/dt(0) = 0, which is a second-order ordinary differential equation describing a nonlinear oscillator. A solution y(t) is sought for small values of the (positive) nonlinearity parameter 0 < ε ≪ 1. The undamped Duffing equation is known to be a Hamiltonian system: dp/dt = −∂H/∂q, dq/dt = +∂H/∂p, with q = y(t) and p = dy/dt. Consequently, the Hamiltonian H(p, q) = (1/2)p² + (1/2)q² + (1/4)εq⁴ is a conserved quantity, a constant, equal to H = 1/2 + (1/4)ε for the given initial conditions. This implies that both y and dy/dt have to be bounded: |y| ≤ 1 and |dy/dt| ≤ √(1 + ε/2). Straightforward perturbation-series solution A regular perturbation-series approach to the problem proceeds by writing y(t) = y0(t) + ε y1(t) + ⋯ and substituting this into the undamped Duffing equation. Matching powers of ε gives the system of equations d²y0/dt² + y0 = 0 and d²y1/dt² + y1 = −y0³. Solving these subject to the initial conditions yields y(t) = cos t + ε [ (1/32)(cos 3t − cos t) − (3/8) t sin t ] + ⋯. Note that the last term between the square braces is secular: it grows without bound for large |t|. In particular, for t = O(ε⁻¹) this term is O(1) and has the same order of magnitude as the leading-order term. Because the terms have become disordered, the series is no longer an asymptotic expansion of the solution. Method of multiple scales To construct a solution that is valid beyond t = O(ε⁻¹), the method of multiple-scale analysis is used. Introduce the slow scale t1 = εt and assume the solution y(t) is a perturbation-series solution dependent both on t and t1, treated as: y(t) = Y0(t, t1) + ε Y1(t, t1) + ⋯. So: dy/dt = ∂Y0/∂t + ε (∂Y0/∂t1 + ∂Y1/∂t) + ⋯, using dt1/dt = ε. Similarly: d²y/dt² = ∂²Y0/∂t² + ε (2 ∂²Y0/∂t∂t1 + ∂²Y1/∂t²) + ⋯. Then the zeroth- and first-order problems of the multiple-scales perturbation series for the Duffing equation become: ∂²Y0/∂t² + Y0 = 0 and ∂²Y1/∂t² + Y1 = −Y0³ − 2 ∂²Y0/∂t∂t1. Solution The zeroth-order problem has the general solution: Y0(t, t1) = A(t1) e^(+it) + A*(t1) e^(−it), with A(t1) a complex-valued amplitude to the zeroth-order solution Y0(t, t1) and i² = −1. Now, in the first-order problem the forcing in the right hand side of the differential equation is (−3 A² A* − 2i dA/dt1) e^(+it) − A³ e^(+3it) + c.c., where c.c. denotes the complex conjugate of the preceding terms. The occurrence of secular terms can be prevented by imposing on the – yet unknown – amplitude A(t1) the solvability condition dA/dt1 = (3/2) i A² A*. The solution to the solvability condition, also satisfying the initial conditions y(0) = 1 and dy/dt(0) = 0, is: A(t1) = (1/2) exp((3/8) i t1). As a result, the approximate solution by the multiple-scales analysis is y(t) = cos((1 + (3/8)ε) t) + O(ε), using t1 = εt and valid for εt = O(1). This agrees with the nonlinear frequency changes found by employing the Lindstedt–Poincaré method. This new solution is valid until t = O(ε⁻²). Higher-order solutions – using the method of multiple scales – require the introduction of additional slow scales, i.e., t2 = ε²t, t3 = ε³t, etc. 
However, this introduces possible ambiguities in the perturbation series solution, which require a careful treatment (see ; ). Coordinate transform to amplitude/phase variables Alternatively, modern approaches derive these sorts of models using coordinate transforms, like in the method of normal forms, as described next. A solution is sought in new coordinates (r, θ), where the amplitude r(t) varies slowly and the phase θ(t) varies at an almost constant rate, namely y ≈ r cos θ. Straightforward algebra finds a coordinate transform that transforms Duffing's equation into the pair dr/dt = 0, so that the radius r is constant, and the phase evolves according to dθ/dt = 1 + (3/8) ε r², to leading order in ε. That is, Duffing's oscillations are of constant amplitude but have different frequencies depending upon the amplitude. More difficult examples are better treated using a time-dependent coordinate transform involving complex exponentials (as also invoked in the previous multiple time-scale approach). A web service will perform the analysis for a wide range of examples. See also Method of matched asymptotic expansions WKB approximation Method of averaging Krylov–Bogoliubov averaging method Notes References External links Mathematical physics Asymptotic analysis Perturbation theory
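A quick numerical check of the leading-order multiple-scales result above, y(t) ≈ cos((1 + 3ε/8)t) for y(0) = 1 and dy/dt(0) = 0, can be made by integrating the Duffing equation directly. The following minimal Python sketch uses SciPy; the value of ε, the integration span and the tolerances are arbitrary choices for illustration, not values from the article:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Undamped, unforced Duffing oscillator: y'' + y + eps*y^3 = 0, y(0)=1, y'(0)=0.
eps = 0.05  # arbitrary small nonlinearity parameter for this illustration

def duffing(t, state):
    y, v = state
    return [v, -y - eps * y**3]

t_end = 200.0                       # many oscillation periods, well past t ~ 1/eps
t_eval = np.linspace(0.0, t_end, 4000)
sol = solve_ivp(duffing, (0.0, t_end), [1.0, 0.0], t_eval=t_eval, rtol=1e-9, atol=1e-12)

# Leading-order multiple-scales approximation: frequency shifted from 1 to 1 + 3*eps/8.
y_ms = np.cos((1.0 + 3.0 * eps / 8.0) * t_eval)
print("max |numeric - multiple scales| =", np.max(np.abs(sol.y[0] - y_ms)))  # stays roughly O(eps)
```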
Multiple-scale analysis
[ "Physics", "Mathematics" ]
968
[ "Mathematical analysis", "Applied mathematics", "Theoretical physics", "Quantum mechanics", "Asymptotic analysis", "Mathematical physics", "Perturbation theory" ]
810,183
https://en.wikipedia.org/wiki/Access%20network
An access network is a type of telecommunications network which connects subscribers to their immediate service provider. It is contrasted with the core network, which connects local providers to one another. The access network may be further divided between feeder plant or distribution network, and drop plant or edge network. Telephone heritage An access network, also referred to as an outside plant, refers to the series of wires, cables and equipment lying between a consumer/business telephone termination point (the point at which a telephone connection reaches the customer) and the local telephone exchange. The local exchange contains banks of automated switching equipment which direct a call or connection to the consumer. The access network is perhaps one of the oldest assets a telecoms operator would own. In 2007–2008 many telecommunication operators experienced increasing problems maintaining the quality of the records which describe the network. In 2006, according to an independent Yankee Group report, globally operators experience profit leakage in excess of $17 billion each year. The access network is also perhaps the most valuable asset an operator owns since this is what physically allows them to offer a service. Access networks consist largely of pairs of copper wires, each traveling in a direct path between the exchange and the customer. In some instances, these wires may even consist of aluminum, which was commonly used in the 1960s and 1970s following a massive increase in the cost of copper. The price increase was temporary, but the effects of this decision are still felt today as electromigration within the aluminum wires can cause an increase in on-state resistance. This resistance causes degradation which can eventually lead to the complete failure of the wire to transport data. Access is essential to the future profitability of operators who are experiencing massive reductions in revenue from plain old telephone services, due in part to the opening of historically nationalized companies to competition, and in part to increased use of mobile phones and voice over IP (VoIP) services. Operators offered additional services such as xDSL based broadband and IPTV (Internet Protocol television) to guarantee profit. The access network is again the main barrier to achieving these profits since operators worldwide have accurate records of only 40% to 60% of the network. Without understanding or even knowing the characteristics of these enormous copper spider webs, it is very difficult, and expensive to 'provision' (connect) new customers and assure the data rates required to receive next-generation services. Access networks around the world evolved to include more and more optical fiber technology. Optical fiber already makes up the majority of core networks and will start to creep closer and closer to the customer, until a full transition is achieved, delivering value-added services over fiber to the home (FTTH). Access process The process of communicating with a network begins with an access attempt, in which one or more users interact with a communications system to enable initiation of user information transfer. An access attempt itself begins with issuance of an access request by an access originator. 
An access attempt ends either in successful access or in access failure - an unsuccessful access that results in termination of the attempt in any manner other than initiation of user information transfer between the intended source and destination (sink) within the specified maximum access time. Access time is the time delay or latency between a requested access attempt and successful access being completed. In a telecommunications system, access time values are measured only on access attempts that result in successful access. Access failure can be the result of access outage, user blocking, incorrect access, or access denial. Access denial (system blocking) can include: Access failure caused by the issuing of a system blocking signal by a communications system that does not have a camp-on busy signal feature. Access failure caused by exceeding the maximum access time and nominal system access time fraction during an access attempt. Charging for access An access charge is a charge made by a local exchange carrier for use of its local exchange facilities for a purpose such as the origination or termination of network traffic that is carried to or from a distant exchange by an interexchange carrier. Although some access charges are billed directly to interexchange carriers, a significant percentage of all access charges are paid by the local end users. Mobile access networks GERAN UTRAN E-UTRAN CDMA2000 GSM UMTS 1xEVDO voLTE Wi-Fi in* WiMAX Optical distribution network A passive optical distribution network (PON) uses single-mode optical fiber in the outside plant, optical splitters and optical distribution frames, duplexed so that both upstream and downstream signals share the same fiber on separate wavelengths. Faster PON standards generally support a higher split ratio of users per PON, but may also use reach extenders/amplifiers where extra coverage is needed. Optical splitters creating a point to multipoint topology are also the same technology regardless of the type of PON system, making any PON network upgradable by changing the optical network terminals (ONT) and optical line terminal (OLT) terminals at each end, with minimal change to the physical network. Access networks usually also must support point-to-point technologies such as Ethernet, which bypasses any outside plant splitter to achieve a dedicated link to the telephone exchange. Some PON networks use a "home run" topology where roadside cabinets only contain patch panels so that all splitters are located centrally. While a 20% higher capital cost could be expected, home run networks may encourage a more competitive wholesale market since providers' equipment can achieve higher use. See also Edge device Hierarchical internetworking model Internet access IP connectivity access network Local loop Passive Optical Network References External links Interactive presentation introducing the technology and design of access networks Telecommunications infrastructure Network access Fiber to the premises
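The split-ratio trade-off mentioned in the optical distribution network section above can be made concrete with a simple downstream power-budget estimate: an ideal 1:N splitter costs at least 10·log10(N) dB of optical power. The Python sketch below is illustrative only; the launch power, receiver sensitivity, fibre attenuation and excess-loss figures are assumptions, not values from the article:

```python
import math

# Rough downstream power budget for a passive optical network (PON).
# All numeric values below are illustrative assumptions, not figures from the article.
launch_power_dbm = 3.0        # OLT transmit power
receiver_sens_dbm = -28.0     # ONT receiver sensitivity
fibre_loss_db_per_km = 0.35   # single-mode fibre attenuation
excess_loss_db = 3.0          # connectors, splices, splitter excess loss

def splitter_loss_db(split_ratio):
    """Ideal 1:N power-splitting loss in dB."""
    return 10.0 * math.log10(split_ratio)

def max_reach_km(split_ratio):
    """Distance at which the remaining budget is used up by fibre attenuation."""
    budget = launch_power_dbm - receiver_sens_dbm - excess_loss_db - splitter_loss_db(split_ratio)
    return max(budget, 0.0) / fibre_loss_db_per_km

for n in (16, 32, 64, 128):
    print(f"1:{n:<3} split -> {splitter_loss_db(n):5.1f} dB splitter loss, reach ~ {max_reach_km(n):4.1f} km")
```

Higher split ratios serve more subscribers per OLT port but eat into the loss budget, which is why faster PON standards that support larger splits may need reach extenders or amplifiers, as noted above.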
Access network
[ "Engineering" ]
1,139
[ "Electronic engineering", "Network access" ]
811,030
https://en.wikipedia.org/wiki/Schwinger%E2%80%93Dyson%20equation
The Schwinger–Dyson equations (SDEs) or Dyson–Schwinger equations, named after Julian Schwinger and Freeman Dyson, are general relations between correlation functions in quantum field theories (QFTs). They are also referred to as the Euler–Lagrange equations of quantum field theories, since they are the equations of motion corresponding to the Green's function. They form a set of infinitely many functional differential equations, all coupled to each other, sometimes referred to as the infinite tower of SDEs. In his paper "The S-Matrix in Quantum electrodynamics", Dyson derived relations between different S-matrix elements, or more specific "one-particle Green's functions", in quantum electrodynamics, by summing up infinitely many Feynman diagrams, thus working in a perturbative approach. Starting from his variational principle, Schwinger derived a set of equations for Green's functions non-perturbatively, which generalize Dyson's equations to the Schwinger–Dyson equations for the Green functions of quantum field theories. Today they provide a non-perturbative approach to quantum field theories and applications can be found in many fields of theoretical physics, such as solid-state physics and elementary particle physics. Schwinger also derived an equation for the two-particle irreducible Green functions, which is nowadays referred to as the inhomogeneous Bethe–Salpeter equation. Derivation Given a polynomially bounded functional over the field configurations, then, for any state vector (which is a solution of the QFT), , we have where is the action functional and is the time ordering operation. Equivalently, in the density state formulation, for any (valid) density state , we have This infinite set of equations can be used to solve for the correlation functions nonperturbatively. To make the connection to diagrammatic techniques (like Feynman diagrams) clearer, it is often convenient to split the action as where the first term is the quadratic part and is an invertible symmetric (antisymmetric for fermions) covariant tensor of rank two in the deWitt notation whose inverse, is called the bare propagator and is the "interaction action". Then, we can rewrite the SD equations as If is a functional of , then for an operator , is defined to be the operator which substitutes for . For example, if and is a functional of , then If we have an "analytic" (a function that is locally given by a convergent power series) functional (called the generating functional) of (called the source field) satisfying then, from the properties of the functional integrals the Schwinger–Dyson equation for the generating functional is If we expand this equation as a Taylor series about , we get the entire set of Schwinger–Dyson equations. An example: φ4 To give an example, suppose for a real field φ. Then, The Schwinger–Dyson equation for this particular example is: Note that since is not well-defined because is a distribution in , and , this equation needs to be regularized. In this example, the bare propagator D is the Green's function for and so, the Schwinger–Dyson set of equations goes as and etc. (Unless there is spontaneous symmetry breaking, the odd correlation functions vanish.) See also Functional renormalization group Dyson equation Path integral formulation Source field References Further reading There are not many books that treat the Schwinger–Dyson equations. 
Here are three standard references: There are also some review articles about applications of the Schwinger–Dyson equations to particular fields of physics. For applications to Quantum Chromodynamics there are Quantum field theory Differential equations Freeman Dyson
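To make the φ⁴ example above a little more concrete, the lowest member of the tower relates the two-point function to the four-point function. The following LaTeX sketch states it in one common sign convention (mostly-minus metric, S = ∫d⁴x [½(∂φ)² − ½m²φ² − (λ/4!)φ⁴]); the conventions and normalization are assumptions, not necessarily those used in the article:

```latex
% Lowest Schwinger--Dyson equation of phi^4 theory, in one common convention.
% Signs and factors of i depend on the metric and normalization choices assumed here.
\[
  \left\langle T\,\frac{\delta S}{\delta\varphi(x)}\,\varphi(y)\right\rangle
  = i\,\delta^{4}(x-y)
  \;\;\Longrightarrow\;\;
  \left(\Box_x + m^{2}\right)\langle T\,\varphi(x)\varphi(y)\rangle
  + \frac{\lambda}{3!}\,\langle T\,\varphi(x)^{3}\varphi(y)\rangle
  = -\,i\,\delta^{4}(x-y).
\]
```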
Schwinger–Dyson equation
[ "Physics", "Mathematics" ]
802
[ "Quantum field theory", "Mathematical objects", "Differential equations", "Quantum mechanics", "Equations" ]
811,890
https://en.wikipedia.org/wiki/Fourteen-segment%20display
A fourteen-segment display (FSD) (sometimes referred to as a starburst display or Union Jack display) is a type of display based on 14 segments that can be turned on or off to produce letters and numerals. It is an expansion of the more common seven-segment display, having an additional four diagonal and two vertical segments with the middle horizontal segment broken in half. A seven-segment display suffices for numerals and certain letters, but unambiguously rendering the ISO basic Latin alphabet requires more detail. A slight variation is the sixteen-segment display which allows additional legibility in displaying letters or other symbols. A decimal point or comma may be present as an additional segment, or pair of segments; the comma (used for triple-digit groupings or as a decimal separator in many regions) is commonly formed by combining the decimal point with a closely 'attached' leftwards-descending arc-shaped segment. Electronic alphanumeric displays may use LEDs, LCDs, or vacuum fluorescent display devices. The LED variant is typically manufactured in single or dual character packages, allowing the system designer to choose the number of characters suiting the application. Often a character generator is used to translate 7-bit ASCII character codes to the 14 bits that indicate which of the 14 segments to turn on or off. Character encoding By lighting different elements, different characters can be displayed. In a 14-segment display, there is also an optional 15th segment which is a decimal point (denoted as "DP"). Decimal Latin alphabet A 14-segment display is mostly used to display text because the 14 elements allow all Latin letters to be displayed both in upper case and lower case (with a few exceptions like "s"). Applications Multiple-segment display devices use fewer elements than a full dot-matrix display, and may produce a better character appearance where the segments are shaped appropriately. This can reduce power consumption and the number of driver components. Fourteen-segment gas-plasma displays were used in pinball machines from 1986 through 1991 with an additional comma and period part making for a total of 16 segments. Fourteen and sixteen-segment displays were used to produce alphanumeric characters on calculators and other embedded systems. Applications today include displays fitted to telephone Caller ID units, gymnasium equipment, VCRs, car stereos, microwave ovens, slot machines, and DVD players. Such displays were very common on pinball machines for displaying the score and other information, before the widespread use of dot-matrix display panels. Incandescent lamp Multiple segment alphanumeric displays are nearly as old as the use of electricity. A 1908 textbook describes an alphanumeric display system using incandescent lamps and a mechanical switching arrangement. Each of 21 lamps was connected to a switch operated by a set of slotted bars, installed in a rotating drum. This commutator assembly could be arranged so that as the drum was rotated, different sets of switches were closed and different letters and figures could be displayed. The scheme would have been used for "talking" signs to spell out messages, but a complete set of commutator switches, drums and lamps would have been required for each letter of a message, making the resulting sign quite expensive. Cold-cathode neon A few different versions of the fourteen segment display exist as cold-cathode neon lamps. For example, one type made by Burroughs Corporation was called "Panaplex". 
Instead of using a filament as the incandescent versions do, these use a cathode charged to a 180 V potential which causes the electrified segment to glow a bright orange color. They operated similarly to Nixie tubes but instead of the full-formed numeric shapes, used segments to make up numerals and letters. Examples See also Seven-segment display Eight-segment display Nine-segment display Sixteen-segment display Dot matrix display Nixie tube display Vacuum fluorescent display References External links Display technology
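The character-generator idea described above, mapping an ASCII character to 14 bits that say which segments to light, can be sketched in a few lines of Python. The segment naming (a–f outer, g1/g2 middle halves, h–m inner) and the bit assignments below are illustrative assumptions, not an industry standard taken from the article:

```python
# Toy 14-segment character generator: each character maps to a 14-bit mask, one bit per segment.
# Segment labels and bit positions are arbitrary choices for illustration.
SEGMENTS = ["a", "b", "c", "d", "e", "f", "g1", "g2", "h", "i", "j", "k", "l", "m"]
BIT = {name: 1 << i for i, name in enumerate(SEGMENTS)}

def mask(*segments):
    """Combine named segments into one 14-bit pattern."""
    value = 0
    for s in segments:
        value |= BIT[s]
    return value

# A tiny, illustrative font table (segment choices per glyph are approximate).
FONT = {
    "H": mask("b", "c", "e", "f", "g1", "g2"),   # two verticals plus the split middle bar
    "T": mask("a", "j", "m"),                    # top bar plus the centre verticals
    "X": mask("h", "i", "k", "l"),               # the four diagonals
}

def encode(text):
    """Translate a string into 14-bit patterns, blank for unknown characters."""
    return [FONT.get(ch.upper(), 0) for ch in text]

print([f"{m:014b}" for m in encode("HT")])
```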
Fourteen-segment display
[ "Engineering" ]
809
[ "Electronic engineering", "Display technology" ]
812,290
https://en.wikipedia.org/wiki/DIIS
DIIS (direct inversion in the iterative subspace or direct inversion of the iterative subspace), also known as Pulay mixing, is a technique for extrapolating the solution to a set of linear equations by directly minimizing an error residual (e.g. a Newton–Raphson step size) with respect to a linear combination of known sample vectors. DIIS was developed by Peter Pulay in the field of computational quantum chemistry with the intent to accelerate and stabilize the convergence of the Hartree–Fock self-consistent field method. At a given iteration, the approach constructs a linear combination of approximate error vectors from previous iterations. The coefficients of the linear combination are determined so as to best approximate, in a least squares sense, the null vector. The newly determined coefficients are then used to extrapolate the function variable for the next iteration. Details At each iteration, an approximate error vector, ei, corresponding to the variable value xi, is determined. After sufficient iterations, a linear combination of previous error vectors is constructed: e = Σi ci ei. The DIIS method seeks to minimize the norm of e under the constraint that the coefficients ci sum to one. The reason why the coefficients must sum to one can be seen if we write the trial vector as the sum of the exact solution (xf) and an error vector. In the DIIS approximation, we get: x = Σi ci xi = Σi ci (xf + ei) = xf (Σi ci) + Σi ci ei. We minimize the second term, while it is clear that the sum of the coefficients must be equal to one if we want to find the exact solution. The minimization is done by a Lagrange multiplier technique. Introducing an undetermined multiplier λ, a Lagrangian is constructed as L = ‖e‖² − λ (Σi ci − 1). Equating to zero the derivatives of L with respect to the coefficients and the multiplier leads to a system of linear equations to be solved for the coefficients (and the Lagrange multiplier). Moving the minus sign to λ results in an equivalent symmetric problem. The coefficients are then used to update the variable as x = Σi ci xi. References Literature See also GMRES External links The Mathematics of DIIS Quantum chemistry Computational chemistry Numerical linear algebra
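A bare-bones version of the constrained least-squares step described above, building the matrix of error-vector overlaps, solving for coefficients that sum to one via a Lagrange multiplier, and then extrapolating, might look like the following Python sketch (the array shapes, the symmetric augmented-system form and the toy vectors are illustrative choices, not Pulay's original code or any package's API):

```python
import numpy as np

def diis_extrapolate(trial_vectors, error_vectors):
    """
    One DIIS / Pulay-mixing step: find coefficients c_i summing to 1 that minimize
    || sum_i c_i e_i ||^2, then return the extrapolated trial vector sum_i c_i x_i.
    Generic sketch of the standard scheme, not a specific library's implementation.
    """
    e = np.asarray(error_vectors)   # shape (m, n): m previous error vectors
    x = np.asarray(trial_vectors)   # shape (m, n): m previous trial vectors
    m = e.shape[0]

    # Overlap (Gram) matrix of the error vectors: B_ij = <e_i, e_j>.
    B = e @ e.T

    # Augmented symmetric system enforcing sum_i c_i = 1 via a Lagrange multiplier:
    # [ B   -1 ] [c]   [ 0]
    # [-1^T  0 ] [l] = [-1]
    A = np.zeros((m + 1, m + 1))
    A[:m, :m] = B
    A[:m, m] = A[m, :m] = -1.0
    rhs = np.zeros(m + 1)
    rhs[m] = -1.0

    c = np.linalg.solve(A, rhs)[:m]
    return c @ x                    # extrapolated variable for the next iteration

# Tiny usage example with made-up trial and error vectors:
xs = [np.array([1.0, 0.0]), np.array([0.8, 0.3])]
es = [np.array([0.2, -0.1]), np.array([-0.05, 0.12])]
print(diis_extrapolate(xs, es))
```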
DIIS
[ "Physics", "Chemistry" ]
419
[ "Quantum chemistry", "Quantum mechanics", "Theoretical chemistry", "Computational chemistry", " molecular", "Atomic", " and optical physics" ]
813,041
https://en.wikipedia.org/wiki/International%20HapMap%20Project
The International HapMap Project was an organization that aimed to develop a haplotype map (HapMap) of the human genome, to describe the common patterns of human genetic variation. HapMap is used to find genetic variants affecting health, disease and responses to drugs and environmental factors. The information produced by the project is made freely available for research. The International HapMap Project is a collaboration among researchers at academic centers, non-profit biomedical research groups and private companies in Canada, China (including Hong Kong), Japan, Nigeria, the United Kingdom, and the United States. It officially started with a meeting on October 27 to 29, 2002, and was expected to take about three years. It comprises three phases; the complete data obtained in Phase I were published on 27 October 2005. The analysis of the Phase II dataset was published in October 2007. The Phase III dataset was released in spring 2009 and the publication presenting the final results published in September 2010. Background Unlike with the rarer Mendelian diseases, combinations of different genes and the environment play a role in the development and progression of common diseases (such as diabetes, cancer, heart disease, stroke, depression, and asthma), or in the individual response to pharmacological agents. To find the genetic factors involved in these diseases, one could in principle do a genome-wide association study: obtain the complete genetic sequence of several individuals, some with the disease and some without, and then search for differences between the two sets of genomes. At the time, this approach was not feasible because of the cost of full genome sequencing. The HapMap project proposed a shortcut. Although any two unrelated people share about 99.5% of their DNA sequence, their genomes differ at specific nucleotide locations. Such sites are known as single nucleotide polymorphisms (SNPs), and each of the possible resulting gene forms is called an allele. The HapMap project focuses only on common SNPs, those where each allele occurs in at least 1% of the population. Each person has two copies of all chromosomes, except the sex chromosomes in males. For each SNP, the combination of alleles a person has is called a genotype. Genotyping refers to uncovering what genotype a person has at a particular site. The HapMap project chose a sample of 269 individuals and selected several million well-defined SNPs, genotyped the individuals for these SNPs, and published the results. The alleles of nearby SNPs on a single chromosome are correlated. Specifically, if the allele of one SNP for a given individual is known, the alleles of nearby SNPs can often be predicted, a process known as genotype imputation. This is because each SNP arose in evolutionary history as a single point mutation, and was then passed down on the chromosome surrounded by other, earlier, point mutations. SNPs that are separated by a large distance on the chromosome are typically not very well correlated, because recombination occurs in each generation and mixes the allele sequences of the two chromosomes. A sequence of consecutive alleles on a particular chromosome is known as a haplotype. To find the genetic factors involved in a particular disease, one can proceed as follows. First a certain region of interest in the genome is identified, possibly from earlier inheritance studies. 
In this region one locates a set of tag SNPs from the HapMap data; these are SNPs that are very well correlated with all the other SNPs in the region. Using these, genotype imputation can be used to determine (impute) the other SNPs and thus the entire haplotype with high confidence. Next, one determines the genotype for these tag SNPs in several individuals, some with the disease and some without. By comparing the two groups, one determines the likely locations and haplotypes that are involved in the disease. Samples used Haplotypes are generally shared between populations, but their frequency can differ widely. Four populations were selected for inclusion in the HapMap: 30 adult-and-both-parents Yoruba trios from Ibadan, Nigeria (YRI), 30 trios of Utah residents of northern and western European ancestry (CEU), 44 unrelated Japanese individuals from Tokyo, Japan (JPT) and 45 unrelated Han Chinese individuals from Beijing, China (CHB). Although the haplotypes revealed from these populations should be useful for studying many other populations, parallel studies are currently examining the usefulness of including additional populations in the project. All samples were collected through a community engagement process with appropriate informed consent. The community engagement process was designed to identify and attempt to respond to culturally specific concerns and give participating communities input into the informed consent and sample collection processes. In phase III, 11 global ancestry groups have been assembled: ASW (African ancestry in Southwest USA); CEU (Utah residents with Northern and Western European ancestry from the CEPH collection); CHB (Han Chinese in Beijing, China); CHD (Chinese in Metropolitan Denver, Colorado); GIH (Gujarati Indians in Houston, Texas); JPT (Japanese in Tokyo, Japan); LWK (Luhya in Webuye, Kenya); MEX (Mexican ancestry in Los Angeles, California); MKK (Maasai in Kinyawa, Kenya); TSI (Tuscans in Italy); YRI (Yoruba in Ibadan, Nigeria). Three combined panels have also been created, which allow better identification of SNPs in groups outside the nine homogenous samples: CEU+TSI (Combined panel of Utah residents with Northern and Western European ancestry from the CEPH collection and Tuscans in Italy); JPT+CHB (Combined panel of Japanese in Tokyo, Japan and Han Chinese in Beijing, China) and JPT+CHB+CHD (Combined panel of Japanese in Tokyo, Japan, Han Chinese in Beijing, China and Chinese in Metropolitan Denver, Colorado). CEU+TSI, for instance, is a better model of UK British individuals than is CEU alone. Scientific strategy It was expensive in the 1990s to sequence patients’ whole genomes. So the National Institutes of Health embraced the idea for a "shortcut", which was to look just at sites on the genome where many people have a variant DNA unit. The theory behind the shortcut was that, since the major diseases are common, so too would be the genetic variants that caused them. Natural selection keeps the human genome free of variants that damage health before children are grown, the theory held, but fails against variants that strike later in life, allowing them to become quite common (In 2002 the National Institutes of Health started a $138 million project called the HapMap to catalog the common variants in European, East Asian and African genomes). For the Phase I, one common SNP was genotyped every 5,000 bases. Overall, more than one million SNPs were genotyped. The genotyping was carried out by 10 centres using five different genotyping technologies. 
Genotyping quality was assessed by using duplicate or related samples and by having periodic quality checks where centres had to genotype common sets of SNPs. The Canadian team was led by Thomas J. Hudson at McGill University in Montreal and focused on chromosomes 2 and 4p. The Chinese team was led by Huanming Yang in Beijing and Shanghai, and Lap-Chee Tsui in Hong Kong and focused on chromosomes 3, 8p and 21. The Japanese team was led by Yusuke Nakamura at the University of Tokyo and focused on chromosomes 5, 11, 14, 15, 16, 17 and 19. The British team was led by David R. Bentley at the Sanger Institute and focused on chromosomes 1, 6, 10, 13 and 20. There were four United States' genotyping centres: a team led by Mark Chee and Arnold Oliphant at Illumina Inc. in San Diego (studying chromosomes 8q, 9, 18q, 22 and X), a team led by David Altshuler and Mark Daly at the Broad Institute in Cambridge, USA (chromosomes 4q, 7q, 18p, Y and mitochondrion), a team led by Richard Gibbs at the Baylor College of Medicine in Houston (chromosome 12), and a team led by Pui-Yan Kwok at the University of California, San Francisco (chromosome 7p). To obtain enough SNPs to create the Map, the Consortium funded a large re-sequencing project to discover millions of additional SNPs. These were submitted to the public dbSNP database. As a result, by August 2006, the database included more than ten million SNPs, and more than 40% of them were known to be polymorphic. By comparison, at the start of the project, fewer than 3 million SNPs were identified, and no more than 10% of them were known to be polymorphic. During Phase II, more than two million additional SNPs were genotyped throughout the genome by David R. Cox, Kelly A. Frazer and others at Perlegen Sciences and 500,000 by the company Affymetrix. Data access All of the data generated by the project, including SNP frequencies, genotypes and haplotypes, were placed in the public domain and are available for download. The project website also contains a genome browser which allows users to find SNPs in any region of interest, their allele frequencies and their association to nearby SNPs. A tool that can determine tag SNPs for a given region of interest is also provided. These data can also be directly accessed from the widely used Haploview program. Publications Secko, David (2005). "Phase I of the HapMap Complete". The Scientist See also Genealogical DNA test The 1000 Genomes Project Population groups in biomedicine Human Variome Project Human genetic variation References External links International HapMap Project (HapMap Homepage) National Human Genome Research Institute (NHGRI) HapMap Page Browsing HapMap Data Using the Genome Browser The Mexican Genome Diversity Project Human genome projects Genetic genealogy projects Genealogy websites Biological databases Open science Single-nucleotide polymorphisms Genome projects
International HapMap Project
[ "Chemistry", "Biology" ]
2,140
[ "Single-nucleotide polymorphisms", "Bioinformatics", "Biodiversity", "Molecular biology", "Genome projects", "Human genome projects", "Biological databases" ]
813,086
https://en.wikipedia.org/wiki/Rotational%20frequency
Rotational frequency, also known as rotational speed or rate of rotation (symbols ν, lowercase Greek nu, and also n), is the frequency of rotation of an object around an axis. Its SI unit is the reciprocal second (s−1); other common units of measurement include the hertz (Hz), cycles per second (cps), and revolutions per minute (rpm). Rotational frequency can be obtained by dividing angular frequency, ω, by a full turn (2π radians): ν = ω/(2π rad). It can also be formulated as the instantaneous rate of change of the number of rotations, N, with respect to time, t: n = dN/dt (as per International System of Quantities). Similar to ordinary period, the reciprocal of rotational frequency is the rotation period or period of rotation, T = ν−1 = n−1, with dimension of time (SI unit seconds). Rotational velocity is the vector quantity whose magnitude equals the scalar rotational speed. In the special cases of spin (around an axis internal to the body) and revolution (external axis), the rotation speed may be called spin speed and revolution speed, respectively. Rotational acceleration is the rate of change of rotational velocity; it has dimension of squared reciprocal time and SI units of squared reciprocal seconds (s−2); thus, it is a normalized version of angular acceleration and it is analogous to chirpyness. Related quantities Tangential speed (Latin letter v), rotational frequency ν, and radial distance r, are related by the following equation: v = 2πrν. An algebraic rearrangement of this equation allows us to solve for rotational frequency: ν = v/(2πr). Thus, the tangential speed will be directly proportional to r when all parts of a system simultaneously have the same ν, as for a wheel, disk, or rigid wand. The direct proportionality of v to r is not valid for the planets, because the planets have different rotational frequencies. Regression analysis Rotational frequency can measure, for example, how fast a motor is running. Rotational speed is sometimes used to mean angular frequency rather than the quantity defined in this article. Angular frequency gives the change in angle per time unit, which is given with the unit radian per second in the SI system. Since 2π radians or 360 degrees correspond to a cycle, we can convert angular frequency to rotational frequency by ν = ω/(2π), where ν is rotational frequency, with unit cycles per second, and ω is angular frequency, with unit radian per second or degree per second. For example, a stepper motor might turn exactly one complete revolution each second. Its angular frequency is 360 degrees per second (360°/s), or 2π radians per second (2π rad/s), while the rotational frequency is 60 rpm. Rotational frequency is not to be confused with tangential speed, despite some relation between the two concepts. Imagine a merry-go-round with a constant rate of rotation. No matter how close to or far from the axis of rotation you stand, your rotational frequency will remain constant. However, your tangential speed does not remain constant. If you stand two meters from the axis of rotation, your tangential speed will be double what it would be if you were standing only one meter from the axis of rotation. See also Angular velocity Radial velocity Rotation period Rotational spectrum Tachometer Notes References Kinematic properties Temporal rates Rotation
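The conversions above can be checked numerically. The following is a small illustrative sketch (the function names are invented for this example) that reproduces the stepper-motor figure of 60 rpm and the merry-go-round observation that tangential speed doubles at twice the radius.

```python
import math

def rotational_frequency_from_angular(omega_rad_per_s: float) -> float:
    """nu = omega / (2*pi): revolutions per second from rad/s."""
    return omega_rad_per_s / (2.0 * math.pi)

def rpm_from_rotational_frequency(nu_hz: float) -> float:
    """Revolutions per minute from revolutions per second."""
    return nu_hz * 60.0

def tangential_speed(nu_hz: float, radius_m: float) -> float:
    """v = 2*pi*r*nu: linear speed of a point at radius r."""
    return 2.0 * math.pi * radius_m * nu_hz

if __name__ == "__main__":
    omega = 2.0 * math.pi              # stepper-motor example: 2*pi rad/s
    nu = rotational_frequency_from_angular(omega)
    print(nu, "rev/s =", rpm_from_rotational_frequency(nu), "rpm")
    print("speed at r = 1 m:", tangential_speed(nu, 1.0), "m/s")
    print("speed at r = 2 m:", tangential_speed(nu, 2.0), "m/s")
```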
Rotational frequency
[ "Physics", "Mathematics" ]
674
[ "Temporal quantities", "Physical phenomena", "Mechanical quantities", "Physical quantities", "Quantity", "Temporal rates", "Classical mechanics", "Rotation", "Motion (physics)", "Kinematic properties" ]
815,969
https://en.wikipedia.org/wiki/Buckling
In structural engineering, buckling is the sudden change in shape (deformation) of a structural component under load, such as the bowing of a column under compression or the wrinkling of a plate under shear. If a structure is subjected to a gradually increasing load, when the load reaches a critical level, a member may suddenly change shape and the structure and component is said to have buckled. Euler's critical load and Johnson's parabolic formula are used to determine the buckling stress of a column. Buckling may occur even though the stresses that develop in the structure are well below those needed to cause failure in the material of which the structure is composed. Further loading may cause significant and somewhat unpredictable deformations, possibly leading to complete loss of the member's load-carrying capacity. However, if the deformations that occur after buckling do not cause the complete collapse of that member, the member will continue to support the load that caused it to buckle. If the buckled member is part of a larger assemblage of components such as a building, any load applied to the buckled part of the structure beyond that which caused the member to buckle will be redistributed within the structure. Some aircraft are designed for thin skin panels to continue carrying load even in the buckled state. Forms of buckling Columns The ratio of the effective length of a column to the least radius of gyration of its cross section is called the slenderness ratio (sometimes expressed with the Greek letter lambda, λ). This ratio affords a means of classifying columns and their failure mode. The slenderness ratio is important for design considerations. All the following are approximate values used for convenience. If the load on a column is applied through the center of gravity (centroid) of its cross section, it is called an axial load. A load at any other point in the cross section is known as an eccentric load. A short column under the action of an axial load will fail by direct compression before it buckles, but a long column loaded in the same manner will fail by springing suddenly outward laterally (buckling) in a bending mode. The buckling mode of deflection is considered a failure mode, and it generally occurs before the axial compression stresses (direct compression) can cause failure of the material by yielding or fracture of that compression member. However, intermediate-length columns will fail by a combination of direct compressive stress and bending. In particular: A short steel column is one whose slenderness ratio does not exceed 50; an intermediate length steel column has a slenderness ratio ranging from about 50 to 200, and its behavior is dominated by the strength limit of the material, while a long steel column may be assumed to have a slenderness ratio greater than 200 and its behavior is dominated by the modulus of elasticity of the material. A short concrete column is one having a ratio of unsupported length to least dimension of the cross section equal to or less than 10. If the ratio is greater than 10, it is considered a long column (sometimes referred to as a slender column). Timber columns may be classified as short columns if the ratio of the length to least dimension of the cross section is equal to or less than 10. The dividing line between intermediate and long timber columns cannot be readily evaluated. 
One way of defining the lower limit of long timber columns would be to set it as the smallest value of the ratio of length to least cross sectional area that would just exceed a certain constant K of the material. Since K depends on the modulus of elasticity and the allowable compressive stress parallel to the grain, it can be seen that this arbitrary limit would vary with the species of the timber. The value of K is given in most structural handbooks. The theory of the behavior of columns was investigated in 1757 by mathematician Leonhard Euler. He derived the formula, termed Euler's critical load, that gives the maximum axial load that a long, slender, ideal column can carry without buckling. An ideal column is one that is: perfectly straight made of a homogeneous material free from initial stress. When the applied load reaches the Euler load, sometimes called the critical load, the column comes to be in a state of unstable equilibrium. At that load, the introduction of the slightest lateral force will cause the column to fail by suddenly "jumping" to a new configuration, and the column is said to have buckled. This is what happens when a person stands on an empty aluminum can and then taps the sides briefly, causing it to then become instantly crushed (the vertical sides of the can may be understood as an infinite series of extremely thin columns). The formula derived by Euler for long slender columns is F = π²EI/(KL)², where F, maximum or critical force (vertical load on column), E, modulus of elasticity, I, smallest area moment of inertia (second moment of area) of the cross section of the column, L, unsupported length of column, K, column effective length factor, whose value depends on the conditions of end support of the column, as follows. For both ends pinned (hinged, free to rotate), K = 1.0. For both ends fixed, K = 0.5. For one end fixed and the other end pinned, K ≈ 0.699. For one end fixed and the other end free to move laterally, K = 2.0. KL is the effective length of the column. Examination of this formula reveals the following facts with regard to the load-bearing ability of slender columns. The elasticity of the material of the column and not the compressive strength of the material of the column determines the column's buckling load. The buckling load is directly proportional to the second moment of area of the cross section. The boundary conditions have a considerable effect on the critical load of slender columns. The boundary conditions determine the mode of bending of the column and the distance between inflection points on the displacement curve of the deflected column. The inflection points in the deflection shape of the column are the points at which the curvature of the column changes sign and are also the points at which the column's internal bending moments are zero. The closer the inflection points are, the greater the resulting axial load capacity (buckling load) of the column. A conclusion from the above is that the buckling load of a column may be increased by changing its material to one with a higher modulus of elasticity (E), or changing the design of the column's cross section so as to increase its moment of inertia. The latter can be done without increasing the weight of the column by distributing the material as far from the principal axis of the column's cross section as possible. For most purposes, the most effective use of the material of a column is that of a tubular section. Another insight that may be gleaned from this equation is the effect of length on critical load.
Doubling the unsupported length of the column quarters the allowable load. The restraint offered by the end connections of a column also affects its critical load. If the connections are perfectly rigid (not allowing rotation of its ends), the critical load will be four times that for a similar column where the ends are pinned (allowing rotation of its ends). Since the radius of gyration is defined as the square root of the ratio of the column's moment of inertia about an axis to its cross sectional area, the above Euler formula may be reformatted by substituting the radius of gyration for : where is the stress that causes buckling in the column, and is the slenderness ratio. Since structural columns are commonly of intermediate length, the Euler formula has little practical application for ordinary design. Issues that cause deviation from the pure Euler column behaviour include imperfections in geometry of the column in combination with plasticity/non-linear stress strain behaviour of the column's material. Consequently, a number of empirical column formulae have been developed that agree with test data, all of which embody the slenderness ratio. Due to the uncertainty in the behavior of columns, for design, appropriate safety factors are introduced into these formulae. One such formula is the Perry Robertson formula which estimates the critical buckling load based on an assumed small initial curvature, hence an eccentricity of the axial load. The Rankine Gordon formula, named for William John Macquorn Rankine and Perry Hugesworth Gordon (1899 – 1966), is also based on experimental results and suggests that a column will buckle at a load Fmax given by: where is the Euler maximum load and is the maximum compressive load. This formula typically produces a conservative estimate of . Self-buckling A free-standing, vertical column, with density , Young's modulus , and cross-sectional area , will buckle under its own weight if its height exceeds a certain critical value: where is the acceleration due to gravity, is the second moment of area of the beam cross section, and is the first zero of the Bessel function of the first kind of order −1/3, which is equal to 1.86635086... Plate buckling A plate is a 3-dimensional structure defined as having a width of comparable size to its length, with a thickness that is very small in comparison to its other two dimensions. Similar to columns, thin plates experience out-of-plane buckling deformations when subjected to critical loads; however, contrasted to column buckling, plates under buckling loads can continue to carry loads, called local buckling. This phenomenon is incredibly useful in numerous systems, as it allows systems to be engineered to provide greater loading capacities. For a rectangular plate, supported along every edge, loaded with a uniform compressive force per unit length, the derived governing equation can be stated by: where , out-of-plane deflection , uniformly distributed compressive load , Poisson's ratio , modulus of elasticity , thickness The solution to the deflection can be expanded into two harmonic functions shown: where , number of half sine curvatures that occur lengthwise , number of half sine curvatures that occur widthwise , length of specimen , width of specimen The previous equation can be substituted into the earlier differential equation where equals 1. 
can be separated, providing the equation for the critical compressive loading of a plate: where the buckling coefficient , is given by: The buckling coefficient is influenced by the aspect ratio of the specimen, / , and the number of lengthwise curvatures. For an increasing number of such curvatures, the aspect ratio produces a varying buckling coefficient; but each relation provides a minimum value for each . This minimum value can then be used as a constant, independent from both the aspect ratio and . Given that stress is the load per unit area, the following expression is found for the critical stress: From the derived equations, the close similarities between the critical stress for a column and for a plate can be seen. As the width shrinks, the plate acts more like a column as it increases the resistance to buckling along the plate's width. The increase of allows for an increase of the number of sine waves produced by buckling along the length, but also increases the resistance from the buckling along the width. This creates the preference of the plate to buckle in such a way as to equal the number of curvatures along both the width and length. Due to boundary conditions, when a plate is loaded with a critical stress and buckles, the edges perpendicular to the load cannot deform out-of-plane and will therefore continue to carry the stresses. This creates a non-uniform compressive loading along the ends, where the stresses are imposed on half of the effective width on either side of the specimen, given by the following: where , effective width , yielding stress As the loaded stress increases, the effective width continues to shrink; if the stresses on the ends ever reach the yield stress, the plate will fail. This is what allows the buckled structure to continue supporting loadings. When the axial load over the critical load is plotted against the displacement, the fundamental path is shown. It demonstrates the plate's similarity to a column under buckling; however, past the buckling load, the fundamental path bifurcates into a secondary path that curves upward, providing the ability to be subjected to higher loads past the critical load. Flexural-torsional buckling Flexural-torsional buckling can be described as a combination of bending and twisting response of a member in compression. Such a deflection mode must be considered for design purposes. This mostly occurs in columns with "open" cross-sections, which hence have a low torsional stiffness, such as channels, structural tees, double-angle shapes, and equal-leg single angles. Circular cross sections do not experience such a mode of buckling. Lateral-torsional buckling When a simply supported beam is loaded in bending, the top side is in compression, and the bottom side is in tension. If the beam is not supported in the lateral direction (i.e., perpendicular to the plane of bending), and the flexural load increases to a critical limit, the beam will experience a lateral deflection of the compression flange as it buckles locally. The lateral deflection of the compression flange is restrained by the beam web and tension flange, but for an open section the twisting mode is more flexible, hence the beam both twists and deflects laterally in a failure mode known as lateral-torsional buckling. In wide-flange sections (with high lateral bending stiffness), the deflection mode will be mostly twisting in torsion.
In narrow-flange sections, the bending stiffness is lower and the column's deflection will be closer to that of the lateral buckling deflection mode. The use of closed sections such as square hollow section will mitigate the effects of lateral-torsional buckling by virtue of their high torsional stiffness. Cb is a modification factor used in the equation for nominal flexural strength when determining lateral-torsional buckling. The reason for this factor is to allow for non-uniform moment diagrams when the ends of a beam segment are braced. The conservative value for Cb can be taken as 1, regardless of beam configuration or loading, but in some cases it may be excessively conservative. Cb is always equal to or greater than 1, never less. For cantilevers or overhangs where the free end is unbraced, Cb is equal to 1. Tables of values of Cb for simply supported beams exist. If an appropriate value of Cb is not given in tables, it can be obtained via the following formula: Cb = 12.5Mmax / (2.5Mmax + 3MA + 4MB + 3MC), where Mmax, absolute value of maximum moment in the unbraced segment, MA, absolute value of maximum moment at quarter point of the unbraced segment, MB, absolute value of maximum moment at centerline of the unbraced segment, MC, absolute value of maximum moment at three-quarter point of the unbraced segment. The result is the same for all unit systems. Plastic buckling The buckling strength of a member is less than the elastic buckling strength of a structure if the material of the member is stressed beyond the elastic material range and into the non-linear (plastic) material behavior range. When the compression load is near the buckling load, the structure will bend significantly and the material of the column will diverge from a linear stress-strain behavior. The stress-strain behavior of materials is not strictly linear even below the yield point, hence the modulus of elasticity decreases as stress increases, and significantly so as the stresses approach the material's yield strength. This reduced material rigidity reduces the buckling strength of the structure and results in a buckling load less than that predicted by the assumption of linear elastic behavior. A more accurate approximation of the buckling load can be had by the use of the tangent modulus of elasticity, Et, which is less than the elastic modulus, in place of the elastic modulus. The tangent modulus is equal to the elastic modulus up to the proportional limit and then decreases beyond it. The tangent modulus is a line drawn tangent to the stress-strain curve at a particular value of strain (in the elastic section of the stress-strain curve, the tangent modulus is equal to the elastic modulus). Plots of the tangent modulus of elasticity for a variety of materials are available in standard references. Crippling Sections that are made up of flanged plates, such as a channel, can still carry load in the corners after the flanges have locally buckled. Crippling is failure of the complete section. Diagonal tension Because of the thin skins typically used in aerospace applications, skins may buckle at low load levels. However, once buckled, instead of being able to transmit shear forces, they are still able to carry load through diagonal tension (DT) stresses in the web. This results in non-linear load-carrying behaviour of these details. The ratio of the actual load to the load at which buckling occurs is known as the buckling ratio of a sheet.
High buckling ratios may lead to excessive wrinkling of the sheets which may then fail through yielding of the wrinkles. Although they may buckle, thin sheets are designed to not permanently deform and return to an unbuckled state when the applied loading is removed. Repeated buckling may lead to fatigue failures. Sheets under diagonal tension are supported by stiffeners that as a result of sheet buckling carry a distributed load along their length, and may in turn result in these structural members failing under buckling. Thicker plates may only partially form a diagonal tension field and may continue to carry some of the load through shear. This is known as incomplete diagonal tension (IDT). This behavior was studied by Wagner and these beams are sometimes known as Wagner beams. Diagonal tension may also result in a pulling force on any fasteners such as rivets that are used to fasten the web to the supporting members. Fasteners and sheets must be designed to resist being pulled off their supports. Dynamic buckling If a column is loaded suddenly and then the load released, the column can sustain a much higher load than its static (slowly applied) buckling load. This can happen in a long, unsupported column used as a drop hammer. The duration of compression at the impact end is the time required for a stress wave to travel along the column to the other (free) end and back down as a relief wave. Maximum buckling occurs near the impact end at a wavelength much shorter than the length of the rod, and at a stress many times the buckling stress of a statically loaded column. The critical condition for buckling amplitude to remain less than about 25 times the effective rod straightness imperfection at the buckle wavelength is where is the impact stress, is the length of the rod, is the elastic wave speed, and is the smaller lateral dimension of a rectangular rod. Because the buckle wavelength depends only on and , this same formula holds for thin cylindrical shells of thickness . Theory Energy method Often it is very difficult to determine the exact buckling load in complex structures using the Euler formula, due to the difficulty in determining the constant K. Therefore, maximum buckling load is often approximated using energy conservation and referred to as an energy method in structural analysis. The first step in this method is to assume a displacement mode and a function that represents that displacement. This function must satisfy the most important boundary conditions, such as displacement and rotation. The more accurate the displacement function, the more accurate the result. The method assumes that the system (the column) is a conservative system in which energy is not dissipated as heat, hence the energy added to the column by the applied external forces is stored in the column in the form of strain energy. In this method, there are two equations used (for small deformations) to approximate the "strain" energy (the potential energy stored as elastic deformation of the structure) and "applied" energy (the work done on the system by external forces). where is the displacement function and the subscripts and refer to the first and second derivatives of the displacement. Single-degree-of-freedom models Using the concept of total potential energy, , it is possible to identify four fundamental forms of buckling found in structural models with one degree of freedom. 
We start by expressing where is the strain energy stored in the structure, is the applied conservative load and is the distance moved by in its direction. Using the axioms of elastic instability theory, namely that equilibrium is any point where is stationary with respect to the coordinate measuring the degree(s) of freedom and that these points are only stable if is a local minimum and unstable if otherwise (e.g. maximum or a point of inflection). These four forms of elastic buckling are the saddle-node bifurcation or limit point; the supercritical or stable-symmetric bifurcation; the subcritical or unstable-symmetric bifurcation; and the transcritical or asymmetric bifurcation. All but the first of these examples is a form of pitchfork bifurcation. Simple models for each of these types of buckling behaviour are shown in the figures below, along with the associated bifurcation diagrams. Engineering examples Bicycle wheels A conventional bicycle wheel consists of a thin rim kept under high compressive stress by the (roughly normal) inward pull of a large number of spokes. It can be considered as a loaded column that has been bent into a circle. If spoke tension is increased beyond a safe level or if part of the rim is subject to a certain lateral force, the wheel spontaneously fails into a characteristic saddle shape (sometimes called a "taco" or a "pringle") like a three-dimensional Euler column. If this is a purely elastic deformation the rim will resume its proper plane shape if spoke tension is reduced or a lateral force from the opposite direction is applied. Roads Buckling is a failure mode in pavement materials, primarily with concrete, since asphalt is more flexible. Radiant heat from the sun is absorbed in the road surface, causing it to expand, forcing adjacent pieces to push against each other. If the stress is sufficient, the pavement can lift and crack without warning. Traversing a buckled section can be jarring to automobile drivers, described as running over a speed hump at highway speeds. Rail tracks Similarly, rail tracks also expand when heated, and can fail by buckling, a phenomenon called sun kink. It is more common for rails to move laterally, often pulling the underlying ties (sleepers) along. Sun kink can lead to railroads drastically reducing the speed of trains, leading to delays and cancellations. This is done to avoid derailment. Intensifying heat waves due to climate change doubled the number of hours of heat related delays in 2023, compared to 2018. These accidents were deemed to be sun kink-related (more information available at List of rail accidents (2000–2009)): April 18, 2002 Amtrak Auto-Train derailment, off CSX tracks, near Crescent City, Florida. July 29, 2002 Amtrak Capitol Limited derailment, off CSX tracks, near Kensington, Maryland. July 8, 2010 CSX train derailment in Waxhaw, North Carolina. July 6, 2012 WMATA Metrorail train derailment near West Hyattsville station, Maryland. The Federal Railroad Administration issued a Safety Advisory on July 11, 2012 alerting railroad operators to inspect tracks for "buckling-prone conditions." The Advisory included a brief summary of four derailments that had occurred between June 23 to July 4 that appeared to be "heat related incidents." Pipes and pressure vessels Pipes and pressure vessels subject to external overpressure, caused for example by steam cooling within the pipe and condensing into water with subsequent massive pressure drop, risk buckling due to compressive hoop stresses. 
Design rules for calculation of the required wall thickness or reinforcement rings are given in various piping and pressure vessel codes. Super- and hypersonic aerospace vehicles Aerothermal heating can lead to buckling of surface panels on super- and hypersonic aerospace vehicles such as high-speed aircraft, rockets and reentry vehicles. If buckling is caused by aerothermal loads, the situation can be further complicated by enhanced heat transfer in areas where the structure deforms towards the flow-field. See also References Further reading External links The complete theory and example experimental results for long columns are available as a 39-page PDF document at http://lindberglce.com/tech/buklbook.htm Elasticity (physics) Materials science Mechanical failure modes Structural analysis Mechanics
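As a numerical illustration of Euler's column formula discussed above, the sketch below computes the critical load of a hypothetical pinned-pinned steel column and shows how the end-condition factor K changes it. The section properties are invented for the example and are not taken from any design code.

```python
import math

def euler_critical_load(E: float, I: float, L: float, K: float = 1.0) -> float:
    """F_cr = pi^2 * E * I / (K * L)^2 for an ideal slender column."""
    return math.pi ** 2 * E * I / (K * L) ** 2

def critical_stress(E: float, slenderness_ratio: float) -> float:
    """Same law written with the slenderness ratio (effective length / radius
    of gyration): sigma_cr = pi^2 * E / (L_eff / r)^2."""
    return math.pi ** 2 * E / slenderness_ratio ** 2

if __name__ == "__main__":
    # Hypothetical steel column: E = 200 GPa, I = 1.0e-6 m^4, length 3 m.
    pinned = euler_critical_load(E=200e9, I=1.0e-6, L=3.0, K=1.0)
    fixed = euler_critical_load(E=200e9, I=1.0e-6, L=3.0, K=0.5)
    print(f"Both ends pinned (K=1.0): {pinned/1e3:.0f} kN")
    print(f"Both ends fixed  (K=0.5): {fixed/1e3:.0f} kN  (4x the pinned case)")
```

The printed comparison reproduces the statement in the text that fully rigid end connections quadruple the critical load of an otherwise identical pinned column.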
Buckling
[ "Physics", "Materials_science", "Technology", "Engineering" ]
5,064
[ "Structural engineering", "Physical phenomena", "Mechanical failure modes", "Applied and interdisciplinary physics", "Elasticity (physics)", "Deformation (mechanics)", "Aerospace engineering", "Structural analysis", "Technological failures", "Materials science", "Mechanics", "nan", "Mechanic...
817,680
https://en.wikipedia.org/wiki/Thixotropy
Thixotropy is a time-dependent shear thinning property. Certain gels or fluids that are thick or viscous under static conditions will flow (become thinner, less viscous) over time when shaken, agitated, shear-stressed, or otherwise stressed (time-dependent viscosity). They then take a fixed time to return to a more viscous state. Some non-Newtonian pseudoplastic fluids show a time-dependent change in viscosity; the longer the fluid undergoes shear stress, the lower its viscosity. A thixotropic fluid is a fluid which takes a finite time to attain equilibrium viscosity when introduced to a steep change in shear rate. Some thixotropic fluids return to a gel state almost instantly, such as ketchup, and are called pseudoplastic fluids. Others such as yogurt take much longer and can become nearly solid. Many gels and colloids are thixotropic materials, exhibiting a stable form at rest but becoming fluid when agitated. Thixotropy arises because particles or structured solutes require time to organize. Some fluids are anti-thixotropic: constant shear stress for a time causes an increase in viscosity or even solidification. Fluids which exhibit this property are sometimes called rheopectic. Anti-thixotropic fluids are less well documented than thixotropic fluids. History Many sources trace the earliest studies of thixotropy to the work of Bauer and Collins. Later, in 1923, other researchers began experimenting with thixotropy and reported it in gels consisting of aqueous Fe2O3 dispersions. These researchers, Mewis and Barnes, Schalek and Szegvari, and H. Freundlich, then learned that they could make the gel turn into a liquid simply by shaking the contents. As more was learned about the behaviour, it was found in numerous other products, often without the people making those products realizing it. Natural examples Some clays are thixotropic, influenced by thermochemical treatment, and their behaviour is of great importance in structural and geotechnical engineering. Landslides, such as those common in the cliffs around Lyme Regis, Dorset, and in the Aberfan spoil tip disaster in Wales, are evidence of this phenomenon. Similarly, a lahar is a mass of earth liquefied by a volcanic event, which rapidly solidifies once coming to rest. Drilling muds used in geotechnical applications can be thixotropic. Honey from honey bees may also exhibit this property under certain conditions (such as heather honey or mānuka honey). Both cytoplasm and the ground substance in the human body are thixotropic, as is semen. Some clay deposits found in the process of exploring caves exhibit thixotropism: an initially solid-seeming mudbank will turn soupy and yield up moisture when dug into or otherwise disturbed. These clays were deposited in the past by low-velocity streams which tend to deposit fine-grained sediment. A thixotropic fluid is best visualised by an oar blade embedded in mud. Pressure on the oar often results in a highly viscous (more solid) thixotropic mud on the high pressure side of the blade, and low viscosity (very fluid) thixotropic mud on the low pressure side of the oar blade. Flow from the high pressure side to the low pressure side of the oar blade is non-Newtonian (i.e., fluid velocity is not linearly proportional to the square root of the pressure differential over the oar blade). Applications Many kinds of paints and inks—e.g., plastisols used in silkscreen textile printing—exhibit thixotropic qualities.
In many cases it is desirable for the fluid to flow sufficiently to form a uniform layer, then to resist further flow, thereby preventing sagging on a vertical surface. Some other inks, such as those used in CMYK-type process printing, are designed to regain viscosity even faster, once they are applied, in order to protect the structure of the dots for accurate color reproduction. There are several approaches to modelling thixotropy; the most popular is to use a two-phase mixture model, which allows the mixture to be described without additional equations being introduced as the thixotropic process acts on the different materials. Thixotropic ink (along with a gas pressurized cartridge and special shearing ball design) is a key feature of the Fisher Space Pen, used for writing during zero gravity space flights by the US and Russian space programs. Solder pastes used in electronics manufacturing printing processes are thixotropic. Thread-locking fluid is a thixotropic adhesive that cures anaerobically. Thixotropy has been proposed as a scientific explanation of blood liquefaction miracles such as that of Saint Januarius in Naples. Semi-solid casting processes such as thixomoulding use the thixotropic property of some alloys (mostly light metals like magnesium). Within certain temperature ranges and with appropriate preparation, an alloy can be put into a semi-solid state, which can be injected with less shrinkage and better overall properties than by normal injection molding. Fumed silica is commonly used as a rheology agent to make otherwise low-viscosity fluids thixotropic. Examples range from foods to epoxy resin in structural bonding applications like fillet joints. Common use Thixotropy has been shown to be useful in many ways concerning cement paste. The thixotropy allows the cement structure to be broken down so that the user can put down the paste slowly and in a controlled manner before it sets and dries. Thixotropy is also used in drilling fluids because of their rheological makeup; this, however, is connected to drilling hydraulics and to how thixotropy affects the hydraulic process. Negative effects While thixotropy has been seen to be beneficial in areas pertaining to clay and cement, it also comes with harmful effects. To try to prevent thixotropy from compromising the concrete, cationic polymer began to be used to counteract the thixotropy; however, this agent is needed to allow the mixing of the clay and cementitious material. There is now no true way to counteract the effect of thixotropy while also allowing it to break down the materials in the cement and clay. Etymology The word comes from Ancient Greek θίξις thixis 'touch' (from thinganein 'to touch') and -tropy, -tropous, from Ancient Greek -τρόπος -tropos 'of turning', from τρόπος tropos 'a turn', from τρέπειν trepein, 'to turn'. Hence, it can be translated as something that turns (or changes) when touched. The term was coined by Herbert Freundlich, originally for a sol-gel transformation. See also Bingham plastic Calcium Sulfate Kaye effect Nanocellulose Polymer Silly putty Time-dependent viscosity References External links Continuum mechanics Fluid dynamics Non-Newtonian fluids
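The defining behaviour described above, a finite time to reach equilibrium viscosity after a step change in shear rate, can be illustrated with a simple structural-kinetics model. The sketch below is an assumed, generic model rather than one given in this article: a structure parameter breaks down under shear and rebuilds at rest, and the viscosity interpolates between a fully structured and a fully broken-down value. All rate constants and viscosities are made-up illustrative numbers.

```python
# Minimal structural-kinetics sketch of thixotropy (an assumed model): a
# structure parameter lam in [0, 1] is broken down by shear and rebuilds at
# rest; viscosity follows the structure level.
def simulate(shear_rate_of_t, t_end=60.0, dt=0.01,
             k_build=0.1, k_break=0.5,              # made-up rates (1/s)
             eta_structured=10.0, eta_broken=0.5):  # made-up viscosities (Pa*s)
    lam, t = 1.0, 0.0        # start fully structured (long time at rest)
    history = []
    while t < t_end:
        gdot = shear_rate_of_t(t)
        # Structure breaks down in proportion to shear, rebuilds toward 1 at rest.
        dlam = k_build * (1.0 - lam) - k_break * lam * gdot
        lam = min(1.0, max(0.0, lam + dlam * dt))
        eta = eta_broken + lam * (eta_structured - eta_broken)
        history.append((t, eta))
        t += dt
    return history

def step(t):
    """Shear for the first 20 s (e.g. shaking), then leave at rest."""
    return 5.0 if t < 20.0 else 0.0

if __name__ == "__main__":
    for t, eta in simulate(step)[::1000]:
        print(f"t = {t:5.1f} s   eta = {eta:6.2f} Pa*s")
```

Running the sketch shows the viscosity dropping toward a low plateau while shear is applied and then recovering slowly over tens of seconds once it stops, which is the qualitative signature of a thixotropic fluid.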
Thixotropy
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
1,557
[ "Tribology", "Applied and interdisciplinary physics", "Continuum mechanics", "Chemical engineering", "Soil mechanics", "Classical mechanics", "Soil physics", "Surface science", "Materials science", "Mechanical engineering", "Piping", "Fluid dynamics" ]
817,771
https://en.wikipedia.org/wiki/Dilatant
A dilatant (also termed shear thickening) material is one in which viscosity increases with the rate of shear strain. Such a shear thickening fluid, also known by the initialism STF, is an example of a non-Newtonian fluid. This behaviour is usually not observed in pure materials, but can occur in suspensions. A dilatant is a non-Newtonian fluid where the shear viscosity increases with applied shear stress. This behavior is only one type of deviation from Newton's law of viscosity, and it is controlled by such factors as particle size, shape, and distribution. The properties of these suspensions depend on Hamaker theory and Van der Waals forces and can be stabilized electrostatically or sterically. Shear thickening behavior occurs when a colloidal suspension transitions from a stable state to a state of flocculation. A large portion of the properties of these systems are due to the surface chemistry of particles in dispersion, known as colloids. This can readily be seen with a mixture of cornstarch and water (sometimes called oobleck), which acts in counterintuitive ways when struck or thrown against a surface. Sand that is completely soaked with water also behaves as a dilatant; this is why, when walking on wet sand, a dry area appears directly underfoot. Rheopecty is a similar property in which viscosity increases with cumulative stress or agitation over time. The opposite of a dilatant material is a pseudoplastic. Definitions There are two types of deviation from Newton's law that are observed in real systems. The most common deviation is shear thinning behavior, where the viscosity of the system decreases as the shear rate is increased. The second deviation is shear thickening behavior where, as the shear rate is increased, the viscosity of the system also increases. This behavior is observed because the system crystallizes under stress and behaves more like a solid than a solution. Thus, the viscosity of a shear-thickening fluid is dependent on the shear rate. The presence of suspended particles often affects the viscosity of a solution. In fact, with the right particles, even a Newtonian fluid can exhibit non-Newtonian behavior. An example of this is cornstarch in water, which is included in the examples below. The parameters that control shear thickening behavior are: particle size and particle size distribution, particle volume fraction, particle shape, particle-particle interaction, continuous phase viscosity, and the type, rate, and time of deformation. In addition to these parameters, all shear thickening fluids are stabilized suspensions and have a volume fraction of solid that is relatively high. Viscosity of a solution as a function of shear rate is given by the power-law equation η = Kγ̇ⁿ⁻¹, where η is the viscosity, K is a material-based constant, and γ̇ is the applied shear rate. Dilatant behavior occurs when n is greater than 1. Below is a table of viscosity values for some common materials. Stabilized suspensions A suspension is composed of a fine, particulate phase dispersed throughout a differing, heterogeneous phase. Shear-thickening behavior is observed in systems with a solid, particulate phase dispersed within a liquid phase. These solutions are different from a colloid in that they are unstable: the solid particles in dispersion are sufficiently large for sedimentation, causing them to eventually settle, whereas the solids dispersed within a colloid are smaller and will not settle. There are multiple methods for stabilizing suspensions, including electrostatics and sterics.
In an unstable suspension, the dispersed, particulate phase will come out of solution in response to forces acting upon the particles, such as gravity or Hamaker attraction. The magnitude of the effect these forces have on pulling the particulate phase out of solution is proportional to the size of the particulates; for a large particulate, the gravitational forces are greater than the particle-particle interactions, whereas the opposite is true for small particulates. Shear thickening behavior is typically observed in suspensions of small, solid particulates, indicating that the particle-particle Hamaker attraction is the dominant force. Therefore, stabilizing a suspension is dependent upon introducing a counteractive repulsive force. Hamaker theory describes the attraction between bodies, such as particulates. It was realized that the explanation of Van der Waals forces could be upscaled from explaining the interaction between two molecules with induced dipoles to macro-scale bodies by summing all the intermolecular forces between the bodies. Similar to Van der Waals forces, Hamaker theory describes the magnitude of the particle-particle interaction as inversely proportional to the square of the distance. Therefore, many stabilized suspensions incorporate a long-range repulsive force that is dominant over Hamaker attraction when the interacting bodies are at a sufficient distance, effectively preventing the bodies from approaching one another. However, at short distances, the Hamaker attraction dominates, causing the particulates to coagulate and fall out of solution. Two common long-range forces used in stabilizing suspensions are electrostatics and sterics. Electrostatically stabilized suspensions Suspensions of similarly charged particles dispersed in a liquid electrolyte are stabilized through an effect described by the Helmholtz double layer model. The model has two layers. The first layer is the charged surface of the particle, which creates an electrostatic field that affects the ions in the electrolyte. In response, the ions create a diffuse layer of equal and opposite charge, effectively rendering the surface charge neutral. However, the diffuse layer creates a potential surrounding the particle that differs from the bulk electrolyte. The diffuse layer serves as the long-range force for stabilization of the particles. When particles near one another, the diffuse layer of one particle overlaps with that of the other particle, generating a repulsive force. The following equation provides the energy between two colloids as a result of the Hamaker interactions and electrostatic repulsion. where: , energy between a pair of colloids, , radius of colloids, , Hamaker constant between colloid and solvent, , distance between colloids, , surface ion concentration, , Boltzmann constant, , temperature in kelvins, surface excess, inverse Debye length. Sterically stabilized suspensions Different from electrostatics, sterically stabilized suspensions rely on the physical interaction of polymer chains attached to the surface of the particles to keep the suspension stabilized; the adsorbed polymer chains act as a spacer to keep the suspended particles separated at a sufficient distance to prevent the Hamaker attraction from dominating and pulling the particles out of suspension. The polymers are typically either grafted or adsorbed onto the surface of the particle. With grafted polymers, the backbone of the polymer chain is covalently bonded to the particle surface. 
An adsorbed polymer, by contrast, is a copolymer composed of lyophobic and lyophilic regions, where the lyophobic region non-covalently adheres to the particle surface and the lyophilic region forms the steric boundary or spacer. Theories behind shear thickening behavior Dilatancy in a colloid, or its ability to order in the presence of shear forces, is dependent on the ratio of interparticle forces. As long as interparticle forces such as Van der Waals forces dominate, the suspended particles remain in ordered layers. However, once shear forces dominate, particles enter a state of flocculation and are no longer held in suspension; they begin to behave like a solid. When the shear forces are removed, the particles spread apart and once again form a stable suspension. Shear thickening behavior is highly dependent upon the volume fraction of solid particulate suspended within the liquid. The higher the volume fraction, the less shear required to initiate the shear thickening behavior. The shear rate at which the fluid transitions from a Newtonian flow to a shear thickening behavior is known as the critical shear rate. Order to disorder transition When shearing a concentrated stabilized solution at a relatively low shear rate, the repulsive particle-particle interactions keep the particles in an ordered, layered, equilibrium structure. However, at shear rates elevated above the critical shear rate, the shear forces pushing the particles together overcome the repulsive particle-particle interactions, forcing the particles out of their equilibrium positions. This leads to a disordered structure, causing an increase in viscosity. The critical shear rate here is defined as the shear rate at which the shear forces pushing the particles together are equivalent to the repulsive particle interactions. Hydroclustering When the particles of a stabilized suspension transition from an immobile state to a mobile state, small groupings of particles form hydroclusters, increasing the viscosity. These hydroclusters are composed of particles momentarily compressed together, forming an irregular, rod-like chain of particles akin to a logjam or traffic jam. In theory the particles have extremely small interparticle gaps, rendering this momentary, transient hydrocluster incompressible. It is possible that additional hydroclusters will form through aggregation. Examples Corn starch and water (oobleck) Cornstarch is a common thickening agent used in cooking. It is also a very good example of a shear-thickening system. When a force is applied to a 1:1.25 mixture of water and cornstarch, the mixture acts as a solid and resists the force. Silica and polyethylene glycol Silica nano-particles are dispersed in a solution of polyethylene glycol. The silica particles provide a high-strength material when flocculation occurs. This allows it to be used in applications such as liquid body armor and brake pads. Applications Traction control Dilatant materials have certain industrial uses due to their shear-thickening behavior. For example, some all-wheel drive systems use a viscous coupling unit full of dilatant fluid to provide power transfer between front and rear wheels. On high-traction road surfacing, the relative motion between primary and secondary drive wheels is the same, so the shear is low and little power is transferred. When the primary drive wheels start to slip, the shear increases, causing the fluid to thicken.
As the fluid thickens, the torque transferred to the secondary drive wheels increases proportionally, until the maximum amount of power possible in the fully thickened state is transferred. (See also limited-slip differential, some types of which operate on the same principle.) To the operator, this system is entirely passive, engaging all four wheels to drive when needed and dropping back to two wheel drive once the need has passed. This system is generally used for on-road vehicles rather than off-road vehicles, since the maximum viscosity of the dilatant fluid limits the amount of torque that can be passed across the coupling. Body armor Various corporate and government entities are researching the application of shear-thickening fluids for use as body armor. Such a system could allow the wearer flexibility for a normal range of movement, yet provide rigidity to resist piercing by bullets, stabbing knife blows, and similar attacks. The principle is similar to that of mail armor, though body armor using a dilatant would be much lighter. The dilatant fluid would disperse the force of a sudden blow over a wider area of the user's body, reducing the blunt force trauma. However, the dilatant would not provide any additional protection against slow attacks, such as a slow but forceful stab, which would allow flow to occur. In one study, standard Kevlar fabric was compared to a composite armor of Kevlar and a proprietary shear-thickening fluid. The results showed that the Kevlar/fluid combination performed better than the pure-Kevlar material, despite having less than one-third the Kevlar thickness. Four examples of dilatant materials being used in personal protective equipment are Armourgel, D3O, ArtiLage (Artificial Cartilage foam) and "Active Protection System" manufactured by Dow Corning. In 2002, researchers at the U.S. Army Research Laboratory and University of Delaware began researching the use of liquid armor, or a shear-thickening fluid in body armor. Researchers demonstrated that high-strength fabrics such as Kevlar can be made more bulletproof and stab-resistant when impregnated with the fluid. The goal of the “liquid armor” technology is to create a new material that is low-cost and lightweight while still offering equivalent or superior ballistic properties compared to current Kevlar fabric. For their work on liquid armor, Dr. Eric Wetzel, an ARL mechanical engineer, and his team were awarded the 2002 Paul A. Siple Award, the Army’s highest award for scientific achievement, at the Army Science Conference. The company D3O invented a non-Newtonian–based material that has seen wide adaptation across a broad range of standard and custom applications, including motorcycle and extreme-sports protective gear, industrial work wear, military applications, and impact protection for electronics. The materials allow flexibility during normal wear but become stiff and protective when strongly impacted. While some products are marketed directly, much of their manufacturing capability goes to selling and license the material to other companies for use in their own lines of protective products. See also References External links Army Science: Robots, Liquid Armor and Virtual Reality Continuum mechanics Fluid dynamics Non-Newtonian fluids
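The competition between Hamaker (van der Waals) attraction and electrostatic double-layer repulsion described in the stabilization sections above can be illustrated numerically. The sketch below is not from the article: it uses textbook DLVO-style sphere-sphere expressions (an attractive term -A*a/(12h) and a screened repulsive term proportional to e^(-kappa*h)), and every parameter value (Hamaker constant, particle radius, ion concentration, surface term, Debye length) is an illustrative assumption.

```python
import numpy as np

# Illustrative (assumed) parameters for two equal spheres in water
A     = 1.0e-20    # Hamaker constant, J (assumed)
a     = 100e-9     # particle radius, m (assumed)
kB    = 1.381e-23  # Boltzmann constant, J/K
T     = 298.0      # temperature, K
C     = 6.0e23     # ion number concentration, 1/m^3 (~1 mM, assumed)
Gamma = 0.5        # dimensionless surface excess term (assumed)
kappa = 1.0e8      # inverse Debye length, 1/m (Debye length ~10 nm, assumed)

h = np.linspace(0.3e-9, 50e-9, 1000)   # surface-to-surface separation, m

V_vdw = -A * a / (12.0 * h)                                             # Hamaker attraction
V_edl = 64 * np.pi * a * C * kB * T * Gamma**2 / kappa**2 * np.exp(-kappa * h)  # double-layer repulsion
V_tot = V_vdw + V_edl

barrier_kT = V_tot.max() / (kB * T)
print(f"repulsive barrier ~ {barrier_kT:.0f} kT at h = {h[V_tot.argmax()]*1e9:.1f} nm")
```

A barrier of many kT keeps the particles from reaching the short separations where the attraction dominates, which is the qualitative picture of an electrostatically stabilized suspension given above.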
Dilatant
[ "Physics", "Chemistry", "Engineering" ]
2,779
[ "Continuum mechanics", "Chemical engineering", "Classical mechanics", "Piping", "Fluid dynamics" ]
19,767,753
https://en.wikipedia.org/wiki/Einasto%20profile
The Einasto profile (or Einasto model) is a mathematical function that describes how the density ρ of a spherical stellar system varies with distance r from its center. Jaan Einasto introduced his model at a 1963 conference in Alma-Ata, Kazakhstan. The Einasto profile possesses a power-law logarithmic slope of the form d ln ρ(r) / d ln r ∝ −r^α, which can be rearranged to give ρ(r) ∝ exp(−A r^α). The parameter α controls the degree of curvature of the profile. This can be seen by computing the slope on a log-log plot: d ln ρ / d ln r = −αA r^α. The larger α, the more rapidly the slope varies with radius (see figure). Einasto's law can be described as a generalization of a power law, ρ ∝ r^−N, which has a constant slope on a log-log plot. Einasto's model has the same mathematical form as Sersic's law, which is used to describe the surface brightness (i.e. projected density) profile of galaxies, except that the Einasto model describes a spherically symmetric density distribution in 3 dimensions, whereas the Sersic law describes a circularly symmetric surface density distribution in two dimensions. Einasto's model has been used to describe many types of system, including galaxies and dark matter halos. See also NFW profile References External links Spherical galaxy models with power-law logarithmic slope. A comprehensive paper that derives many properties of stellar systems obeying Einasto's law. Astrophysics Dark matter Equations of astronomy
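The sketch below is a minimal numerical illustration of the Einasto law as written above, not part of the original article; the parameter values (rho0, A, alpha) are arbitrary assumptions chosen only to show that the numerically measured logarithmic slope matches −αA r^α.

```python
import numpy as np

def einasto_density(r, rho0=1.0, A=1.0, alpha=0.5):
    """Einasto profile: rho(r) = rho0 * exp(-A * r**alpha)."""
    return rho0 * np.exp(-A * r**alpha)

# Numerically check that d ln(rho) / d ln(r) = -alpha * A * r**alpha
alpha, A = 0.5, 1.0
r = np.logspace(-2, 2, 500)
rho = einasto_density(r, A=A, alpha=alpha)

numeric_slope = np.gradient(np.log(rho), np.log(r))
analytic_slope = -alpha * A * r**alpha

print(np.max(np.abs(numeric_slope - analytic_slope)))  # small, apart from edge effects
```

Larger values of alpha make the slope steepen more quickly with radius, which is the curvature behaviour described in the text.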
Einasto profile
[ "Physics", "Astronomy" ]
299
[ "Dark matter", "Unsolved problems in astronomy", "Astronomical sub-disciplines", "Concepts in astronomy", "Equations of astronomy", "Unsolved problems in physics", "Astrophysics", "Exotic matter", "Physics beyond the Standard Model", "Matter" ]
19,774,226
https://en.wikipedia.org/wiki/Taylor%20column
A Taylor column is a fluid dynamics phenomenon that occurs as a result of the Coriolis effect. It was named after Geoffrey Ingram Taylor. Rotating fluids that are perturbed by a solid body tend to form columns parallel to the axis of rotation called Taylor columns. An object moving parallel to the axis of rotation in a rotating fluid experiences more drag force than what it would experience in a non rotating fluid. For example, a strongly buoyant ball (such as a pingpong ball) will rise to the surface more slowly than it would in a non-rotating fluid. This is because fluid in the path of the ball that is pushed out of the way tends to circulate back to the point it is shifted away from, due to the Coriolis effect. The faster the rotation rate, the smaller the radius of the inertial circle traveled by the fluid. In a non-rotating fluid the fluid parts above the rising ball and closes in underneath it, offering relatively little resistance to the ball. In a rotating fluid, the ball needs to push up a whole column of fluid above it, and it needs to drag a whole column of fluid along beneath it in order to rise to the surface. A rotating fluid thus displays some degree of rigidity. History Taylor columns were first observed by William Thomson, Lord Kelvin, in 1868. Taylor columns were featured in lecture demonstrations by Kelvin in 1881 and by John Perry in 1890. The phenomenon is explained via the Taylor–Proudman theorem, and it has been investigated by Taylor, Grace, Stewartson, and Maxworthy—among others. Theory Taylor columns have been rigorously studied. For Re<<1, Ek<<1, Ro<<1, the drag equation for a cylinder of radius, a, the following relation has been found. To derive this, Moore and Saffman solved the linearised Navier–Stokes equation along in cylindrical coordinates, where some of the vertical and radial components of the viscous term are taken to be small relative to the Coriolis term: To solve these equations, we incorporate the volume conservation condition as well: We use the Ekman compatibility relation for this geometry to restrict the form of the velocity at the disk surface: The resultant velocity fields can be solved in terms of Bessel functions. whereby for Ek<<1 the function A(k) is given by, Integrating the equation for the v, we can find the pressure and thus the drag force given by the first equation. Geophysical example Taylor columns form above peaks in the South Scotia Ridge in the Southern Ocean. They affect circulation and mixing in the Weddell-Scotia Confluence where the Antarctic circumpolar current (ACC) and the subpolar Weddell Gyre mix. The iceberg A23a is approximately 50km across and broke free from the Antarctic coast in 1986. From July 2021 it tracked along the length of the Antarctic Peninsula, and in April 2024 it got stuck in one such Taylor Column in the ACC located over the Pirie Bank. References Further reading External links Taylor columns (Martha Buckley, MIT) Record Player Fluid Dynamics: A Taylor Column Experiment (UCLA Spin Lab) Fluid mechanics Physical oceanography
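The small-Rossby-number, small-Ekman-number regime quoted above can be checked with back-of-the-envelope numbers. The sketch below is not from the article; it uses one common convention for the two dimensionless numbers (Ro = U/(2ΩL), Ek = ν/(2ΩL²)), and the rotating-tank parameters are illustrative assumptions.

```python
# Assumed rotating-tank parameters (illustrative only)
Omega = 1.0      # rotation rate, rad/s
L     = 0.05     # obstacle length scale, m
U     = 0.005    # flow speed past the obstacle, m/s
nu    = 1.0e-6   # kinematic viscosity of water, m^2/s

Ro = U / (2 * Omega * L)        # Rossby number (one common convention)
Ek = nu / (2 * Omega * L**2)    # Ekman number (one common convention)

print(f"Ro = {Ro:.3g}, Ek = {Ek:.3g}")
# Both << 1, so rotation dominates and a Taylor column can be expected
# to form over the obstacle.
```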
Taylor column
[ "Physics", "Engineering" ]
645
[ "Civil engineering", "Applied and interdisciplinary physics", "Physical oceanography", "Fluid mechanics" ]
19,777,162
https://en.wikipedia.org/wiki/Electrical%20resistance%20heating
Electrical resistance heating (ERH) is an intensive environmental remediation method that uses the flow of alternating current electricity to heat soil and groundwater and evaporate contaminants. Electric current is passed through a targeted soil volume between subsurface electrode elements. The resistance to electrical flow that exists in the soil causes the formation of heat; resulting in an increase in temperature until the boiling point of water at depth is reached. After reaching this temperature, further energy input causes a phase change, forming steam and removing volatile contaminants. ERH is typically more cost effective when used for treating contaminant source areas. Technology Electrical resistance heating is used by the environmental restoration industry for remediation of contaminated soil and groundwater. ERH consists of constructing electrodes in the ground, applying alternating current (AC) electricity to the electrodes and heating the subsurface to temperatures that promote the evaporation of contaminants. Volatilized contaminants are captured by a subsurface vapor recovery system and conveyed to the surface along with recovered air and steam. Similar to Soil vapor extraction, the air, steam and volatilized contaminants are then treated at the surface to separate water, air and the contaminants. Treatment of the various streams depends on local regulations and the amount of contaminant. Some low volatility organic contaminants have a short hydrolysis half-life For contaminants like these, i.e. 1,1,2,2-Tetrachloroethane and 1,1,1-trichloroethane, hydrolysis can be the primary form of remediation. As the subsurface is heated the hydrolysis half-life of the contaminant will decrease as described by the Arrhenius equation. This results in a rapid degradation of the contaminant. The hydrolysis by-product may be remediated by conventional ERH, however the majority of the mass of the primary contaminant will not be recovered but rather will degrade to a by-product. There are predominantly two electrical load arrangements for ERH: three-phase and six-phase. Three-phase heating consists of electrodes in a repeating triangular or delta pattern. Adjacent electrodes are of a different electrical phase so electricity conducts between them as shown in Figure 1. The contaminated area is depicted by the green shape while the electrodes are depicted by the numbered circles. Six-phase heating consists of six electrodes in a hexagonal pattern with a neutral electrode in the center of the array. The six-phase arrays are outlined in blue in Figure 2 below. Once again the contaminated area is depicted by the green shape while the electrodes are depicted by the numbered circles. In a six-phase heating pattern there can be hot spots and cold spots depending on the phases that are next to each other. For this reason, six-phase heating typically works best on small circular areas that are less than 65 feet in diameter. ERH is typically most effective on volatile organic compounds (VOCs). The chlorinated compounds perchloroethylene, trichloroethylene, and cis- or trans- 1,2-dichloroethylene are contaminants that are easily remediated with ERH. The table shows contaminants that can be remediated with ERH along with their respective boiling points. Less volatile contaminants like xylene or diesel can also be remediated with ERH but energy requirements increase as the volatility decreases. 
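The Arrhenius-type acceleration of hydrolysis mentioned above can be made concrete with a short calculation. The sketch below is not from the article: assuming first-order hydrolysis whose rate constant follows the Arrhenius equation, the half-life scales as t½(T₂) = t½(T₁)·exp[(Ea/R)(1/T₂ − 1/T₁)]; the activation energy and temperatures used here are illustrative assumptions, not measured values for any particular contaminant.

```python
import math

R  = 8.314          # gas constant, J/(mol K)
Ea = 100e3          # assumed activation energy, J/mol (illustrative)

def halflife_ratio(T1, T2, Ea=Ea):
    """Factor by which a first-order hydrolysis half-life changes from T1 to T2 (kelvin)."""
    return math.exp((Ea / R) * (1.0 / T2 - 1.0 / T1))

T_ambient = 283.0   # ~10 C groundwater (assumed)
T_heated  = 373.0   # ~100 C, near the boiling point of water (assumed)

factor = halflife_ratio(T_ambient, T_heated)
print(f"half-life shortened by a factor of ~{1/factor:,.0f}")
```

With an activation energy of this order, a half-life measured in years at ambient temperature drops to hours at ERH operating temperatures, consistent with hydrolysis becoming a primary removal mechanism as described above.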
Electrode spacing and operating time can be adjusted to balance the overall remediation cost with the desired cleanup time. A typical remediation may consist of electrodes spaced 15 to 20 feet apart with operating times usually less than a year. The design and cost of an ERH remediation system depends on a number of factors, primarily the volume of soil/groundwater to be treated, the type of contamination, and the treatment goals. The physical and chemical properties of the target compounds are governed by laws that make heated remediations advantageous over most conventional methods. The electrical energy usage required for heating the subsurface and volatilizing the contaminants can account for 5 to 40% of the overall remediation cost. There are several laws that govern an ERH remediation. Dalton's law governs the boiling point of a relatively insoluble contaminant. Raoult's law governs the boiling point of mutually soluble co-contaminants and Henry's law governs the ratio of the contaminant in the vapor phase to the contaminant in the liquid phase. Dalton's law For mutually insoluble compounds, Dalton's law states that the partial pressure of a non aqueous phase liquid (NAPL) is equal to its vapor pressure, and that the NAPL in contact with water will boil when the vapor pressure of water plus the vapor pressure of the VOC is equal to ambient pressure. When a VOC-steam bubble is formed the composition of the bubble is proportional to the composite's respective vapor pressures. Raoult's law For mutually soluble compounds, Raoult's law states that the partial pressure of a compound is equal to its vapor pressure times its mole fraction. This means that mutually soluble contaminants will volatilize slower than if there was only one compound present. Henry's law Henry's law describes the tendency of a compound to join air in the vapor phase or dissolve in water. The Henry's Law constant, sometimes called coefficient, is specific to each compound and depends on the system temperature. The constant is used to predict the amount of contaminant what will remain in the vapor phase (or transfer to the liquid phase), upon exiting the condenser. Recent Innovations in ERH Significant ERH technological advancements have occurred over the last five years. Three areas of focus have been: bedrock remediation, 1,4-dioxane and other emerging contaminants, and controlled low temperature heat to enhance other remedial or natural processes. Bedrock Treatment ERH has been used for over 15 years for treatment of unconsolidated soils in both the vadose and saturated zones. Recent advancements and results show that ERH can be an effective treatment method for bedrock. At an ERH site, the primary electrical current path is on the thin layer of water immediately adjacent to the soil or rock grains. Little current is carried by the water in the pore volume. It is not the pore fluid that dominates the electrical conductivity; it is the grain wetting fluid that dominates the electrical conductivity. Sedimentary rock will typically possess the thin layer of water required for current flow. This means ERH can effectively be used for treatment of sedimentary bedrock, which typically has significant primary porosity. 1,4-Dioxane 1,4-dioxane is a recently identified contaminant of concern. The regulatory criteria for 1,4-dioxane is constantly changing as more is learned about this contaminant. 
1,4-dioxane has a high solubility in water and a low Henry's Law constant which combine to present complex challenges associated with remediation. At ambient conditions, the physical properties of 1,4-dioxane indicate air stripping is not an efficient treatment mechanism. Recent ERH remediation results indicate that ERH creates conditions favorable for treatment. ERH remediation involves steam stripping, which historically had not been investigated for 1,4-dioxane. At ERH sites, steam stripping was observed to effectively transfer 1,4-dioxane to the vapor phase for subsequent treatment. 99.8% reductions (or greater) in 1,4-dioxane concentrations in groundwater have been documented on recent ERH remediation. Monitoring of the above grade treatment streams indicates that 95% of 1,4-dioxane remained in the vapor stream after removal from the subsurface. Furthermore, granular activated carbon has proven to be an effective 1,4-dioxane vapor treatment method. Controlled Low Temperature Heating Volatilization is the primary removal mechanism on most ERH sites. However, ERH can also be used to enhance other processes, some naturally occurring, to reduce the cost for treatment of a plume. ERH can be used to provide controlled low temperature heating for projects with remediation processes that do not involve steam stripping. "Low temperature heating" refers to the targeting of a subsurface temperature that is less than the boiling point of water. Examples of low temperature ERH include heat-enhanced bioremediation, heating the subsurface to temperatures above the solubility of dissolved gasses to induce VOC stripping (most notably carbon dioxide ebullition), heat enhanced in situ chemical oxidation (especially for persulfate activation), and heat-enhanced reduction (such as with iron-catalyzed reactions). ERH low-temperature heating can also be used to hydrolyze chlorinated alkanes in-situ at sub-boiling temperatures where hydrochloric acid released during hydrolysis further reacts with subsurface carbonates and bicarbonates to produce carbon dioxide for subsurface stripping of VOCs. Using low temperature heating coupled with bioremediation, chemical oxidation, or dechlorination will result in increased reaction rates. This can significantly reduce the time required for these remediation processes as compared to a remediation at ambient temperature. In addition, a low temperature option does not require the use of the above grade treatment system for recovered vapors, as boiling temperatures will not be reached. This means less above grade infrastructure and lower overall cost. When heat is combined with multi-phase extraction, the elevated temperatures will reduce the viscosity and surface tension of the recovered fluids which makes removal faster and easier. This is the original purpose for the development of ERH - to enhance oil recovery (see above). Weaknesses Weaknesses of ERH include heat losses on small sites. Treatment volumes that have a large surface area but are thin with respect to depth will have significant heat losses which makes ERH less efficient. The minimum treatment interval for efficient ERH remediation is approximately 10 vertical feet. Co-contaminants like oil or grease make remediation more difficult. Oil and grease cause a Raoult's Law effect which requires more energy to remove the contaminants. Peat or high organic carbon in the subsurface will preferentially adsorb VOCs due to van der Waals forces. 
This preferential adsorption will increase the amount of energy required to remove the VOCs from the subsurface. Fuel sites are less-commonly treated by ERH because other less-expensive remediation technologies are available and because fuel sites are usually thin (resulting in significant heat losses). Sites within landfills are also challenging because metallic debris can distort the electric current paths. ERH is more uniform in natural soil or rock. Strengths ERH is adaptable to all soil types and sedimentary bedrock. ERH is also effective in both the vadose and saturated zones. Certain lithologies can limit traditional methods of remediation by preventing a reliable removal/destruction pathway for the contamination of concern. Because electricity can and does travel through any lithology that contains some water, ERH can be effective in any soil type. By forming buoyant steam bubbles during the heating process, ERH creates a carrier gas that transports the contamination of concern up and out of any soil type. ERH is not capable of desiccating the subsurface. In order for the subsurface to conduct electricity, there must be water present in the subsurface. Conductivity will cease before the subsurface is desiccated. ERH is commonly applied under active buildings or manufacturing facilities. Electrodes can be installed above grade within a fenced area or below grade to allow for unrestricted surface access to the treatment area. Although principally used for contaminant source areas, ERH can be used to achieve low remedial goals such as maximum contaminant levels, MCLs, for drinking water. After ERH treatment, elevated subsurface temperatures will slowly cool over a period of months or years and return to ambient. This period with elevated temperatures is an important part of the remediation process. The elevated temperatures will enhance Bioremediation, hydrolysis and iron reductive dehalogenation. References External links CLU-IN Remediation Technology Overview Citizen's Guide to In Situ Thermal Treatment EPA CLU-IN Technology News and Trends: Electrical Resistance Heating Resolves Difficult Removal of CEC Source Area – Summer 2014 Electrical Resistance Heating of Volatile Organic Compounds in Sedimentary Rock - Remediation Journal, Winter 2014 In Situ Remediation of 1,4-Dioxane Using Electrical Resistance Heating – Remediation Journal, Spring 2015 EPA CLU-IN Technology News and Trends: Strategic Sampling and Adaptive Remedy Implementation for Improved Cleanup Performance at Commencement Bay-South Tacoma Channel – Winter 2015 EPA CLU-IN Technology News and Trends: Continued Triad Approach for NAPL Removal Expedites Fort Lewis Cleanup - July 2005 EPA CLU-IN Technology News and Trends: Air Force Uses Electrical Resistance Heating for TCE Source Removal and Plume Reduction – Winter 2004 NAVFAC Cost and Performance Review for ERH - March 2007 Pollution control technologies Soil contamination Waste treatment technology
Electrical resistance heating
[ "Chemistry", "Engineering", "Environmental_science" ]
2,786
[ "Water treatment", "Environmental chemistry", "Pollution control technologies", "Soil contamination", "Environmental engineering", "Waste treatment technology" ]
19,777,655
https://en.wikipedia.org/wiki/List%20of%20elements%20by%20atomic%20properties
This is a list of chemical elements and their atomic properties, ordered by atomic number (Z). Since valence electrons are not clearly defined for the d-block and f-block elements, there not being a clear point at which further ionisation becomes unprofitable, a purely formal definition as number of electrons in the outermost shell has been used. Table [*] a few atomic radii are calculated, not experimental [—] a long dash marks properties for which there is no data available [ ] a blank marks properties for which no data has been found References Atomic Number
List of elements by atomic properties
[ "Chemistry" ]
121
[ "Lists of chemical elements" ]
2,630,105
https://en.wikipedia.org/wiki/Acoustic%20wave%20equation
In physics, the acoustic wave equation is a second-order partial differential equation that governs the propagation of acoustic waves through a material medium resp. a standing wavefield. The equation describes the evolution of acoustic pressure or particle velocity as a function of position and time . A simplified (scalar) form of the equation describes acoustic waves in only one spatial dimension, while a more general form describes waves in three dimensions. For lossy media, more intricate models need to be applied in order to take into account frequency-dependent attenuation and phase speed. Such models include acoustic wave equations that incorporate fractional derivative terms, see also the acoustic attenuation article or the survey paper. Definition in one dimension The wave equation describing a standing wave field in one dimension (position ) is where is the acoustic pressure (the local deviation from the ambient pressure) and the speed of sound, using subscript notation for the partial derivatives. Derivation Start with the ideal gas law where the absolute temperature of the gas and specific gas constant . Then, assuming the process is adiabatic, pressure can be considered a function of density . The conservation of mass and conservation of momentum can be written as a closed system of two equations This coupled system of two nonlinear conservation laws can be written in vector form as: with To linearize this equation, let where is the (constant) background state and is a sufficiently small pertubation, i.e., any powers or products of can be discarded. Hence, the taylor expansion of gives: where This results in the linearized equation Likewise, small pertubations of the components of can be rewritten as: such that and pressure pertubations relate to density pertubations as: such that: where is a constant, resulting in the alternative form of the linear acoustics equations: where is the bulk modulus of compressibility. After dropping the tilde for convenience, the linear first order system can be written as: While, in general, a non-zero background velocity is possible (e.g. when studying the sound propagation in a constant-strenght wind), it will be assumed that . Then the linear system reduces to the second-order wave equation: with the speed of sound. Hence, the acoustic equation can be derived from a system of first-order advection equations that follow directly from physics, i.e., the first integrals: with Conversely, given the second-order equation a first-order system can be derived: with where matrix and are similar. Solution Provided that the speed is a constant, not dependent on frequency (the dispersionless case), then the most general solution is where and are any two twice-differentiable functions. This may be pictured as the superposition of two waveforms of arbitrary profile, one () traveling up the x-axis and the other () down the x-axis at the speed . The particular case of a sinusoidal wave traveling in one direction is obtained by choosing either or to be a sinusoid, and the other to be zero, giving . where is the angular frequency of the wave and is its wave number. In three dimensions Equation Feynman provides a derivation of the wave equation for sound in three dimensions as where is the Laplace operator, is the acoustic pressure (the local deviation from the ambient pressure), and is the speed of sound. A similar looking wave equation but for the vector field particle velocity is given by . 
In some situations, it is more convenient to solve the wave equation for an abstract scalar field, the velocity potential φ, which satisfies the same wave equation ∂²φ/∂t² = c²∇²φ, and then derive the physical quantities particle velocity and acoustic pressure from it by the equations (or definition, in the case of particle velocity): v = ∇φ, p = −ρ ∂φ/∂t. Solution The following solutions are obtained by separation of variables in different coordinate systems. They are phasor solutions, that is they have an implicit time-dependence factor of e^(iωt), where ω = kc is the angular frequency. The explicit time dependence is given by p(r, t; k) = Re[p(r; k)·e^(iωt)]. Here k = ω/c is the wave number. Cartesian coordinates: p = A·e^(±ikx) (a plane wave). Cylindrical coordinates: p = A·H₀⁽¹⁾(kr) + B·H₀⁽²⁾(kr), where the asymptotic approximations to the Hankel functions, when kr → ∞, are H₀⁽¹⁾(kr) ≈ √(2/(πkr))·e^(i(kr − π/4)) and H₀⁽²⁾(kr) ≈ √(2/(πkr))·e^(−i(kr − π/4)). Spherical coordinates: p = (A/r)·e^(ikr) + (B/r)·e^(−ikr). Depending on the chosen Fourier convention, one of these represents an outward travelling wave and the other a nonphysical inward travelling wave. The inward travelling solution is nonphysical here only because of the singularity that occurs at r = 0; inward travelling waves do exist. See also Acoustics Acoustic attenuation Acoustic theory Wave equation One-way wave equation Differential equations Thermodynamics Fluid dynamics Pressure Ideal gas law Notes References Acoustic equations
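A minimal numerical sketch (not from the article) of the one-dimensional equation and its general solution: it builds p(x, t) = f(x − ct) + g(x + ct) from two arbitrary smooth pulses and checks by finite differences that ∂²p/∂t² = c²·∂²p/∂x²; the pulse shapes and the value of c are assumptions.

```python
import numpy as np

c = 343.0                                   # speed of sound, m/s (assumed: air)
f = lambda s: np.exp(-s**2 / 0.01)          # right-travelling pulse (arbitrary shape)
g = lambda s: 0.5 * np.exp(-s**2 / 0.04)    # left-travelling pulse (arbitrary shape)

def p(x, t):
    """d'Alembert solution: superposition of right- and left-travelling waves."""
    return f(x - c * t) + g(x + c * t)

x, t = np.linspace(-2.0, 2.0, 2001), 1.0e-3
dx, dt = x[1] - x[0], 1.0e-7

p_tt = (p(x, t + dt) - 2 * p(x, t) + p(x, t - dt)) / dt**2   # second time derivative
p_xx = (p(x + dx, t) - 2 * p(x, t) + p(x - dx, t)) / dx**2   # second space derivative

residual = np.max(np.abs(p_tt - c**2 * p_xx)) / np.max(np.abs(p_tt))
print(f"relative residual of the wave equation: {residual:.2e}")
```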
Acoustic wave equation
[ "Physics" ]
923
[ "Equations of physics", "Acoustic equations" ]
2,630,316
https://en.wikipedia.org/wiki/Brunt%E2%80%93V%C3%A4is%C3%A4l%C3%A4%20frequency
In atmospheric dynamics, oceanography, asteroseismology and geophysics, the Brunt–Väisälä frequency, or buoyancy frequency, is a measure of the stability of a fluid to vertical displacements such as those caused by convection. More precisely it is the frequency at which a vertically displaced parcel will oscillate within a statically stable environment. It is named after David Brunt and Vilho Väisälä. It can be used as a measure of atmospheric stratification. Derivation for a general fluid Consider a parcel of water or gas that has density . This parcel is in an environment of other water or gas particles where the density of the environment is a function of height: . If the parcel is displaced by a small vertical increment , and it maintains its original density so that its volume does not change, it will be subject to an extra gravitational force against its surroundings of: where is the gravitational acceleration, and is defined to be positive. We make a linear approximation to , and move to the RHS: The above second-order differential equation has the following solution: where the Brunt–Väisälä frequency is: For negative , the displacement has oscillating solutions (and N gives our angular frequency). If it is positive, then there is run away growth – i.e. the fluid is statically unstable. In meteorology and astrophysics For a gas parcel, the density will only remain fixed as assumed in the previous derivation if the pressure, , is constant with height, which is not true in an atmosphere confined by gravity. Instead, the parcel will expand adiabatically as the pressure declines. Therefore a more general formulation used in meteorology is: where is potential temperature, is the local acceleration of gravity, and is geometric height. Since , where is a constant reference pressure, for a perfect gas this expression is equivalent to: where in the last form , the adiabatic index. Using the ideal gas law, we can eliminate the temperature to express in terms of pressure and density: This version is in fact more general than the first, as it applies when the chemical composition of the gas varies with height, and also for imperfect gases with variable adiabatic index, in which case , i.e. the derivative is taken at constant entropy, . If a gas parcel is pushed up and , the air parcel will move up and down around the height where the density of the parcel matches the density of the surrounding air. If the air parcel is pushed up and , the air parcel will not move any further. If the air parcel is pushed up and , (i.e. the Brunt–Väisälä frequency is imaginary), then the air parcel will rise and rise unless becomes positive or zero again further up in the atmosphere. In practice this leads to convection, and hence the Schwarzschild criterion for stability against convection (or the Ledoux criterion if there is compositional stratification) is equivalent to the statement that should be positive. The Brunt–Väisälä frequency commonly appears in the thermodynamic equations for the atmosphere and in the structure of stars. In oceanography In the ocean where salinity is important, or in fresh water lakes near freezing, where density is not a linear function of temperature:where , the potential density, depends on both temperature and salinity. An example of Brunt–Väisälä oscillation in a density stratified liquid can be observed in the 'Magic Cork' movie here . 
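A minimal numerical sketch (not part of the article) of the two working formulas discussed above: for the atmosphere N² = (g/θ)(dθ/dz) using potential temperature, and for the ocean N² = −(g/ρ)(dρ/dz) using potential density. The gradient values are typical illustrative assumptions.

```python
import numpy as np

g = 9.81  # gravitational acceleration, m/s^2

# Atmosphere: potential temperature increasing with height (assumed typical values)
theta     = 300.0      # K
dtheta_dz = 3.0e-3     # K/m (3 K per km, assumed)
N_atm = np.sqrt(g / theta * dtheta_dz)

# Ocean: potential density decreasing upward, i.e. stable (assumed typical values)
rho     = 1025.0       # kg/m^3
drho_dz = -1.0e-2      # kg/m^4 (density decreases with height z, assumed)
N_ocean = np.sqrt(-g / rho * drho_dz)

print(f"N (atmosphere) ~ {N_atm:.3e} 1/s, buoyancy period ~ {2*np.pi/N_atm/60:.1f} min")
print(f"N (ocean)      ~ {N_ocean:.3e} 1/s, buoyancy period ~ {2*np.pi/N_ocean/60:.1f} min")
```

Values of order 10⁻² s⁻¹, i.e. buoyancy periods of roughly ten minutes, are typical of the troposphere and the upper ocean.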
Context The concept derives from Newton's Second Law when applied to a fluid parcel in the presence of a background stratification (in which the density changes in the vertical - i.e. the density can be said to have multiple vertical layers). The parcel, perturbed vertically from its starting position, experiences a vertical acceleration. If the acceleration is back towards the initial position, the stratification is said to be stable and the parcel oscillates vertically. In this case, N² > 0, and the angular frequency of oscillation is given by N. If the acceleration is away from the initial position (N² < 0), the stratification is unstable. In this case, overturning or convection generally ensues. The Brunt–Väisälä frequency relates to internal gravity waves: it is the frequency when the waves propagate horizontally; and it provides a useful description of atmospheric and oceanic stability. See also Buoyancy Bénard cell References Atmospheric thermodynamics Atmospheric dynamics Fluid dynamics Oceanography Buoyancy
Brunt–Väisälä frequency
[ "Physics", "Chemistry", "Engineering", "Environmental_science" ]
923
[ "Hydrology", "Applied and interdisciplinary physics", "Atmospheric dynamics", "Oceanography", "Chemical engineering", "Piping", "Fluid dynamics" ]
2,632,079
https://en.wikipedia.org/wiki/Fanning%20friction%20factor
The Fanning friction factor (named after American engineer John T. Fanning) is a dimensionless number used as a local parameter in continuum mechanics calculations. It is defined as the ratio between the local shear stress and the local flow kinetic energy density: where is the local Fanning friction factor (dimensionless); is the local shear stress (units of pascals (Pa) = kg/m, or pounds per square foot (psf) = lbm/ft); is the bulk dynamic pressure (Pa or psf), given by: is the density of the fluid (kg/m or lbm/ft) is the bulk flow velocity (m/s or ft/s) In particular the shear stress at the wall can, in turn, be related to the pressure loss by multiplying the wall shear stress by the wall area ( for a pipe with circular cross section) and dividing by the cross-sectional flow area ( for a pipe with circular cross section). Thus Fanning friction factor formula This friction factor is one-fourth of the Darcy friction factor, so attention must be paid to note which one of these is meant in the "friction factor" chart or equation consulted. Of the two, the Fanning friction factor is the more commonly used by chemical engineers and those following the British convention. The formulas below may be used to obtain the Fanning friction factor for common applications. The Darcy friction factor can also be expressed as where: is the shear stress at the wall is the density of the fluid is the flow velocity averaged on the flow cross section For laminar flow in a round tube From the chart, it is evident that the friction factor is never zero, even for smooth pipes because of some roughness at the microscopic level. The friction factor for laminar flow of Newtonian fluids in round tubes is often taken to be: where Re is the Reynolds number of the flow. For a square channel the value used is: For turbulent flow in a round tube Hydraulically smooth piping Blasius developed an expression of friction factor in 1913 for the flow in the regime . Koo introduced another explicit formula in 1933 for a turbulent flow in region of Pipes/tubes of general roughness When the pipes have certain roughness , this factor must be taken in account when the Fanning friction factor is calculated. The relationship between pipe roughness and Fanning friction factor was developed by Haaland (1983) under flow conditions of where is the roughness of the inner surface of the pipe (dimension of length) D is inner pipe diameter; The Swamee–Jain equation is used to solve directly for the Darcy–Weisbach friction factor f for a full-flowing circular pipe. It is an approximation of the implicit Colebrook–White equation. Fully rough conduits As the roughness extends into turbulent core, the Fanning friction factor becomes independent of fluid viscosity at large Reynolds numbers, as illustrated by Nikuradse and Reichert (1943) for the flow in region of . The equation below has been modified from the original format which was developed for Darcy friction factor by a factor of General expression For the turbulent flow regime, the relationship between the Fanning friction factor and the Reynolds number is more complex and is governed by the Colebrook equation which is implicit in : Various explicit approximations of the related Darcy friction factor have been developed for turbulent flow. Stuart W. Churchill developed a formula that covers the friction factor for both laminar and turbulent flow. This was originally produced to describe the Moody chart, which plots the Darcy-Weisbach Friction factor against Reynolds number. 
The Darcy–Weisbach friction factor, also called the Moody friction factor, is 4 times the Fanning friction factor, and so a factor of 1/4 has been applied to produce the formula given below. Re, Reynolds number (unitless); ε, roughness of the inner surface of the pipe (dimension of length); D, inner pipe diameter; ln is the natural logarithm. Here, f is the Fanning friction factor, not the Darcy–Weisbach friction factor, which is 4 times larger. Flows in non-circular conduits Due to the geometry of non-circular conduits, the Fanning friction factor can be estimated from the algebraic expressions above by using the hydraulic radius when calculating the Reynolds number. Application The friction head can be related to the pressure loss due to friction by dividing the pressure loss by the product of the acceleration due to gravity and the density of the fluid. Accordingly, the relationship between the friction head and the Fanning friction factor is: h_f = 4f (L/D) (u²/2g) = 2f u² L / (g D), where: h_f is the friction loss (in head) of the pipe, f is the Fanning friction factor of the pipe, u is the flow velocity in the pipe, L is the length of pipe, g is the local acceleration of gravity, and D is the pipe diameter. References Further reading Dimensionless numbers of fluid mechanics Equations of fluid dynamics Fluid dynamics Piping
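A minimal calculator (not from the article) for the relations above: the laminar value f = 16/Re, a turbulent value obtained by iterating the standard Colebrook equation for the Darcy factor and dividing by 4 (using the 4:1 relation stated above), and the friction head h_f = 2·f·u²·L/(g·D). The pipe dimensions, flow speed and roughness are assumed example values.

```python
import math

def fanning_laminar(Re):
    """Fanning friction factor for laminar flow in a round tube."""
    return 16.0 / Re

def fanning_turbulent(Re, rel_rough):
    """Iterate the Colebrook equation for the Darcy factor, then divide by 4."""
    fD = 0.02                                   # initial guess for the Darcy factor
    for _ in range(50):
        fD = (-2.0 * math.log10(rel_rough / 3.7 + 2.51 / (Re * math.sqrt(fD)))) ** -2
    return fD / 4.0

# Assumed example: water in a 50 mm pipe
rho, mu = 998.0, 1.0e-3        # density kg/m^3, dynamic viscosity Pa.s
D, L, u = 0.05, 10.0, 2.0      # diameter m, length m, velocity m/s
eps = 4.5e-5                   # absolute roughness, m (assumed)
g = 9.81

Re = rho * u * D / mu
f = fanning_laminar(Re) if Re < 2100 else fanning_turbulent(Re, eps / D)
h_f = 2.0 * f * u**2 * L / (g * D)

print(f"Re = {Re:.3g}, Fanning f = {f:.4f}, head loss = {h_f:.2f} m over {L} m of pipe")
```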
Fanning friction factor
[ "Physics", "Chemistry", "Engineering" ]
989
[ "Equations of fluid dynamics", "Equations of physics", "Building engineering", "Chemical engineering", "Mechanical engineering", "Piping", "Fluid dynamics" ]
2,634,020
https://en.wikipedia.org/wiki/Patterson%20function
The Patterson function is used to solve the phase problem in X-ray crystallography. It was introduced in 1935 by Arthur Lindo Patterson while he was a visiting researcher in the laboratory of Bertram Eugene Warren at MIT. The Patterson function is defined as P(u, v, w) = Σ_hkl |F_hkl|²·e^(−2πi(hu + kv + lw)). It is essentially the Fourier transform of the intensities rather than the structure factors. The Patterson function is also equivalent to the electron density convolved with its inverse: P(u) = ρ(r) ∗ ρ(−r). Furthermore, a Patterson map of N points will have N(N − 1) peaks, excluding the central (origin) peak and any overlap. The peaks' positions in the Patterson function are the interatomic distance vectors and the peak heights are proportional to the product of the number of electrons in the atoms concerned. Because for each vector between atoms i and j there is an oppositely oriented vector of the same length (between atoms j and i), the Patterson function always has centrosymmetry. One-dimensional example Consider a one-dimensional structure given by a series of delta functions; its Patterson function is then given by a corresponding series of delta functions and unit step functions. References External links Crystallography
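The autocorrelation picture above can be demonstrated with a toy one-dimensional structure. The sketch below is not from the article: it places a few point atoms (positions and electron counts are assumed values) in a unit cell, forms the Patterson function as the inverse Fourier transform of the intensities |F|², and recovers peaks at the interatomic vectors with heights proportional to the products of the electron counts.

```python
import numpy as np

n = 1000                           # grid points across one unit cell
positions = [0.10, 0.35, 0.70]     # fractional atomic coordinates (assumed)
Z = [6, 8, 16]                     # electron counts, e.g. C, O, S (assumed)

# Point-atom ("delta function") electron density on the grid
rho = np.zeros(n)
for x, z in zip(positions, Z):
    rho[int(round(x * n)) % n] += z

# Patterson function: inverse Fourier transform of the intensities |F|^2,
# equivalently the autocorrelation of rho with its inverse
F = np.fft.fft(rho)
P = np.fft.ifft(np.abs(F) ** 2).real

# Besides the origin peak, peaks sit at u = +/-(x_i - x_j) mod 1,
# so for N = 3 atoms there are N(N-1) = 6 of them, arranged centrosymmetrically
peak_positions = np.sort(np.argsort(P)[::-1][:7]) / n
print(peak_positions)   # ~ [0.0, 0.25, 0.35, 0.40, 0.60, 0.65, 0.75]
```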
Patterson function
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
215
[ "Crystallography", "Condensed matter physics", "Materials science" ]
2,634,195
https://en.wikipedia.org/wiki/Electrical%20ballast
An electrical ballast is a device placed in series with a load to limit the amount of current in an electrical circuit. A familiar and widely used example is the inductive ballast used in fluorescent lamps to limit the current through the tube, which would otherwise rise to a destructive level due to the negative differential resistance of the tube's voltage-current characteristic. Ballasts vary greatly in complexity. They may be as simple as a resistor, inductor, or capacitor (or a combination of these) wired in series with the lamp; or as complex as the electronic ballasts used in compact fluorescent lamps (CFLs). Current limiting An electrical ballast is a device that limits the current through an electrical load. These are most often used when a load (such as an arc discharge) has its terminal voltage decline when current through the load increases. If such a device were connected to a constant-voltage power supply, it would draw an increasing amount of current until it is destroyed or causes the power supply to fail. To prevent this, a ballast provides a positive resistance or reactance that limits the current. The ballast provides for the proper operation of the negative-resistance device by limiting current. Ballasts can also be used simply to limit the current in an ordinary, positive-resistance circuit. Prior to the advent of solid-state ignition, automobile ignition systems commonly included a ballast resistor to regulate the voltage applied to the ignition system. Resistors Fixed resistors For simple, low-powered loads such as a neon lamp, a fixed resistor is commonly used. Because the resistance of the ballast resistor is large it determines the current in the circuit, even in the face of negative resistance introduced by the neon lamp. Ballast was also a component used in early model automobile engines that lowered the supply voltage to the ignition system after the engine had been started. Starting the engine requires a significant amount of electrical current from the battery, resulting in an equally significant voltage drop. To allow the engine to start, the ignition system was designed to operate on this lower voltage. But once the vehicle was started and the starter disengaged, the normal operating voltage was too high for the ignition system. To avoid this problem, a ballast resistor was inserted in series with the ignition system, resulting in two different operating voltages for the starting and ignition systems. Occasionally, this ballast resistor would fail and the classic symptom of this failure was that the engine ran while being cranked (while the resistor was bypassed) but stalled immediately when cranking ceased (and the resistor was reconnected in the circuit via the ignition switch). Modern electronic ignition systems (those used since the 1980s or late '70s) do not require a ballast resistor as they are flexible enough to operate on the lower cranking voltage or the normal operating voltage. Another common use of a ballast resistor in the automotive industry is adjusting the ventilation fan speed. The ballast is a fixed resistor with usually two center taps, and the fan speed selector switch is used to bypass portions of the ballast: all of them for full speed, and none for the low speed setting. A very common failure occurs when the fan is being constantly run at the next-to-full speed setting (usually 3 out of 4). This will cause a very short piece of resistor coil to be operated with a relatively high current (up to 10 A), eventually burning it out. 
This will render the fan unable to run at the reduced speed settings. In some consumer electronic equipment, notably in television sets in the era of valves (vacuum tubes), but also in some low-cost record players, the vacuum tube heaters were connected in series. Since the voltage drop across all the heaters in series was usually less than the full mains voltage, it was necessary to provide a ballast to drop the excess voltage. A resistor was often used for this purpose, as it was cheap and worked with both alternating current (AC) and direct current (DC). Self-variable resistors Some ballast resistors have the property of increasing in resistance as current through them increases, and decreasing in resistance as current decreases. Physically, some such devices are often built quite like incandescent lamps. Like the tungsten filament of an ordinary incandescent lamp, if current increases, the ballast resistor gets hotter, its resistance goes up, and its voltage drop increases. If current decreases, the ballast resistor gets colder, its resistance drops, and the voltage drop decreases. Therefore, the ballast resistor reduces variations in current, despite variations in applied voltage or changes in the rest of an electric circuit. These devices are sometimes called "barretters" and were used in the series heating circuits of 1930s to 1960s AC/DC radio and TV home receivers. This property can lead to more precise current control than merely choosing an appropriate fixed resistor. The power lost in the resistive ballast is also reduced because a smaller portion of the overall power is dropped in the ballast compared to what might be required with a fixed resistor. Earlier, household clothes dryers sometimes incorporated a germicidal lamp in series with an ordinary incandescent lamp; the incandescent lamp operated as the ballast for the germicidal lamp. A commonly used light in the home in the 1960s in 220–240 V countries was a circular tube ballasted by an under-run regular mains filament lamp. Self ballasted mercury-vapor lamps incorporate ordinary tungsten filaments within the overall envelope of the lamp to act as the ballast, and to partially compensate for the red-deficient light produced by the mercury vapor process . Reactive ballasts An inductor, usually a choke, is very common in line-frequency ballasts to provide the proper starting and operating electrical condition to power a fluorescent lamp or a high intensity discharge lamp. (Because of the use of the inductor, such ballasts are usually called magnetic ballasts.) The inductor has two benefits: Its reactance limits the power available to the lamp with only minimal power losses in the inductor The voltage spike produced when current through the inductor is rapidly interrupted is used in some circuits to first strike the arc in the lamp. A disadvantage of the inductor is that current is shifted out of phase with the voltage, producing a poor power factor. In more expensive ballasts, a capacitor is often paired with the inductor to correct the power factor. In autotransformer ballasts that control two or more lamps, line-frequency ballasts commonly use different phase relationships between the multiple lamps. This not only mitigates the flicker of the individual lamps, it also helps maintain a high power factor. These ballasts are often called lead-lag ballasts because the current in one lamp leads the mains phase and the current in the other lamp lags the mains phase. 
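The current-limiting role of an inductive ballast can be illustrated with a simple phasor estimate. The sketch below is not from the article: it treats the lamp arc as approximately resistive, so the supply, lamp and inductor voltages satisfy V_supply² ≈ V_lamp² + V_L², and it solves for the series inductance needed to hold a chosen lamp current. The supply voltage, lamp arc voltage, target current and mains frequency are assumed example values, and real arc behaviour is more complicated.

```python
import math

V_supply = 230.0   # RMS mains voltage, V (assumed)
V_lamp   = 100.0   # RMS arc voltage across the tube, V (assumed)
I_lamp   = 0.40    # desired RMS lamp current, A (assumed)
f_mains  = 50.0    # mains frequency, Hz (assumed)

# Voltage that must appear across the (ideal) series inductor
V_L = math.sqrt(V_supply**2 - V_lamp**2)

X_L = V_L / I_lamp                 # required inductive reactance, ohms
L = X_L / (2 * math.pi * f_mains)  # required inductance, henries

print(f"V_L = {V_L:.0f} V, X_L = {X_L:.0f} ohm, L = {L:.2f} H")
```

Inductances of this order are typical of line-frequency chokes for small fluorescent tubes, which is also part of why such ballasts are heavy and why a power-factor correction capacitor is often added, as noted above.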
In most 220-240V ballasts, the capacitor isn't incorporated inside the ballast like in North American ballasts, but is wired in parallel or in series with the ballast. In Europe, and most 220-240 V territories, the line voltage is sufficient to start lamps over 30W with a series inductor. In North America and Japan however, the line voltage (120 V or 100 V respectively) may not be sufficient to start lamps over 30 W with a series inductor, so an autotransformer winding is included in the ballast to step up the voltage. The autotransformer is designed with enough leakage inductance (short-circuit inductance) so that the current is appropriately limited. Because of the large size inductors and capacitors that must be used as well as the heavy iron core of the inductor, reactive ballasts operated at line frequency tend to be large and heavy. They commonly also produce acoustic noise (line-frequency hum). Prior to 1980 in the United States, polychlorinated biphenyl (PCB)-based oils were used as an insulating oil in many ballasts to provide cooling and electrical isolation (see Transformer oil). Electronic ballasts An electronic ballast uses solid state electronic circuitry to provide the proper starting and operating electrical conditions to power discharge lamps. An electronic ballast can be smaller and lighter than a comparably rated magnetic one. An electronic ballast is usually quieter than a magnetic one, which produces a line-frequency hum by vibration of the core laminations. Electronic ballasts are often based on switched-mode power supply (SMPS) topology, first rectifying the input power and then chopping it at a high frequency. Advanced electronic ballasts may allow dimming via pulse-width modulation or via changing the frequency to a higher value. Ballasts incorporating a microcontroller (digital ballasts) may offer remote control and monitoring via networks such as LonWorks, Digital Addressable Lighting Interface (DALI), DMX512, Digital Serial Interface (DSI) or simple analog control using a 0-10 V DC brightness control signal. Systems with remote control of light level via a wireless mesh network have been introduced. Electronic ballasts usually supply power to the lamp at a frequency of or higher, rather than the mains frequency of ; this substantially eliminates the stroboscopic effect of flicker, a product of the line frequency associated with fluorescent lighting (see photosensitive epilepsy). The high output frequency of an electronic ballast refreshes the phosphors in a fluorescent lamp so rapidly that there is no perceptible flicker. The flicker index, used for measuring perceptible light modulation, has a range from 0.00 to 1.00, with 0 indicating the lowest possibility of flickering and 1 indicating the highest. Lamps operated on magnetic ballasts have a flicker index between 0.04 and 0.07 while digital ballasts have a flicker index of below 0.01. Because more gas remains ionized in the arc stream, the lamp operates at about 9% higher efficacy above approximately 10 kHz. Lamp efficiency increases sharply at about 10 kHz and continues to improve until approximately 20 kHz. Electronic ballast retrofits to existing street lights had been tested in some Canadian provinces circa 2012; since then LED retrofits have become more common. With the higher efficiency of the ballast itself and the higher lamp efficacy at higher frequency, electronic ballasts offer higher system efficacy for low pressure lamps like the fluorescent lamp. 
For HID lamps, there is no improvement of the lamp efficacy in using higher frequency. More than this: HID lamps like the metal halide lamps and high pressure sodium lamps have reduced reliability when operated at high frequencies in the range of , due to acoustic resonance; for these lamps a square wave low frequency current drive is mostly used with frequency in the range of , with the same advantage of lower light depreciation. Most newer generation electronic ballasts can operate both high pressure sodium (HPS) lamps as well as metal-halide lamps. The ballast initially works as a starter for the arc by its internal ignitor, supplying a high-voltage impulse and, later, it works as a limiter/regulator of the electric flow inside the circuit. Electronic ballasts also run much cooler and are lighter than their magnetic counterparts. Fluorescent lamp ballast topologies Preheating This technique uses a combination filament–cathode at each end of the lamp in conjunction with a mechanical or automatic (bi-metallic or electronic) switch that initially connect the filaments in series with the ballast to preheat them. When filaments are disconnected, an inductive pulse from the ballast starts the lamp. This system is described as "Preheat" in North America and "Switch Start" in the UK, and has no specific description in the rest of the world. This system is common in 200–240 V countries (and for 100–120 V lamps up to about 30 watts). Although an inductive pulse makes it more likely that the lamp will start when the starter switch opens, it is not actually necessary. The ballast in such systems can equally be a resistor. A number of fluorescent lamp fittings used a filament lamp as the ballast in the late 1950s through to the 1960s. Special lamps were manufactured that were rated at 170 volts and 120 watts. The lamp had a thermal starter built into the 4 pin base. The power requirements were much larger than using an inductive ballast (though the consumed current was the same), but the warmer light from the lamp type of ballast was often preferred by users particularly in a domestic environment. Resistive ballasts were the only type that was usable when the only supply available to power the fluorescent lamp was DC. Such fittings used the thermal type of starter (mostly because they had gone out of use long before the glow starter was invented), but it was possible to include a choke in the circuit whose sole purpose was to provide a pulse on opening of the starter switch to improve starting. DC fittings were complicated by the need to reverse the polarity of the supply to the tube each time it started. Failure to do so vastly shortened the life of the tube. Instant start An instant start ballast does not preheat the electrodes, instead using a relatively high voltage (~600 V) to initiate the discharge arc. It is the most energy efficient type, but yields the fewest lamp-start cycles, as material is blasted from the surface of the cold electrodes each time the lamp is turned on. Instant-start ballasts are best suited to applications with long duty cycles, where the lamps are not frequently turned on and off. Although these were mostly used in countries with 100-120 volt mains supplies (for lamps of 40 W or above), they were briefly popular in other countries because the lamp started without the flicker of switch start systems. The popularity was short lived because of the short lamp life. 
Rapid start A rapid start ballast always heats the lamp electrodes using the same heating power, before, during and after lamp starting, by using a heating transformer coil. It provides longer lamp life and more cycle life than instant start, but have very high ballast losses compared to other types of ballasts, as the electrodes in each end of the lamp continue to consume heating power as the lamp operates. Again, although popular in the United States and Canada for lamps of 40 W and above, rapid start is sometimes used in other countries particularly where the flicker of switch start systems is undesirable. Some American electronic fluorescent lamp ballasts which are labeled "Rapid start" are otherwise completely different than the classical American rapid start ballast, because they use resonance to start the lamp and heat the cathodes, and don't supply all the time the same heating power regardless the lamp conditions. Dimmable ballast A dimmable ballast is very similar to a rapid start ballast, except that the autotransformer is connected to a dimmer. A quadrac type light dimmer can be used with a dimming ballast, which maintains the heating current while allowing lamp current to be controlled. A resistor of about 10 kΩ is required to be connected in parallel with the fluorescent tube to allow reliable firing of the quadrac at low light levels. There are dimmable electronic ballast that uses 1-10V or DALI interfaces to dim the lamp. Emergency An electronic ballast with an integrated rechargeable battery is designed to provide emergency egress lighting in the event of a power failure. It can be incorporated into an existing fluorescent light fixture or mounted remotely outside of it. When power is lost, the ballast will illuminate one or more lamps in the fixture at a reduced output for a minimum of 90 minutes (as required by code). These can be used as an alternative to egress lighting powered by a back-up electrical generator. Hybrid A hybrid ballast has a magnetic core-and-coil transformer and an electronic switch for the electrode-heating circuit. Like a magnetic ballast, a hybrid unit operates at line power frequency—50 Hz in Europe, for example. These types of ballasts, which are also referred to as cathode-disconnect ballasts, disconnect the electrode-heating circuit after they start the lamps. ANSI ballast factor For a lighting ballast, the ANSI ballast factor is used in North America to compare the light output (in lumens) of a lamp operated on a ballast compared to the lamp operating on an ANSI reference ballast. Reference ballast operates the lamp at its ANSI specified nominal power rating. The ballast factor of practical ballasts must be considered in lighting design; a low ballast factor may save energy, but will produce less light and short the lamp life. With fluorescent lamps, ballast factor can vary from the reference value of 1.0. Ballast triode Early tube-based color TV sets used a ballast triode, such as the PD500, as a parallel shunt stabilizer for the cathode-ray tube (CRT) acceleration voltage, to keep the CRT's deflection factor constant. See also Iron-hydrogen resistor Sodium lamp References External links Gas discharge lamps Analog circuits Resistive components Electrical power control Electric power systems components
Electrical ballast
[ "Physics", "Engineering" ]
3,516
[ "Physical quantities", "Analog circuits", "Resistive components", "Electronic engineering", "Electrical resistance and conductance" ]
2,634,856
https://en.wikipedia.org/wiki/Enzyme%20assay
Enzyme assays are laboratory methods for measuring enzymatic activity. They are vital for the study of enzyme kinetics and enzyme inhibition. Enzyme units The quantity or concentration of an enzyme can be expressed in molar amounts, as with any other chemical, or in terms of activity in enzyme units. Enzyme activity Enzyme activity is a measure of the quantity of active enzyme present and is thus dependent on various physical conditions, which should be specified. It is calculated using the following formula: where = Enzyme activity = Moles of substrate converted per unit time = Rate of the reaction = Reaction volume The SI unit is the katal, 1 katal = 1 mol s−1 (mole per second), but this is an excessively large unit. A more practical and commonly used value is enzyme unit (U) = 1 μmol min−1 (micromole per minute). 1 U corresponds to 16.67 nanokatals. Enzyme activity as given in katal generally refers to that of the assumed natural target substrate of the enzyme. Enzyme activity can also be given as that of certain standardized substrates, such as gelatin, then measured in gelatin digesting units (GDU), or milk proteins, then measured in milk clotting units (MCU). The units GDU and MCU are based on how fast one gram of the enzyme will digest gelatin or milk proteins, respectively. 1 GDU approximately equals 1.5 MCU. An increased amount of substrate will increase the rate of reaction with enzymes, however once past a certain point, the rate of reaction will level out because the amount of active sites available has stayed constant. Specific activity The specific activity of an enzyme is another common unit. This is the activity of an enzyme per milligram of total protein (expressed in μmol min−1 mg−1). Specific activity gives a measurement of enzyme purity in the mixture. It is the micro moles of product formed by an enzyme in a given amount of time (minutes) under given conditions per milligram of total proteins. Specific activity is equal to the rate of reaction multiplied by the volume of reaction divided by the mass of total protein. The SI unit is katal/kg, but a more practical unit is μmol/(mg*min). Specific activity is a measure of enzyme processivity (the capability of enzyme to be processed), at a specific (usually saturating) substrate concentration, and is usually constant for a pure enzyme. An active site titration process can be done for the elimination of errors arising from differences in cultivation batches and/or misfolded enzyme and similar issues. This is a measure of the amount of active enzyme, calculated by e.g. titrating the amount of active sites present by employing an irreversible inhibitor. The specific activity should then be expressed as μmol min−1 mg−1 active enzyme. If the molecular weight of the enzyme is known, the turnover number, or μmol product per second per μmol of active enzyme, can be calculated from the specific activity. The turnover number can be visualized as the number of times each enzyme molecule carries out its catalytic cycle per second. Related terminology The rate of a reaction is the concentration of substrate disappearing (or product produced) per unit time (mol L−1 s−1). The % purity is 100% × (specific activity of enzyme sample / specific activity of pure enzyme). The impure sample has lower specific activity because some of the mass is not actually enzyme. If the specific activity of 100% pure enzyme is known, then an impure sample will have a lower specific activity, allowing purity to be calculated and then getting a clear result. 
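The unit definitions above translate directly into a short calculation. The sketch below is not from the article: the assay numbers (measured rate, reaction volume, protein mass, and the specific activity of the pure enzyme) are assumed example values used only to show how enzyme units, katals, specific activity and percent purity are obtained.

```python
# Assumed example assay data
rate_uM_per_min   = 50.0     # substrate converted, micromol per litre per minute
volume_L          = 0.001    # reaction volume, litres (1 mL)
total_protein_mg  = 0.25     # total protein in the assay, mg
pure_specific_act = 40.0     # specific activity of the pure enzyme, U/mg (assumed)

# Enzyme activity: micromoles of substrate converted per minute (= enzyme units, U)
activity_U = rate_uM_per_min * volume_L          # 1 U = 1 umol/min
activity_nkat = activity_U * 16.67               # 1 U = 16.67 nanokatal

# Specific activity and purity of the preparation
specific_activity = activity_U / total_protein_mg      # U per mg total protein
purity_percent = 100.0 * specific_activity / pure_specific_act

print(f"activity = {activity_U:.3f} U ({activity_nkat:.2f} nkat)")
print(f"specific activity = {specific_activity:.2f} U/mg, purity ~ {purity_percent:.1f}%")
```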
Types of assays All enzyme assays measure either the consumption of substrate or production of product over time. A large number of different methods of measuring the concentrations of substrates and products exist and many enzymes can be assayed in several different ways. Biochemists usually study enzyme-catalysed reactions using four types of experiments: Initial rate experiments. When an enzyme is mixed with a large excess of the substrate, the enzyme-substrate intermediate builds up in a fast initial transient. Then the reaction achieves a steady-state kinetics in which enzyme substrate intermediates remains approximately constant over time and the reaction rate changes relatively slowly. Rates are measured for a short period after the attainment of the quasi-steady state, typically by monitoring the accumulation of product with time. Because the measurements are carried out for a very short period and because of the large excess of substrate, the approximation that the amount of free substrate is approximately equal to the amount of the initial substrate can be made. The initial rate experiment is the simplest to perform and analyze, being relatively free from complications such as back-reaction and enzyme degradation. It is therefore by far the most commonly used type of experiment in enzyme kinetics. Progress curve experiments. In these experiments, the kinetic parameters are determined from expressions for the species concentrations as a function of time. The concentration of the substrate or product is recorded in time after the initial fast transient and for a sufficiently long period to allow the reaction to approach equilibrium. Progress curve experiments were widely used in the early period of enzyme kinetics, but are less common now. Transient kinetics experiments. In these experiments, reaction behaviour is tracked during the initial fast transient as the intermediate reaches the steady-state kinetics period. These experiments are more difficult to perform than either of the above two classes because they require specialist techniques (such as flash photolysis of caged compounds) or rapid mixing (such as stopped-flow, quenched flow or continuous flow). Relaxation experiments. In these experiments, an equilibrium mixture of enzyme, substrate and product is perturbed, for instance by a temperature, pressure or pH jump, and the return to equilibrium is monitored. The analysis of these experiments requires consideration of the fully reversible reaction. Moreover, relaxation experiments are relatively insensitive to mechanistic details and are thus not typically used for mechanism identification, although they can be under appropriate conditions. Enzyme assays can be split into two groups according to their sampling method: continuous assays, where the assay gives a continuous reading of activity, and discontinuous assays, where samples are taken, the reaction stopped and then the concentration of substrates/products determined. Continuous assays Continuous assays are most convenient, with one assay giving the rate of reaction with no further work necessary. There are many different types of continuous assays. Spectrophotometric In spectrophotometric assays, you follow the course of the reaction by measuring a change in how much light the assay solution absorbs. If this light is in the visible region you can actually see a change in the color of the assay, and these are called colorimetric assays. 
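Relating to the initial rate experiments described above, the rate is usually taken from the early, quasi-linear part of a progress curve. A minimal sketch follows; the time and product values are made-up example data, not measurements from the article.

import numpy as np

t = np.array([0, 10, 20, 30, 40, 50])                 # seconds
product = np.array([0.0, 1.9, 4.1, 5.8, 8.2, 9.9])    # micromolar, early part of the curve

slope, intercept = np.polyfit(t, product, 1)          # least-squares straight line
print(f"initial rate ≈ {slope:.3f} uM/s")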
The MTT assay, a redox assay using a tetrazolium dye as substrate is an example of a colorimetric assay. UV light is often used, since the common coenzymes NADH and NADPH absorb UV light in their reduced forms, but do not in their oxidized forms. An oxidoreductase using NADH as a substrate could therefore be assayed by following the decrease in UV absorbance at a wavelength of 340 nm as it consumes the coenzyme. Direct versus coupled assays Even when the enzyme reaction does not result in a change in the absorbance of light, it can still be possible to use a spectrophotometric assay for the enzyme by using a coupled assay. Here, the product of one reaction is used as the substrate of another, easily detectable reaction. For example, figure 1 shows the coupled assay for the enzyme hexokinase, which can be assayed by coupling its production of glucose-6-phosphate to NADPH production, using glucose-6-phosphate dehydrogenase. Fluorometric Fluorescence is when a molecule emits light of one wavelength after absorbing light of a different wavelength. Fluorometric assays use a difference in the fluorescence of substrate from product to measure the enzyme reaction. These assays are in general much more sensitive than spectrophotometric assays, but can suffer from interference caused by impurities and the instability of many fluorescent compounds when exposed to light. An example of these assays is again the use of the nucleotide coenzymes NADH and NADPH. Here, the reduced forms are fluorescent and the oxidised forms non-fluorescent. Oxidation reactions can therefore be followed by a decrease in fluorescence and reduction reactions by an increase. Synthetic substrates that release a fluorescent dye in an enzyme-catalyzed reaction are also available, such as 4-methylumbelliferyl-β-D-galactoside for assaying β-galactosidase or 4-methylumbelliferyl-butyrate for assaying Candida rugosa lipase. Calorimetric Calorimetry is the measurement of the heat released or absorbed by chemical reactions. These assays are very general, since many reactions involve some change in heat and with use of a microcalorimeter, not much enzyme or substrate is required. These assays can be used to measure reactions that are impossible to assay in any other way. Chemiluminescent Chemiluminescence is the emission of light by a chemical reaction. Some enzyme reactions produce light and this can be measured to detect product formation. These types of assay can be extremely sensitive, since the light produced can be captured by photographic film over days or weeks, but can be hard to quantify, because not all the light released by a reaction will be detected. The detection of horseradish peroxidase by enzymatic chemiluminescence (ECL) is a common method of detecting antibodies in western blotting. Another example is the enzyme luciferase, this is found in fireflies and naturally produces light from its substrate luciferin. Light scattering Static light scattering measures the product of weight-averaged molar mass and concentration of macromolecules in solution. Given a fixed total concentration of one or more species over the measurement time, the scattering signal is a direct measure of the weight-averaged molar mass of the solution, which will vary as complexes form or dissociate. Hence the measurement quantifies the stoichiometry of the complexes as well as kinetics. Light scattering assays of protein kinetics is a very general technique that does not require an enzyme. 
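For the spectrophotometric NADH example above, the absorbance change is converted into a rate with the Beer–Lambert law (A = ε·c·l). The sketch below assumes the widely used molar absorption coefficient for NADH at 340 nm (about 6,220 M−1 cm−1) and a 1 cm path length; the absorbance slope and cuvette volume are hypothetical.

EPSILON_NADH_340 = 6220.0   # M^-1 cm^-1, commonly used value for NADH at 340 nm
PATH_LENGTH_CM = 1.0

def activity_from_A340_slope(dA_per_min, volume_L):
    # concentration change per minute from Beer-Lambert, then scaled to umol/min (U)
    dC_per_min = dA_per_min / (EPSILON_NADH_340 * PATH_LENGTH_CM)   # mol/L per minute
    return dC_per_min * volume_L * 1e6

print(activity_from_A340_slope(0.1, 0.001))   # a 0.1 A/min decrease in 1 mL ≈ 0.016 U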
Microscale thermophoresis Microscale thermophoresis (MST) measures the size, charge and hydration entropy of molecules/substrates at equilibrium. The thermophoretic movement of a fluorescently labeled substrate changes significantly as it is modified by an enzyme. This enzymatic activity can be measured with high time resolution in real time. The material consumption of the all optical MST method is very low, only 5 μl sample volume and 10nM enzyme concentration are needed to measure the enzymatic rate constants for activity and inhibition. MST allows analysts to measure the modification of two different substrates at once (multiplexing) if both substrates are labeled with different fluorophores. Thus substrate competition experiments can be performed. Discontinuous assays Discontinuous assays are when samples are taken from an enzyme reaction at intervals and the amount of product production or substrate consumption is measured in these samples. Radiometric Radiometric assays measure the incorporation of radioactivity into substrates or its release from substrates. The radioactive isotopes most frequently used in these assays are 14C, 32P, 35S and 125I. Since radioactive isotopes can allow the specific labelling of a single atom of a substrate, these assays are both extremely sensitive and specific. They are frequently used in biochemistry and are often the only way of measuring a specific reaction in crude extracts (the complex mixtures of enzymes produced when you lyse cells). Radioactivity is usually measured in these procedures using a scintillation counter. Chromatographic Chromatographic assays measure product formation by separating the reaction mixture into its components by chromatography. This is usually done by high-performance liquid chromatography (HPLC), but can also use the simpler technique of thin layer chromatography. Although this approach can need a lot of material, its sensitivity can be increased by labelling the substrates/products with a radioactive or fluorescent tag. Assay sensitivity has also been increased by switching protocols to improved chromatographic instruments (e.g. ultra-high pressure liquid chromatography) that operate at pump pressure a few-fold higher than HPLC instruments (see High-performance liquid chromatography#Pump pressure). Factors affecting assays Several factors effect the assay outcome and a recent review summarizes the various parameters that needs to be monitored to keep an assay up and running. Salt Concentration Most enzymes cannot tolerate extremely high salt concentrations. The ions interfere with the weak ionic bonds of proteins. Typical enzymes are active in salt concentrations of 1-500 mM. As usual there are exceptions such as the halophilic algae and halophilic bacteria. Effects of Temperature All enzymes work within a range of temperature specific to the organism. Increases in temperature generally lead to increases in reaction rates. There is a limit to the increase because higher temperatures lead to a sharp decrease in reaction rates. This is due to the denaturating (alteration) of protein structure resulting from the breakdown of the weak ionic and hydrogen bonding that stabilize the three-dimensional structure of the enzyme active site. The "optimum" temperature for human enzymes is usually between 35 and 40 °C. The average temperature for humans is 37 °C. Human enzymes start to denature quickly at temperatures above 40 °C. Enzymes from thermophilic archaea found in the hot springs are stable up to 100 °C. 
However, the idea of an "optimum" rate of an enzyme reaction is misleading, as the rate observed at any temperature is the product of two rates, the reaction rate and the denaturation rate. If you were to use an assay measuring activity for one second, it would give high activity at high temperatures, however if you were to use an assay measuring product formation over an hour, it would give you low activity at these temperatures. Effects of pH Most enzymes are sensitive to pH and have specific ranges of activity. All have an optimum pH. The pH can stop enzyme activity by denaturating (altering) the three-dimensional shape of the enzyme by breaking ionic, and hydrogen bonds. Most enzymes function between a pH of 6 and 8; however pepsin in the stomach works best at a pH of 2 and trypsin at a pH of 8. Enzyme Saturation Increasing the substrate concentration increases the rate of reaction (enzyme activity). However, enzyme saturation limits reaction rates. An enzyme is saturated when the active sites of all the molecules are occupied most of the time. At the saturation point, the reaction will not speed up, no matter how much additional substrate is added. The graph of the reaction rate will plateau. Level of crowding Large amounts of macromolecules in a solution will alter the rates and equilibrium constants of enzyme reactions, through an effect called macromolecular crowding. List of enzyme assays MTT assay Fluorescein diacetate hydrolysis para-Nitrophenylphosphate See also Restriction enzyme DNase footprinting assay Enzyme kinetics References External links Protein methods assay Chemical pathology Clinical pathology Pathology
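The point above about the apparent temperature "optimum" being the product of two competing rates can be made concrete with a toy model. This is my own illustration with arbitrary Arrhenius-like parameters, not data from the article: catalysis speeds up with temperature while the active enzyme decays, so the temperature that maximizes the measured product depends on how long the assay runs.

import numpy as np

def apparent_activity(T_celsius, assay_seconds):
    T = T_celsius + 273.15
    k_cat = np.exp(25.0 - 6000.0 / T)       # toy catalytic rate, rises with temperature
    k_denat = np.exp(180.0 - 60000.0 / T)   # toy first-order denaturation rate (1/s)
    # product formed while the active enzyme decays exponentially during the assay
    return k_cat * (1.0 - np.exp(-k_denat * assay_seconds)) / k_denat

temps = np.arange(20, 71, 5)
for duration in (1, 3600):                  # a 1 s assay versus a 1 h assay
    best = temps[np.argmax([apparent_activity(T, duration) for T in temps])]
    print(f"{duration:>5} s assay: apparent optimum ≈ {best} °C")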
Enzyme assay
[ "Chemistry", "Biology" ]
3,211
[ "Biochemistry methods", "Pathology", "Protein methods", "Protein biochemistry", "Biochemistry", "Chemical pathology" ]
2,635,006
https://en.wikipedia.org/wiki/Perfect%20conductor
In electrostatics, a perfect conductor is an idealized model for real conducting materials. The defining property of a perfect conductor is that the static electric field and the charge density both vanish in its interior. If the conductor has excess charge, it accumulates as an infinitesimally thin layer of surface charge. An external electric field is screened from the interior of the material by rearrangement of the surface charge. Alternatively, a perfect conductor is an idealized material exhibiting infinite electrical conductivity or, equivalently, zero resistivity (cf. perfect dielectric). While perfect electrical conductors do not exist in nature, the concept is a useful model when electrical resistance is negligible compared to other effects. One example is ideal magnetohydrodynamics, the study of perfectly conductive fluids. Another example is electrical circuit diagrams, which carry the implicit assumption that the wires connecting the components have no resistance. Yet another example is in computational electromagnetics, where perfect conductors can be simulated faster, since the parts of the equations that take finite conductivity into account can be neglected. Properties of perfect conductors Perfect conductors: have exactly zero electrical resistance – a steady current within a perfect conductor will flow without losing energy to resistance. Resistance is what causes heating in conductors, so a perfect conductor will generate no heat. Since energy is not being lost to heat, the current will not dissipate; it will flow indefinitely within the perfect conductor until no potential difference remains. require a constant magnetic flux – the magnetic flux within the perfect conductor must be constant with time. Any external field applied to a perfect conductor will have no effect on its internal field configuration. Distinction between a perfect conductor and a superconductor Superconductors, in addition to having no electrical resistance, exhibit quantum effects such as the Meissner effect and quantization of magnetic flux. In perfect conductors, the interior magnetic field must remain fixed but can have a zero or nonzero value. In real superconductors, all magnetic flux is expelled during the phase transition to superconductivity (the Meissner effect), and the magnetic field is always zero within the bulk of the superconductor. See also Persistent current References Superconductivity Computational electromagnetics
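One quantitative way to see the screening behaviour described above is through the skin depth of a good conductor, δ = sqrt(2 / (μ0·σ·ω)), which shrinks toward zero as the conductivity σ grows; in the perfect-conductor limit, time-varying fields are excluded from the interior entirely. The sketch below is my own illustration; only the first conductivity is a real (copper-like) value, the others are hypothetical scalings.

import math

MU0 = 4e-7 * math.pi                    # vacuum permeability, H/m

def skin_depth_m(sigma_S_per_m, freq_hz):
    omega = 2 * math.pi * freq_hz
    return math.sqrt(2.0 / (MU0 * sigma_S_per_m * omega))

for sigma in (5.8e7, 5.8e9, 5.8e12):    # copper, then hypothetically better conductors
    d = skin_depth_m(sigma, 1e6) * 1e6  # at 1 MHz, in micrometres
    print(f"sigma = {sigma:.1e} S/m  ->  skin depth ≈ {d:.2f} um")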
Perfect conductor
[ "Physics", "Materials_science", "Engineering" ]
455
[ "Materials science stubs", "Computational electromagnetics", "Computational physics stubs", "Physical quantities", "Superconductivity", "Materials science", "Computational physics", "Condensed matter physics", "Electromagnetism stubs", "Electrical resistance and conductance" ]
2,635,128
https://en.wikipedia.org/wiki/Molecular%20tweezers
Molecular tweezers, and molecular clips, are host molecules with open cavities capable of binding guest molecules. The open cavity of the molecular tweezers may bind guests using non-covalent bonding, which includes hydrogen bonding, metal coordination, hydrophobic forces, van der Waals forces, π–π interactions, and/or electrostatic effects. These complexes are a subset of macrocyclic molecular receptors and their structure is that the two "arms" that bind the guest molecule between them are only connected at one end leading to a certain flexibility of these receptor molecules (induced fit model). History The term "molecular tweezers" was first used by Whitlock. The class of hosts was developed and popularized by Zimmerman in the mid-1980s to early 1990s and later by Klärner. Examples Some molecular tweezers bind aromatic guests. These molecular tweezers consist of a pair of anthracene arms held at a distance that allows aromatic guests to gain π–π interactions from both (see Figure). Other molecular tweezers feature a pair of tethered porphyrins. Yet another type of molecular tweezers binds fullerenes. These "buckycatchers" are composed of two corannulene pincers that complement the surface of the convex fullerene guest (Figure 2). An association constant (Ka) of 8,600 M−1 was calculated using 1H NMR spectroscopy. Stoermer and co-workers described clefts capable of capturing cyclohexane or chloroform molecules. Intriguingly, pi interactions played key roles in guest capture as well as cleft formation rate. Water-soluble phosphate-substituted molecular tweezers made of alternating phenyl and norbornenyl substituents bind to positively charged aliphatic side chains of basic amino acids, such as lysine and arginine (Figure 3). Similar compounds called "molecular clips", whose side walls are flat rather than convex, prefer to enclose flat pyridinium rings (for example the nicotinamide ring of NAD(P)+) between their plane naphthalene sidewalls (Figure 4). These mutually exclusive binding modes make these compounds valuable tools for probing critical biological interactions of basic amino acid side chains in peptides and proteins as well as of NAD(P)+ and similar cofactors. For example, both types of compounds inhibit the oxidation reactions of ethanol by alcohol dehydrogenase or of glucose-6-phosphate by glucose-6-phosphate dehydrogenase, respectively. The molecular tweezers, but not the clips, efficiently inhibit the formation of toxic oligomers and aggregates by amyloidogenic proteins associated with different diseases. Examples include the proteins involved in Alzheimer's disease – amyloid β-protein (Aβ) and tau; α-synuclein, which is thought to cause Parkinson's disease and other synucleinopathies and is involved in spinal-cord injury; mutant huntingtin, which causes Huntington's disease; islet amyloid polypeptide (amylin), which kills pancreatic β-cells in type-2 diabetes; transthyretin (TTR), which causes familial amyloid polyneuropathy, familial amyloid cardiomyopathy, and senile systemic amyloidosis; aggregation-prone mutants of the tumor-suppressor protein p53; and semen proteins whose aggregation enhances HIV infection. Importantly, the molecular tweezers have been found to be effective and safe not only in the test tube but also in animal models of different diseases, suggesting that they may be developed as drugs against diseases caused by abnormal protein aggregation, all of which currently have no cure. 
They were also shown to destroy the membranes of enveloped viruses, such as HIV, herpes, and hepatitis C, which makes them good candidates for development of microbicides. The above examples show the potential reactivity and specificity of these molecules. The binding cavity between the side arms of the tweezer can evolve to bind to an appropriate guest with high specificity, depending on the configuration of the tweezer. That makes this overall class of macromolecule truly synthetic molecular receptors with important application to biology and medicine. See also Clathrate compound Host–guest chemistry References External links Journal of Chemical Education Featured Molecules December 2004: Nanoscale Molecular Tweezers and article Crystalmaker molecular tweezers Supramolecular chemistry Molecular machines
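The association constant mentioned earlier for the corannulene "buckycatcher" (Ka ≈ 8,600 M−1) can be turned into a bound fraction for a simple 1:1 host–guest equilibrium, H + G ⇌ HG with Ka = [HG]/([H][G]). A short sketch follows; the 100 μM concentrations are an arbitrary illustrative choice, not values from the article.

import math

def bound_complex(h0, g0, ka):
    # equilibrium [HG] from the quadratic mass-balance solution; concentrations in mol/L
    b = h0 + g0 + 1.0 / ka
    return (b - math.sqrt(b * b - 4.0 * h0 * g0)) / 2.0

ka = 8600.0         # M^-1, the reported buckycatcher-fullerene association constant
h0 = g0 = 1e-4      # hypothetical 100 uM host and guest
print(f"fraction of guest bound ≈ {bound_complex(h0, g0, ka) / g0:.0%}")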
Molecular tweezers
[ "Physics", "Chemistry", "Materials_science", "Technology" ]
930
[ "Machines", "Molecular machines", "Physical systems", "nan", "Nanotechnology", "Supramolecular chemistry" ]
22,412,019
https://en.wikipedia.org/wiki/Oligosaccharide%20nomenclature
Oligosaccharides and polysaccharides are an important class of polymeric carbohydrates found in virtually all living entities. Their structural features make their nomenclature challenging and their roles in living systems make their nomenclature important. Oligosaccharides Oligosaccharides are carbohydrates that are composed of several monosaccharide residues joined through glycosidic linkage, which can be hydrolyzed by enzymes or acid to give the constituent monosaccharide units. While a strict definition of an oligosaccharide is not established, it is generally agreed that a carbohydrate consisting of two to ten monosaccharide residues with a defined structure is an oligosaccharide. Some oligosaccharides, for example maltose, sucrose, and lactose, were trivially named before their chemical constitution was determined, and these names are still used today. Trivial names, however, are not useful for most other oligosaccharides and, as such, systematic rules for the nomenclature of carbohydrates have been developed. To fully understand oligosaccharide and polysaccharide nomenclature, one must understand how monosaccharides are named. An oligosaccharide has both a reducing and a non-reducing end. The reducing end of an oligosaccharide is the monosaccharide residue with hemiacetal functionality, thereby capable of reducing the Tollens’ reagent, while the non-reducing end is the monosaccharide residue in acetal form, thus incapable of reducing the Tollens’ reagent. The reducing and non-reducing ends of an oligosaccharide are conventionally drawn with the reducing-end monosaccharide residue furthest to the right and the non-reducing (or terminal) end furthest to the left. Naming of oligosaccharides proceeds from left to right (from the non-reducing end to the reducing end) as glycosyl [glycosyl]n glycoses or glycosyl [glycosyl]n glycosides, depending on whether or not the reducing end is a free hemiacetal group. In parentheses, between the names of the monosaccharide residues, the number of the anomeric carbon atom, an arrow symbol, and the number of the carbon atom bearing the connecting oxygen of the next monosaccharide unit are listed. Appropriate symbols are used to indicate the stereochemistry of the glycosidic bonds (α or β), the configuration of the monosaccharide residue (D orL), and the substitutions at oxygen atoms (O). Maltose and a derivative of sucrose illustrate these concepts: Maltose: α-D-Glucopyranosyl-(1→4)-β-D-glucopyranose Methyl 2,3,4-tri-O-benzyl-6-deoxy-6-fluoro-α-D-galactopyranosyl-(1→4)-2,3,6-tri-O-acetyl-β-D-glucopyranoside In the case of branched oligosaccharides, meaning that the structure contains at least one monosaccharide residue linked to more than two other monosaccharide residues, terms designating the branches should be listed in square brackets, with the longest linear chain (the parent chain) written without square brackets. The following example will help illustrate this concept: Allyl α-L-fucopyranosyl-(1→3)-[α-D-galactopyranosyl-(1→4)]-α-D-glucopyranosyl-(1→3)-α-D-galactopyranoside These systematic names are quite useful in that they provide information about the structure of the oligosaccharide. They do require a lot of space, however, so abbreviated forms are used when possible. In these abbreviated forms, the names of the monosaccharide units are shortened to their corresponding three-letter abbreviations, followed by p for pyranose or f for furanose ring structures, with the abbreviated aglyconic alcohol placed at the end of the name. 
Using this system, the previous example would have the abbreviated name α-L-Fucp-(1→3)-[α-D-Galp-(1→4)]-α-D-Glcp-(1→3)-α-D-GalpOAll. Polysaccharides Polysaccharides are considered to be polymers of monosaccharides containing ten or more monosaccharide residues. Polysaccharides have been given trivial names that reflect their origin. Two common examples are cellulose, a main component of the cell wall in plants, and starch, a name derived from the Anglo-Saxon stercan, meaning to stiffen. To name a polysaccharide composed of a single type of monosaccharide, that is a homopolysaccharide, the ending "-ose" of the monosaccharide is replaced with "-an". For example, a glucose polymer is named glucan, a mannose polymer is named mannan, and a galactose polymer is named galactan. When the glycosidic linkages and configurations of the monosaccharides are known, they may be included as a prefix to the name, with the notation for glycosidic linkages preceding the symbols designating the configuration. The following example will help illustrate this concept: (1→4)-β-D-Glucan A heteropolysaccharide is a polymer containing more than one kind of monosaccharide residue. The parent chain contains only one type of monosaccharide and should be listed last with the ending "-an", and the other types of monosaccharides listed in alphabetical order as "glyco-" prefixes. When there is no parent chain, all different monosaccharide residues are to be listed alphabetically as "glyco-" prefixes and the name should end with "-glycan". The following example will help illustrate this concept: ((1→2)-α-D-galacto)-(1→4)-β-D-Glucan See also Oligosaccharide Carbohydrate Conformation Monosaccharide Monosaccharide nomenclature References Carbohydrate chemistry Chemical nomenclature Oligosaccharides
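The abbreviation rules described above are mechanical enough to automate for simple cases. Below is a hypothetical sketch of my own, limited to an unbranched chain (the square-bracket branch notation is not handled), that assembles an abbreviated name from residue records ordered from the non-reducing to the reducing end.

def abbreviated_name(residues, aglycone=""):
    # residues: list of dicts with keys anomer ('a'/'b'), config ('D'/'L'),
    # code (three-letter abbreviation), ring ('p'/'f'), and, for all but the
    # last residue, linkage = (anomeric carbon, attachment carbon)
    greek = {"a": "α", "b": "β"}
    parts = []
    for i, r in enumerate(residues):
        unit = f"{greek[r['anomer']]}-{r['config']}-{r['code']}{r['ring']}"
        if i < len(residues) - 1:
            frm, to = r["linkage"]
            unit += f"-({frm}→{to})-"
        parts.append(unit)
    return "".join(parts) + aglycone

print(abbreviated_name([
    {"anomer": "a", "config": "D", "code": "Glc", "ring": "p", "linkage": (1, 4)},
    {"anomer": "b", "config": "D", "code": "Glc", "ring": "p"},
]))
# -> α-D-Glcp-(1→4)-β-D-Glcp, the abbreviated form of the maltose example given earlier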
Oligosaccharide nomenclature
[ "Chemistry" ]
1,452
[ "Carbohydrates", "Oligosaccharides", "Carbohydrate chemistry", "nan", "Chemical synthesis", "Glycobiology" ]
22,414,431
https://en.wikipedia.org/wiki/Monosaccharide%20nomenclature
Monosaccharide nomenclature is the naming system of the building blocks of carbohydrates, the monosaccharides, which may be monomers or part of a larger polymer. Monosaccharides are subunits that cannot be further hydrolysed into simpler units. Depending on the number of carbon atoms, they are classified into trioses, tetroses, pentoses, hexoses, etc., and are further classified into aldoses and ketoses depending on the type of functional group present. Systematic name of molecular graph The elementary formula of a simple monosaccharide is CnH2nOn, where the integer n is at least 3 and rarely greater than 7. Simple monosaccharides may be named generically based on the number of carbon atoms n: trioses, tetroses, pentoses, hexoses, etc. Every simple monosaccharide has an acyclic (open chain) form, which can be written as H–(CHOH)x–(C=O)–(CHOH)y–H; that is, a straight chain of carbon atoms, one of which is a carbonyl group, all the others bearing a hydrogen -H and a hydroxyl -OH each, with one extra hydrogen at either end. The carbons of the chain are conventionally numbered from 1 to n, starting from the end which is closest to the carbonyl. If the carbonyl is at the very beginning of the chain (carbon 1), the monosaccharide is said to be an aldose, otherwise it is a ketose. These names can be combined with the chain length prefix, as in aldohexose or ketopentose. Most ketoses found in nature have the carbonyl in position 2; when that is not the case, one uses a numeric prefix to indicate the carbonyl's position. Thus for example, aldohexose means H(C=O)(CHOH)5H, ketopentose means H(CHOH)(C=O)(CHOH)3H, and 3-ketopentose means H(CHOH)2(C=O)(CHOH)2H. An alternative nomenclature uses the suffix '-ose' only for aldoses, and '-ulose' for ketoses. The position of the carbonyl (when it is not 1 or 2) is indicated by a numerical infix. For example, hexose in this nomenclature means H(C=O)(CHOH)5H, pentulose means H(CHOH)(C=O)(CHOH)3H, and hexa-3-ulose means H(CHOH)2(C=O)(CHOH)3H. Naming of acyclic stereoisomers Open-chain monosaccharides with the same molecular graph may exist as two or more stereoisomers. The Fischer projection is a systematic way of drawing the skeletal formula of an open-chain monosaccharide so that each stereoisomer is uniquely identified. Two isomers whose molecules are mirror-images of each other are identified by the prefixes 'D-' or 'L-', according to the handedness of the chiral carbon atom that is farthest from the carbonyl. In the Fischer projection, that is the second carbon from the bottom; the prefix is 'D-' or 'L-' according to whether the hydroxyl on that carbon lies to the right or left of the backbone, respectively. If the molecular graph is symmetrical (H(CHOH)x(C=O)(CHOH)xH) and the two halves are mirror images of each other, then the molecule is identical to its mirror image, and there is no separate 'D-' or 'L-' form. A distinct common name, such as "glucose" or "ribose", is traditionally assigned to each pair of mirror-image stereoisomers, and to each achiral stereoisomer. These names have standard three-letter abbreviations, such as 'Glc' for glucose and 'Rib' for ribose. Another nomenclature uses the systematic name of the molecular graph, a 'D-' or 'L-' prefix to indicate the position of the last chiral hydroxyl on the Fischer diagram (as above), and another italic prefix to indicate the positions of the remaining hydroxyls relative to the first one, read from bottom to top in the diagram, skipping the keto group if any.
These prefixes are attached to the systematic name of the molecular graph. So for example, D-glucose is D-gluco-hexose, D-ribose is D-ribo-pentose, and D-psicose is D-ribo-hexulose. Note that, in this nomenclature, mirror-image isomers differ only in the 'D'/'L' prefix, even though all their hydroxyls are reversed. The following tables show the Fischer projections of selected monosaccharides (in open-chain form), with their conventional names. The table shows all aldoses with 3 to 6 carbon atoms, and a few ketoses. For chiral molecules, only the 'D-' form (with the next-to-last hydroxyl on the right side) is shown; the corresponding 'L-' forms have mirror-image structures. Some of these monosaccharides are only synthetically prepared in the laboratory and not found in nature. Names of aldoses Names of ketoses Names of 3-ketoses Cyclic forms For monosaccharides in their cyclic form, an infix is placed before the '-ose', '-ulose', or '-n-ulose' suffix to specify the ring size. The infix is "furan" for a 5-atom ring, "pyran" for 6, "septan" for 7, and so on. Ring closure creates another chiral center at the anomeric carbon (the one with the hemiacetal or acetal functionality), and therefore each open-chain stereoisomer gives rise to two distinct stereoisomers (anomers). These are identified by the prefixes 'α-' and 'β-', which denote the configuration of the anomeric carbon relative to that of the anomeric reference atom, the stereocenter that determines the D/L assignment. If, in the Fischer projection, the exocyclic oxygen at the anomeric carbon lies on the same side as the oxygen attached to the anomeric reference atom, the configuration is 'α-'; if it lies on the opposite side, the configuration is 'β-'. Examples Glycosides Glycosides are saccharides in which the hydroxyl -OH at the anomeric centre is replaced by an oxygen-bridged group -OR. The carbohydrate part of the molecule is called the glycone, the -O- bridge is the glycosidic oxygen, and the attached group is the aglycone. Glycosides are named by giving the group derived from the aglyconic alcohol HOR, followed by the saccharide name with the '-e' ending replaced by '-ide'; as in phenyl D-glucopyranoside. Modified sugars Modification of a sugar is generally done by replacing one or more –OH groups with other functional groups at all positions except C-1. Rules for the nomenclature of modified sugars: State if the sugar is a deoxy sugar, which means the –OH group is replaced by H. Specify the position of deoxygenation. If there is a substituent other than H in the place of –OH, specify what it is. Specify the relative configuration of all stereogenic centres (manno, gluco etc.). Specify the ring size (furanose, pyranose etc.) and anomeric configuration (α or β). State the chain length only in situations where –OH is replaced with H. Alphabetize all the substituent groups (deoxy, -iodo, -amino etc.). Di-, tri- etc. prefixes do not count. Examples Protected sugars Sugars in which –OH is protected by some modification are called protected sugars. Rules for the nomenclature of protected sugars: Specify the number of particular protecting groups (di, tri, tetra etc.). List groups alphabetically along with all other substituents (di, tri prefixes do not count). See also Carbohydrate conformation Symbol Nomenclature For Glycans Polysaccharide Oligosaccharide Oligosaccharide nomenclature References Chemical nomenclature Carbohydrates Carbohydrate chemistry
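The D/L rule described earlier in this article (the prefix is set by which side of the Fischer projection the hydroxyl on the chiral carbon farthest from the carbonyl falls on) can be written as a tiny function. This is my own simplified illustration for open-chain aldoses, with each CHOH carbon encoded as 'R' (hydroxyl on the right) or 'L' (hydroxyl on the left), listed from C2 downward.

def dl_prefix(hydroxyl_sides):
    # the configurational prefix comes from the lowest (highest-numbered) chiral carbon
    if not hydroxyl_sides:
        raise ValueError("no chiral centres supplied")
    return "D" if hydroxyl_sides[-1] == "R" else "L"

print(dl_prefix(["R", "L", "R", "R"]))   # D-glucose: OH right, left, right, right -> 'D'
print(dl_prefix(["L", "R", "L", "L"]))   # its mirror image (L-glucose)            -> 'L'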
Monosaccharide nomenclature
[ "Chemistry" ]
1,846
[ "Biomolecules by chemical classification", "Carbohydrates", "Organic compounds", "Carbohydrate chemistry", "Chemical synthesis", "nan", "Glycobiology" ]
22,416,553
https://en.wikipedia.org/wiki/Tropospheric%20propagation
Tropospheric propagation describes electromagnetic propagation in relation to the troposphere. The service area from a VHF or UHF radio transmitter extends to just beyond the optical horizon, at which point signals start to rapidly reduce in strength. Viewers living in such a "deep fringe" reception area will notice that during certain conditions, weak signals normally masked by noise increase in signal strength to allow quality reception. Such conditions are related to the current state of the troposphere. Tropospheric propagated signals travel in the part of the atmosphere adjacent to the surface and extending to some 25,000 feet (8 km). Such signals are thus directly affected by weather conditions extending over some hundreds of miles. During very settled, warm anticyclonic weather (i.e., high pressure), usually weak signals from distant transmitters improve in strength. Another symptom during such conditions may be interference to the local transmitter resulting in co-channel interference, usually horizontal lines or an extra floating picture with analog broadcasts and break-up with digital broadcasts. A settled high-pressure system gives the characteristic conditions for enhanced tropospheric propagation, in particular favouring signals which travel along the prevailing isobar pattern (rather than across it). Such weather conditions can occur at any time, but generally the summer and autumn months are the best periods. In certain favourable locations, enhanced tropospheric propagation may enable reception of ultra high frequency (UHF) TV signals up to or more. The observable characteristics of such high-pressure systems are usually clear, cloudless days with little or no wind. At sunset the upper air cools, as does the surface temperature, but at different rates. This produces a boundary or temperature gradient, which allows an inversion level to form – a similar effect occurs at sunrise. The inversion is capable of allowing very high frequency (VHF) and UHF signal propagation well beyond the normal radio horizon distance. The inversion effectively reduces sky wave radiation from a transmitter – normally VHF and UHF signals travel on into space when they reach the horizon, the refractive index of the ionosphere preventing signal return. With temperature inversion, however, the signal is to a large extent refracted over the horizon rather than continuing along a direct path into outer space. Fog also produces good tropospheric results, again due to inversion effects. Fog occurs during high-pressure weather, and if such conditions result in a large belt of fog with clear sky above, there will be heating of the upper fog level and thus an inversion. This situation often arises towards night fall, continues overnight and clears with the sunrise over a period of around 4 – 5 hours. Tropospheric ducting Tropospheric ducting is a type of radio propagation that tends to happen during periods of stable, anticyclonic weather. In this propagation method, when the signal encounters a rise in temperature in the atmosphere instead of the normal decrease (known as a temperature inversion), the higher refractive index of the atmosphere there will cause the signal to be bent. 
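The horizon-limited service area mentioned at the start of this article can be estimated with the standard 4/3-earth approximation used in radio engineering, d ≈ 4.12·√h (d in kilometres, h in metres, assuming average refraction). The sketch below is my own illustration; the antenna heights are hypothetical.

import math

def radio_horizon_km(height_m):
    return 4.12 * math.sqrt(height_m)

def max_line_of_sight_km(tx_height_m, rx_height_m):
    # the usable path is roughly the sum of the two horizon distances
    return radio_horizon_km(tx_height_m) + radio_horizon_km(rx_height_m)

print(f"{max_line_of_sight_km(300, 10):.0f} km")   # ~84 km for a 300 m mast and a 10 m aerial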
Tropospheric ducting affects all frequencies, and signals enhanced this way tend to travel up to (though some people have received "tropo" beyond 1,000 miles / 1,600 km), while with tropospheric-bending, stable signals with good signal strength from 500+ miles (800+ km) away are not uncommon when the refractive index of the atmosphere is fairly high. Tropospheric ducting of radio and television signals is relatively common during the summer and autumn months, and is the result of change in the refractive index of the atmosphere at the boundary between air masses of different temperatures and humidities. Using an analogy, it can be said that the denser air at ground level slows the wave front a little more than does the rare upper air, imparting a downward curve to the wave travel. Ducting can occur on a very large scale when a large mass of cold air is overrun by warm air. This is termed a temperature inversion, and the boundary between the two air masses may extend for or more along a stationary weather front. Temperature inversions occur most frequently along coastal areas bordering large bodies of water. This is the result of natural onshore movement of cool, humid air shortly after sunset when the ground air cools more quickly than the upper air layers. The same action may take place in the morning when the rising sun warms the upper layers. Even though tropospheric ducting has been occasionally observed down to 40 MHz, the signal levels are usually very weak. Higher frequencies above 90 MHz are generally more favourably propagated. High mountainous areas and undulating terrain between the transmitter and receiver can form an effective barrier to tropospheric signals. Ideally, a relatively flat land path between the transmitter and receiver is ideal for tropospheric ducting. Sea paths also tend to produce superior results. In certain parts of the world, notably the Mediterranean Sea and the Persian Gulf, tropospheric ducting conditions can become established for many months of the year to the extent that viewers regularly receive quality reception of signals over distances of . Such conditions are normally optimum during very hot settled summer weather. Tropospheric ducting over water, particularly between California and Hawaii, Brazil and Africa, Australia and New Zealand, Australia and Indonesia, Strait of Florida, and Bahrain and Pakistan, has produced VHF/UHF reception ranging from 1000 to 3,000 miles (1,600 – 4,800 km). A US listening post was built in Ethiopia to exploit a common ducting of signals from southern Russia. Tropospheric signals exhibit a slow cycle of fading and will occasionally produce signals sufficiently strong for noise-free stereo, reception of Radio Data System (RDS) data, and solid locks of HD Radio streams on FM, noise-free, color TV pictures, or stable DTV reception, as well stable DAB Radio reception. With DVB-T it can also enable a wide SFN, so long as the two transmitters are within a guard interval and are almost equidistant from the receiver as well as synchronised. However, if they are not synchronised and are not equidistant they will interfere with each other. Virtually all long-distance reception of digital television occurs by tropospheric ducting (due to most, but not all, TV stations broadcasting in the UHF band). Notable and record distance tropospheric DX receptions "DXing is the art and science of listening to distant stations (D=distance X=xmitter or transmitter)." 
The ARRL, association for amateur radio maintains the list of North American distance records, which includes tropo results. On October 18, 1975, Rijn Muntjewerff, the Netherlands, received UHF channel E34 Pajala, Sweden, at a distance of . On June 13, 1989, Shel Remington, Keaau, Hawaii, received several 88–108 MHz FM signals from Tijuana, Mexico, at a distance of . Throughout the 1990s, Fernando Garcia, located at what could be considered an ideal tropospheric DX location near Monterrey, Mexico, received numerous 1,000+ mile (1,600+ km) stations via tropospheric propagation, both over the Gulf of Mexico and past land. Among his receptions are WGNT-27 from Portsmouth, Virginia, at a distance of and low-power (LPTV) station W38BB from Raleigh, North Carolina, at a distance of On May 11, 2003, Jeff Kruszka, living in south Louisiana, received a few UHF DTV signals from 800+ miles. The longest of these was WNCN-DT, channel 55, Goldsboro, North Carolina, at a distance of (at the time, the record for UHF DTV). On December 9, 2004, Polish DXer Maciej Ługowski received "Five" TV station on UHF ch.37 from London-Croydon transmitter and BBC2 UHF ch.46 from Bluebell Hill transmitter near Warsaw, Poland at and , respectively. On October 15, 2006, a German DXer known on YouTube as EifelDX received the Norge Mux on channel E58, transmitter Oslo, with a distance of . On the late evening of June 19, 2007 and into the early morning hours of June 20, 2007, three DXers in eastern Massachusetts, Jeff Lehmann, Keith McGinnis, and Roy Barstow, received FM signals from southern Florida via tropo. All three logged WEAT 104.3 West Palm Beach, Florida, and WRMF 97.9 Palm Beach, Florida, at distances of around , and Barstow logged WHDR 93.1 Miami, Florida, at a distance of . On December 17, 2007, Polish DXer Maciej Ługowski received BBC Radio Scotland on 93,7 MHz from Keelylang Hill (Orkney Islands) transmitter near Warsaw, Poland at distance. BBC Scotland reception continued for next two days. On November 3, 2008, Swedish Radio Amateur Kjell Jarl SM7GVF contacted Russian Radio Amateur RA6HHT at a distance of on 144Mhz. On April 23, 2009, a San Antonio-area DXer received WFTS-TV 28's digital signal from Tampa, Florida, at a distance of . On the late evening of August 24 into the afternoon of August 25, 2009, a DX'er in Burnt River, Ontario, Canada, received several FM radio stations via tropo from Arkansas, Illinois, Iowa, Kansas, Michigan, Missouri, Ohio, Oklahoma, Pennsylvania, and Wisconsin. On September 28, 2016, European tropospheric FM DX record was newly set by Jürgen Bartels in Süllwarden, Northern Germany who received Spanish station RNE5TN on 93.7 MHz from Santiago de Compostela/Monte Pedroso transmitter at distance. On September 27 and 28, 2017, various DXers in northeastern Europe observed extreme ducting in VHF broadcast band. Top distance was achieved by Łukasz K. in Tomaszów Mazowiecki, Poland, who reported signals from Kolari transmitter, northern Finland at . On October 10, 2018, Ukrainian DXer Vladimir Doroshenko (MrVlaDor) received a signal from Danish transmitter Holstebro/Mejrup in Dnipro at a distance of . It sets the new tropo FM DX record for Europe. At the same time, FM DXers in Poland received FM radiostations from Moscow for the first time via troposphere at distances – . On July 16, 2023, an Australian DXer known on YouTube as VK2KRR received UHF television signals from Port Pirie at The Rock Hill in NSW, at a distance of . 
This ultimately sets a new tropo UHF DTV DX record for Australia. See also MW DX Skywave Radio propagation Tropospheric scatter Velocity of propagation Thermal fade Clear-channel station Federal Standard 1037C Looming and similar refraction phenomena References External links Tropospheric Ducting YouTube Channel of FMDXUA – oldtvguides.com Radio frequency propagation th:ทีวีดีเอกซ์
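Relating back to the ducting mechanism described above: radio meteorology commonly expresses the trapping condition through the modified refractivity M = N + 0.157·h (h in metres); a layer in which M decreases with height can duct VHF/UHF signals. The sketch below uses a made-up sounding; the profile values are hypothetical, not measurements.

def modified_refractivity(n_units, height_m):
    return n_units + 0.157 * height_m

def duct_layers(profile):
    # profile: (height_m, N) pairs ordered by height; return layers where dM/dh < 0
    m = [(h, modified_refractivity(n, h)) for h, n in profile]
    return [(h1, h2) for (h1, m1), (h2, m2) in zip(m, m[1:]) if m2 < m1]

profile = [(0, 330), (100, 320), (250, 280), (500, 265), (1000, 245)]
print(duct_layers(profile))   # -> [(100, 250)], a trapping layer across the inversion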
Tropospheric propagation
[ "Physics" ]
2,330
[ "Physical phenomena", "Spectrum (physical sciences)", "Radio frequency propagation", "Electromagnetic spectrum", "Waves" ]
12,947,073
https://en.wikipedia.org/wiki/Hepatitis%20B%20vaccine
Hepatitis B vaccine is a vaccine that prevents hepatitis B. The first dose is recommended within 24 hours of birth with either two or three more doses given after that. This includes those with poor immune function such as from HIV/AIDS and those born premature. It is also recommended that health-care workers be vaccinated. In healthy people, routine immunization results in more than 95% of people being protected. Blood testing to verify that the vaccine has worked is recommended in those at high risk. Additional doses may be needed in people with poor immune function but are not necessary for most people. In those who have been exposed to the hepatitis B virus (HBV) but not immunized, hepatitis B immune globulin should be given in addition to the vaccine. The vaccine is given by injection into a muscle. Serious side effects from the hepatitis B vaccine are very uncommon. Pain may occur at the site of injection. It is safe for use during pregnancy or while breastfeeding. It has not been linked to Guillain–Barré syndrome. Hepatitis B vaccines are produced with recombinant DNA techniques and contain immunologic adjuvant. They are available both by themselves and in combination with other vaccines. The first hepatitis B vaccine was approved in the United States in 1981. A recombinant version came to market in 1986. It is on the World Health Organization's List of Essential Medicines. Both versions were developed by Maurice Hilleman and his team. Medical uses In the United States vaccination is recommended for nearly all babies at birth. Many countries routinely vaccinate infants against hepatitis B. In countries with high rates of hepatitis B infection, vaccination of newborns has not only reduced the risk of infection but has also led to a marked reduction in liver cancer. This was reported in Taiwan where the implementation of a nationwide hepatitis B vaccination program in 1984 was associated with a decline in the incidence of childhood hepatocellular carcinoma. In the UK, the vaccine is offered to men who have sex with men (MSM), usually as part of a sexual health check-up. A similar situation is in operation in Ireland. In many areas, vaccination against hepatitis B is also required for all health-care and laboratory staff. Both types of the vaccine, the plasma-derived vaccine (PDV) and recombinant vaccine (RV), seems to be able to elicit similar protective anti-HBs levels. The US Centers for Disease Control and Prevention (CDC) issued recommendations for vaccination against hepatitis B among patients with diabetes mellitus. The World Health Organization (WHO) recommends a pentavalent vaccine, combining vaccines against diphtheria, tetanus, pertussis and Haemophilus influenzae type B with the vaccine against hepatitis B. There is not yet sufficient evidence on how effective this pentavalent vaccine is compared to the individual vaccines. A pentavalent vaccine combining vaccines against diphtheria, tetanus, pertussis, hepatitis B, and poliomyelitis is approved in the U.S. and is recommended by the Advisory Committee on Immunization Practices (ACIP). Hepatitis B vaccination, hepatitis B immunoglobulin, and the combination of hepatitis B vaccine plus hepatitis B immunoglobulin, all are considered as preventive for babies born to mothers infected with hepatitis B virus (HBV). The combination is superior for protecting these infants. The effectiveness of being vaccinated during pregnancy to prevent vertical transmission of hepatitis B to infants has not been studied. 
Hepatitis B immunoglobulin before birth has not been well studied. Effectiveness Studies have found that that immune memory against HepB is sustained for at least 30 years after vaccination, and protects against clinical disease and chronic HepB infection, even in cases where anti-hepatitis B surface antigen (anti-Hbs) levels decline below detectable levels. Testing to confirm successful immunization or sustained immunity is not necessary or recommended for most people, but is recommended for infants born to a mother who tests positive for HBsAg or whose HBsAg status is not known; for healthcare and public safety workers; for immunocompromised people such as haemodialysis patients, HIV patients, haematopoietic stem cell transplant [HSCT] recipients, or people receiving chemotherapy; and for sexual partners of HBsAg-positive people. An anti-Hbs antibody level above 100mIU/ml is deemed adequate and occurs in about 85–90% of individuals. An antibody level between 10 and 100mIU/ml is considered a poor response, and these people should receive a single booster vaccination at this time, but do not need further retesting. People who fail to respond (anti-Hbs antibody level below 10mIU/ml) should be tested to exclude current or past hepatitis B infection, and given a repeat course of three vaccinations, followed by further retesting 1–4 months after the second course. Those who still do not respond to a second course of vaccination may respond to intradermal injection or to a high dose vaccine or to a double dose of a combined hepatitis A and B vaccine. Those who still fail to respond will require hepatitis B immunoglobulin (HBIG) if later exposed to the hepatitis B virus. Poor responses are mostly associated with being over the age of 40 years, obesity, celiac disease, and tobacco smoking, and also in alcoholics, especially if with advanced liver disease. People who are immunosuppressed or on dialysis may not respond as well and require larger or more frequent doses of vaccine. At least one study suggests that hepatitis B vaccination is less effective in patients with HIV. The immune response to the hepatitis B vaccine can be impaired by the presence of parasitic infections such as helminthiasis. Duration of protection The Hepatitis B vaccine is now believed to provide indefinite protection. Older literature assumed that immunity would wane with antibody titers and only effectively last five to seven years, but immune-challenge studies show that even after 30 years, the immune system maintains the ability to produce an anamnestic response, i.e. to rapidly bump up antibody levels when the previously seen antigen is detected. This shows that the immunological memory is not affected by the loss of antibody levels. As a result, subsequent antibody testing and administration of booster doses is not required in successfully vaccinated immunocompetent individuals. UK guidelines suggest that people who respond to the vaccine and are at risk of occupational exposure, such as for healthcare workers, a single booster is recommended five years after initial immunization. Side effects Serious side effects from the hepatitis B vaccine are very rare. Pain may occur at the site of injection. It is generally considered safe for use, during pregnancy or while breastfeeding. It has not been linked to Guillain–Barré syndrome. Multiple sclerosis Several studies have looked for an association between recombinant hepatitis B vaccine and multiple sclerosis (MS) in adults. 
Most studies do not support a causal relationship between hepatitis B vaccination and demyelinating diseases such as MS. A 2004 study reported a significant increase in risk within three years of vaccination. Some of these studies were criticized for methodological problems. This controversy created public misgivings about hepatitis B vaccination, and hepatitis B vaccination in children remained low in several countries. A 2006 study concluded that evidence did not support an association between hepatitis B vaccination and sudden infant death syndrome, chronic fatigue syndrome, or multiple sclerosis. A 2007 study found that the vaccination does not seem to increase the risk of a first episode of MS in childhood. Hepatitis B vaccination has not been linked to onset of autoimmune diseases in adulthood. Usage The following is a list of countries by the percentage of infants receiving three doses of hepatitis B vaccine as published by the World Health Organization (WHO) in 2017. History Preliminary work In 1963, the American physician/geneticist Baruch Blumberg, working at the Fox Chase Cancer Center, discovered what he called the "Australia Antigen" (HBsAg) in the serum of an Australian Aboriginal person. In 1968, this protein was found to be part of the virus that causes "serum hepatitis" (hepatitis B) by virologist Alfred Prince. In 1976, Blumberg won the Nobel Prize in Physiology or Medicine for his work on hepatitis B (sharing it with Daniel Carleton Gajdusek for his work on kuru). Blumberg had identified Australia antigen, the important first step, and later discovered the way to make the first hepatitis B vaccine. Blumberg's vaccine was a unique approach to the production of a vaccine; that is, obtaining the immunizing antigen directly from the blood of human carriers of the virus. In October 1969, acting on behalf of the Institute for Cancer Research, they applied for a patent for the production of a vaccine. This patent [USP 3,636,191] was subsequently (January 1972) granted in the United States and other countries. In 2002, Blumberg published a book, Hepatitis B: The Hunt for a Killer Virus. In the book, Blumberg wrote: “It took some time before the concept was accepted by virologists and vaccine manufacturers who were more accustomed to dealing with vaccines produced by attenuation of viruses, or the use of killed viruses produced in tissue culture, or related viruses that were non-pathogenic protective (i.e., smallpox). However, by 1971, we were able to interest Merck, which had considerable experience with vaccines." Blood-derived vaccine During the next few years, a series of human and primate observations by scientists including Maurice Hilleman (who was responsible for vaccines at Merck), S. Krugman, R. Purcell, P. Maupas, and others provided additional support for the vaccine. In 1980, the results of the first field trial were published by W. Szmuness and his colleagues in New York City." The American microbiologist/vaccinologist Maurice Hilleman at Merck used three treatments (pepsin, urea and formaldehyde) of blood serum together with rigorous filtration to yield a product that could be used as a safe vaccine. Hilleman hypothesized that he could make an HBV vaccine by injecting patients with hepatitis B surface protein. In theory, this would be very safe, as these excess surface proteins lacked infectious viral DNA. 
The immune system, recognizing the surface proteins as foreign, would manufacture specially shaped antibodies, custom-made to bind to, and destroy, these proteins. Then, in the future, if the patient were infected with HBV, the immune system could promptly deploy protective antibodies, destroying the viruses before they could do any harm. Hilleman collected blood from gay men and intravenous drug users—groups known to be at risk for viral hepatitis. This was in the late 1970s when HIV was yet unknown to medicine. In addition to the sought-after hepatitis B surface proteins, the blood samples likely contained HIV. Hilleman devised a multistep process to purify this blood so that only the hepatitis B surface proteins remained. Every known virus was killed by this process, and Hilleman was confident that the vaccine was safe. The first large-scale trials for the blood-derived vaccine were performed on gay men, due to their high-risk status. Later, Hilleman's vaccine was falsely blamed for igniting the AIDS epidemic. (See Wolf Szmuness) But, although the purified blood vaccine seemed questionable, it was determined to have indeed been free of HIV. The purification process had destroyed all viruses—including HIV. The vaccine was approved in 1981. Recombinant vaccine The blood-derived hepatitis B vaccine was withdrawn from the marketplace in 1986, replaced by Maurice Hilleman's improved recombinant hepatitis B vaccine which was approved by the FDA on 23 July 1986. It was the first human vaccine produced by recombinant DNA methods. For this work, scientists at Merck & Co. collaborated with William J. Rutter and colleagues at the University of California at San Francisco, as well as Benjamin Hall and colleagues at the University of Washington. In 1981, William J. Rutter, Pablo DT Valenzuela and Edward Penhoet (UC Berkeley) co-founded the Chiron Corporation in Emeryville, California, which collaborated with Merck. The recombinant vaccine is based on a Hepatitis B surface antigen (HBsAg) gene inserted into yeast (Saccharomyces cerevisiae) cells which are free of any concerns associated with human blood products. This allows the yeast to produce only the noninfectious surface protein, without any danger of introducing actual viral DNA into the final product. The vaccine contains the adjuvant amorphous aluminum hydroxyphosphate sulfate. In 2017, a two-dose HBV vaccine for adults, Heplisav-B gained U.S. Food and Drug Administration (FDA) approval. It uses recombinant HB surface antigen, similar to previous vaccines, but includes a novel CpG 1018 adjuvant, a 22-mer phosphorothioate-linked oligodeoxynucleotide. It was non-inferior concerning immunogenicity. In November 2021, Hepatitis B Vaccine (Recombinant) (Prehevbrio) was approved by the FDA. Immunization schedule The US CDC ACIP first recommended the vaccine for all newborns in 1991. Before this, the vaccine was only recommended for high-risk groups. As of the 1991 recommendation for universal newborn Hepatitis B vaccination, no other vaccines were routinely recommended for all newborns in the United States and remains one of the very few vaccines routinely recommended for administration at birth. Manufacture The vaccine contains one of the viral envelope proteins, Hepatitis B surface antigen (HBsAg). It is produced by yeast cells, into which the gene for HBsAg has been inserted. Afterward an immune system antibody to HBsAg is established in the bloodstream. The antibody is known as anti-HBs. 
This antibody and immune system memory then provide immunity to hepatitis B virus (HBV) infection. Society and culture Legal status On 10 December 2020, the Committee for Medicinal Products for Human Use (CHMP) of the European Medicines Agency (EMA) adopted a positive opinion, recommending the granting of a marketing authorization for the medicinal product Heplisav B, intended for the active immunization against hepatitis B virus infection (HBV). The applicant for this medicinal product is Dynavax GmbH. It was approved for medical use in the European Union in February 2021. On 24 February 2022, the CHMP adopted a positive opinion, recommending the granting of a marketing authorization for the medicinal product PreHevbri, intended for the active immunization against hepatitis B virus infection (HBV). The applicant for this medicinal product is VBI Vaccines B.V. PreHevbri was approved for medical use in the European Union in April 2022. Brand names The common brands available are Recombivax HB (Merck), Engerix-B (GSK), Elovac B (Human Biologicals Institute, a division of Indian Immunologicals Limited), Genevac B (Serum Institute), Shanvac B, Heplisav-B, Prehevbrio, and Euvax B (LG Chem). Twinrix (GSK) is a vaccine against hepatitis A and hepatitis B. Pediarix is a vaccine against diphtheria, tetanus, pertussis, hepatitis B, and poliomyelitis. Vaxelis is a vaccine against diphtheria, tetanus, pertussis, poliomyelitis, Haemophilus influenzae type B (Meningococcal Protein Conjugate), and hepatitis B. Fendrix (hepatitis B (rDNA) vaccine (adjuvanted, adsorbed)) was approved for medical use in the European Union in 2005. References Further reading External links Hepatitis B Cancer vaccines Drugs developed by GSK plc Drugs developed by Merck & Co. Recombinant proteins Subunit vaccines Hepatitis vaccines World Health Organization essential medicines (vaccines) Wikipedia medicine articles ready to translate
Hepatitis B vaccine
[ "Biology" ]
3,409
[ "Recombinant proteins", "Biotechnology products" ]
12,947,946
https://en.wikipedia.org/wiki/Flexural%20modulus
In mechanics, the flexural modulus or bending modulus is an intensive property that is computed as the ratio of stress to strain in flexural deformation, or the tendency for a material to resist bending. It is determined from the slope of a stress–strain curve produced by a flexural test (such as the ASTM D790), and uses units of force per area. The flexural modulus defined using the 2-point (cantilever) and 3-point bend tests assumes a linear stress–strain response. For a 3-point test of a rectangular beam behaving as an isotropic linear material, where w and h are the width and height of the beam, I is the second moment of area of the beam's cross-section, L is the distance between the two outer supports, and d is the deflection due to the load F applied at the middle of the beam, the flexural modulus is E_flex = L³F / (4wh³d). From elastic beam theory the mid-span deflection is d = L³F / (48IE), and for a rectangular beam the second moment of area is I = wh³/12, thus E_flex = E (the elastic modulus). For very small strains in isotropic materials – like glass, metal or polymer – flexural or bending modulus of elasticity is equivalent to the tensile modulus (Young's modulus) or compressive modulus of elasticity. However, in anisotropic materials, for example wood, these values may not be equivalent. Moreover, composite materials like fiber-reinforced polymers or biological tissues are inhomogeneous combinations of two or more materials, each with different material properties, therefore their tensile, compressive, and flexural moduli usually are not equivalent. Related pages Stiffness References Materials science Elasticity (physics)
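As a quick numerical illustration of the relation just given, the minimal Python sketch below evaluates E_flex = L³F/(4wh³d) for made-up specimen dimensions and load (they are illustration values, not data from ASTM D790 or any real test) and checks that beam theory with I = wh³/12 returns the same mid-span deflection.

```python
# Minimal numerical check of the 3-point-bend relation E_flex = L^3 F / (4 w h^3 d).
# The specimen dimensions and load below are made-up illustration values.

def flexural_modulus(L, w, h, F, d):
    """Flexural modulus (Pa) from a 3-point bend test on a rectangular beam.

    L: support span (m), w: width (m), h: height (m),
    F: applied mid-span load (N), d: mid-span deflection (m).
    """
    return (L**3 * F) / (4 * w * h**3 * d)

if __name__ == "__main__":
    L, w, h = 0.10, 0.010, 0.004       # 100 mm span, 10 mm x 4 mm cross-section
    F, d = 50.0, 0.0008                # 50 N load producing 0.8 mm deflection
    E = flexural_modulus(L, w, h, F, d)
    print(f"E_flex ~ {E/1e9:.1f} GPa")  # about 24.4 GPa for these numbers

    # Consistency with beam theory: the predicted deflection d = L^3 F / (48 I E),
    # with I = w h^3 / 12 for the rectangular section, recovers the input d.
    I = w * h**3 / 12
    d_check = L**3 * F / (48 * I * E)
    assert abs(d_check - d) < 1e-12
```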
Flexural modulus
[ "Physics", "Materials_science", "Engineering" ]
339
[ "Physical phenomena", "Materials science stubs", "Applied and interdisciplinary physics", "Elasticity (physics)", "Deformation (mechanics)", "Materials science", "nan", "Physical properties" ]
12,952,238
https://en.wikipedia.org/wiki/Algebraic%20topology%20%28object%29
In mathematics, the algebraic topology on the set of group representations from G to a topological group H is the topology of pointwise convergence, i.e., a sequence of representations ρn converges to a representation ρ if ρn(g) converges to ρ(g) for every g in G. This terminology is often used in the case of the algebraic topology on the set of discrete, faithful representations of a Kleinian group into PSL(2,C). Another topology, the geometric topology (also called the Chabauty topology), can be put on the set of images of the representations, and its closure can include extra Kleinian groups that are not images of points in the closure in the algebraic topology. This fundamental distinction is behind the phenomenon of hyperbolic Dehn surgery and plays an important role in the general theory of hyperbolic 3-manifolds. References William Thurston, The geometry and topology of 3-manifolds, Princeton lecture notes (1978–1981). 3-manifolds
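Written out symbolically, the convergence condition above (using the same ρ notation; nothing beyond the stated definition is assumed) is:

```latex
\rho_n \xrightarrow{\ \text{alg}\ } \rho
\quad\Longleftrightarrow\quad
\rho_n(g) \longrightarrow \rho(g) \ \text{in } H \quad \text{for every } g \in G .
```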
Algebraic topology (object)
[ "Mathematics" ]
202
[ "Algebra stubs", "Algebraic topology", "Fields of abstract algebra", "Topology", "Algebra" ]
12,953,027
https://en.wikipedia.org/wiki/Tunnel%20junction
In electronics, a tunnel junction is a barrier, such as a thin insulating layer or electric potential, between two electrically conducting materials. Electrons (or quasiparticles) pass through the barrier by the process of quantum tunnelling. Classically, the electron has zero probability of passing through the barrier. However, according to quantum mechanics, the electron has a non-zero wave amplitude in the barrier, and hence it has some probability of passing through the barrier. Tunnel junctions serve a variety of different purposes. Multijunction photovoltaic cell In multijunction photovoltaic cells, tunnel junctions form the connections between consecutive p-n junctions. They function as an ohmic electrical contact in the middle of a semiconductor device. Magnetic tunnel junction In magnetic tunnel junctions, electrons tunnel through a thin insulating barrier from one magnetic material to another. This can serve as a basis for a magnetic detector. Superconducting tunnel junction In superconducting tunnel junctions, two superconducting electrodes are separated by a non-superconducting barrier. Cooper pairs carry the supercurrent through the barrier by quantum tunneling, a phenomenon known as the Josephson effect. This setup can form the basis for extremely sensitive magnetometers, known as SQUIDs, as well as many other devices. Tunnel diode A tunnel diode allows the tunneling of electrons over a certain range of voltages, which allows tunnel diodes to be used for generating high-frequency signals. Scanning tunneling microscope In scanning tunneling microscopy (STM), the tip/air/substrate (metal-insulator-metal) can be viewed as a tunnel junction. References Quantum electronics Electrodes Mesoscopic physics
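To make the "non-zero probability of passing through the barrier" statement concrete, here is a minimal Python sketch of the standard textbook transmission coefficient for an electron and a rectangular potential barrier. The 1 eV electron energy, 3 eV barrier height and the thicknesses are arbitrary illustration values, not parameters of any particular junction.

```python
# Illustrative only: transmission probability of an electron through a rectangular
# potential barrier, a textbook stand-in for the thin insulating layer of a tunnel
# junction. All parameter values below are made up for the example.
import math

HBAR = 1.054571817e-34   # reduced Planck constant, J*s
M_E = 9.1093837015e-31   # electron mass, kg
EV = 1.602176634e-19     # joules per electronvolt

def transmission(E_eV, V0_eV, d_m):
    """Exact transmission coefficient for a rectangular barrier with E < V0."""
    E, V0 = E_eV * EV, V0_eV * EV
    kappa = math.sqrt(2 * M_E * (V0 - E)) / HBAR   # decay constant inside the barrier
    return 1.0 / (1.0 + (V0**2 * math.sinh(kappa * d_m)**2) / (4 * E * (V0 - E)))

if __name__ == "__main__":
    for d_nm in (0.5, 1.0, 2.0):
        T = transmission(E_eV=1.0, V0_eV=3.0, d_m=d_nm * 1e-9)
        print(f"barrier {d_nm} nm thick: T ~ {T:.3e}")
    # T falls roughly exponentially with thickness, which is why practical tunnel
    # junctions use insulating layers only a few atomic layers thick.
```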
Tunnel junction
[ "Physics", "Chemistry", "Materials_science" ]
347
[ "Quantum electronics", "Electrodes", "Quantum mechanics", "Electrochemistry", "Condensed matter physics", "Electrochemistry stubs", "Nanotechnology", "Mesoscopic physics", "Physical chemistry stubs", "Quantum physics stubs" ]
12,953,288
https://en.wikipedia.org/wiki/IEEE%20Alexander%20Graham%20Bell%20Medal
The IEEE Alexander Graham Bell Medal is an award honoring "exceptional contributions to communications and networking sciences and engineering" in the field of telecommunications. The medal is one of the highest honors awarded by the Institute of Electrical and Electronics Engineers (IEEE) for achievements in telecommunication sciences and engineering. It was instituted in 1976 by the directors of IEEE, commemorating the centennial of the invention of the telephone by Alexander Graham Bell. The award is presented either to an individual, or to a team of two or three persons. The institute's reasoning for the award was described thus: Recipients of the award receive a gold medal, bronze replica, certificate, and an honorarium. Recipients As listed by the IEEE: 1976 Amos E. Joel, Jr., William Keister, and Raymond W. Ketchledge 1977 Eberhardt Rechtin 1978 M. Robert Aaron, John S. Mayo, and Eric E. Sumner 1979 A. Christian Jacobaeus 1980 Richard R. Hough 1981 David Slepian 1982 Harold A. Rosen 1983 Stephen O. Rice 1984 Andrew J. Viterbi 1985 Charles K. Kao 1986 Bernard Widrow 1987 Joel S. Engel, Richard H. Frenkiel, and William C. Jakes, Jr. 1988 Robert M. Metcalfe 1989 Gerald R. Ash and Billy B. Oliver 1990 Paul Baran 1991 C. Chapin Cutler, John O. Limb, and Arun N. Netravali 1992 James L. Massey 1993 Donald C. Cox 1994 Hiroshi Inose 1995 Irwin M. Jacobs 1996 Tadahiro Sekimoto 1997 Vinton G. Cerf and Robert E. Kahn 1998 Richard E. Blahut 1999 David G. Messerschmitt 2000 Vladimir A. Kotelnikov 2002 Tsuneo Nakahara 2003 Joachim Hagenauer 2005 Jim K. Omura 2006 John Wozencraft 2007 Norman Abramson 2008 Gerard J. Foschini 2009 Robert McEliece 2010 John Cioffi 2011 Arogyaswami Paulraj 2012 Leonard Kleinrock 2013 Andrew Chraplyvy, Robert Tkach 2014 Dariush Divsalar 2015 Frank Kelly 2016 Roberto Padovani 2017 H. Vincent Poor 2018 Nambirajan Seshadri 2019 Teresa H. Meng 2020 Rajiv Laroia 2021 Nick McKeown 2022 Panganamala R. Kumar 2023 Erwin Hochmair, Ingeborg Hochmair 2025 Richard D. Gitlin See also Alexander Graham Bell honors and tributes IEEE Medal of Honor IEEE awards World Communication Awards References Alexander Graham Bell Medal Alexander Graham Bell Telecommunications engineering
IEEE Alexander Graham Bell Medal
[ "Engineering" ]
522
[ "Electrical engineering", "Telecommunications engineering" ]
12,953,305
https://en.wikipedia.org/wiki/IEEE%20Nikola%20Tesla%20Award
The IEEE Nikola Tesla Award is a Technical Field Award given annually to an individual or team that has made an outstanding contribution to the generation or utilization of electric power. It is awarded by the Board of Directors of the IEEE. The award is named in honor of Nikola Tesla. This award may be presented to an individual or a team. The award was established in 1975, and its first recipient was Leon T. Rosenberg, who was given the award in 1976 "for his half-century of development and design of large steam turbine driven generators and his important contributions to literature." The actual award is a plaque and honorarium. Recipients Source 2024 - Aldo Boglietti, Professor, Department of Energy, Politecnico di Torino, Torino, Italy For contributions to the magnetic and thermal modeling, design, and characterization of electrical machines. 2023 - Kiruba S. Haran, Professor, University of Illinois, Urbana, Illinois, USA For contributions to advanced high-power density electrical machinery and high-temperature, superconducting technology applications. 2022 - Peter W. Sauer, Grainger Chair Emeritus Professor of Electrical Engineering, University of Illinois, Urbana, Illinois, USA For contributions to dynamic modeling and simulation of synchronous generators and for leadership in power engineering education. 2021 - Zi-Qiang Zhu, Professor, Department of Electronic and Electrical Engineering, University of Sheffield, Sheffield, South Yorkshire, United Kingdom For contributions to the design, modeling, control, and application of ac permanent magnet machines and drives. 2020 - Akira Chiba, Professor, Tokyo Institute of Technology, Tokyo, Japan For contributions to bearingless and reluctance motors. 2019 - Tomy Sebastian, Director, Motor Drive Systems, Halla Mechatronics, Bay City, Michigan, United States For contributions to the design and application of high-performance permanent magnet synchronous machines to electric power steering. 2018 - Longya Xu, Professor, The Ohio State University, Columbus, Ohio, USA For contributions to design and control of efficient electric machines for wind power generation and electrified vehicles. 2017 - Adel Razek, Senior Research Director (Emeritus) and Professor (Honorary), The National Center for Scientific Research CNRS and CentraleSupelec, Gif Sur Yvette, France For contributions to coupled multiphysics modeling and design of electromagnetic systems. 2016 - Bruno Lequesne, President, E-Motors Consulting, LLC, Menomonee Falls, Wisconsin, USA For contributions to the design and analysis of actuators, sensors, and motors for automotive applications. 2015 - Ion Gheorghe Boldea, Professor Emeritus, Politehnica University of Timișoara, Timișoara, Romania For contributions to the design and control of rotating and linear electric machines for industry applications. 2014 - Hamid A. Toliyat, Texas A&M University (College Station, Texas) For contributions to the design, analysis, and control of fault-tolerant multiphase electric machines. 2013 - Norio Takahashi, Okayama University (Okayama, Japan) For contributions to finite element modeling, analysis, and optimal design tools of electrical machines. 2012 - Manoj R. Shah, General Electric (Niskayuna, New York) For advancements in electromagnetic design and analysis of electrical machines.
2011 - Nady Boules, General Motors (Warren, Michigan) For contributions to the design, analysis and optimization of permanent magnet machines and for advancing their utilization in the automotive industry. 2010 - Paul C. Krause, Purdue University (West Lafayette, Indiana) For outstanding contributions to the analysis of electric machinery using reference frame theory. 2009 - Donald Wayne Novotny, University of Wisconsin–Madison (Madison, Wisconsin) For pioneering contributions to the analysis and understanding of ac machine dynamic behavior and performance in adjustable-speed drives. 2008 - Timothy J. E. Miller, University of Glasgow (Glasgow, Scotland) For outstanding contributions to the advancement of computer-based design and analysis of electric machines and their industrial dissemination. 2007 - Thomas W. Nehl, Delphi Research Labs (Shelby Township, Michigan) For pioneering contributions to the simulation and design of electromechanical drives and actuators for automotive applications. 2006 - Konrad Reichert, ETH Zentrum (Zurich, Switzerland) For contributions to the development of numerical methods and computer analysis and simulation of electrical machines and devices. 2005 - Thomas M. Jahns, Grainger Professor of Power Electronics and Electrical Machines, University of Wisconsin–Madison, Madison, Wisconsin For pioneering contributions to the design and application of AC permanent magnet machines. 2004 - Sheppard Joel Salon, Professor, Electrical, Computers, and Systems Engineering Department, Rensselaer Polytechnic Institute, Troy, New York For pioneering and outstanding contributions to transient finite element computation of electric machines coupled to electronic circuits and to electro-mechanical devices. 2003 - Austin H. Bonnett, Retired-Vice President Technology Emeritus, Emerson Electric, Electrical Apparatus Service Association (EASA), National Electrical Manufacturers Association (NEMA), Electric Power Research Institute (EPRI), and United States Department of Energy and Affiliates (DOE) For leadership in the development and application of design standards, maintenance technology, and operating practices to optimize induction motor performance. 2002 - James L. Kirtley Jr., Professor, Electrical Engineering, Massachusetts Institute of Technology, Cambridge, Massachusetts For contributions to the theoretical analysis, design, and construction of high performance rotating electric machinery, including superconducting turbogenerators. 2001 - Steve Williamson, University of Manchester - Manchester, United Kingdom For the development of advanced mathematical models and computational tools for induction machine design. 2000 - Syed Abu Nasar, University of Kentucky - Lexington, Kentucky For leadership in the research, development and design of linear and rotating machines, and contributions to electrical engineering education. 1999 - Nabeel Aly Omar Demerdash, Professor and Past Chairman of the Department of Electrical and Computer Engineering, Marquette University, Milwaukee, Wisconsin For pioneering contributions to electric machine and drive system design using coupled finite-element and electrical network models. 1998 - Paul Dandeno, University of Toronto - Toronto, Ontario, Canada For contribution to modelling and application of synchronous machines, power system controls, and stability analysis. 1997 - Prabhashankar Kundur, Powertech Labs Inc.
- Surrey, British Columbia, Canada For contribution to modeling and application of synchronous machines, power system controls, and stability analysis. 1996 - John A. Tegopoulos, National Technical University of Athens - Athens, Greece For pioneering contributions in electrical machine design. 1995 - Thomas A. Lipo, University of Wisconsin–Madison For pioneering contributions to the simulation and application of electric machinery in solid-state ac motor drives. 1994 - Carl Flick, Techno-Lexic - Winter Park, Florida, Westinghouse Electric Corporation, Orlando, Florida For long-term creative contributions and leadership in the design and development of advanced high-speed generators. 1993 - Madabushi V. K. Chari, General Electric Co. - Schenectady, New York For pioneering contributions to finite element computations of nonlinear electromagnetic fields for design and analysis of electric machinery. 1992 - Thomas Herbert Barton, University of Calgary, Canada For the practical application of the generalized theory of electrical machines to A.C. and D.C. drives. 1991 - Michel E. Poloujadoff, Univ. Pierre et Marie Curie - Paris, France For contributions to the theory of electrical machinery and its application to linear induction motors. 1990 - Gordon R. Slemon, University of Toronto, Toronto, Ontario, Canada For application of modeling in electric power equipment and technical leadership in power education. 1989 - Dietrich R. Lambrecht, Siemens AG - Ruhr, W. Germany For leadership and contributions to advances in large turbine generator design, construction, and application. 1988 - Edward I. King, Westinghouse Electric Corporation. - Orlando, Florida For contributions to computer-aided analysis and design of large rotating machinery. 1987 - J. Coleman White, Electric Power Research Institute - Palo Alto, CA For contributions to the research, development, and design of ac and dc rotating machines. 1986 - Eric R. Laithwaite, The Imperial College of Science, Technology and Medicine - London, England For contributions to the development and understanding of electric machines and especially of the linear induction motor. 1985 - Eugene C. Whitney, Westinghouse Electric Corporation - Pittsburgh, PA For outstanding contributions to the development, design, and construction of large rotating electric machinery. 1984 - Herbert H. Woodson, University of Texas at Austin - Austin, Texas For contributions to power generation technology particularly in superconducting generators and magnetohydrodynamic generators. 1983 - NO AWARD 1982 - Sakae Yamamura, University of Tokyo, Tokyo, Japan For contributions to the theory of linear induction motors and the development of magnetic levitation of track vehicles. 1981 - Dean B. Harrington, General Electric Co. - Schenectady, New York For contributions to the design, development and performance analysis of large steam turbine-generators. 1980 - Philip H. Trickey, Duke University - Durham, North Carolina For advancement in the development and application of Tesla's theories through precise designs of small induction machines. 1979 - John W. Batchelor, Westinghouse Electric Corporation - E. Pittsburgh, PA For contributions to the design of large turbine driven generators and the development of related international standards. 1978 - Charles H. Holley, General Electric Co. - Schenectady, New York For contributions to the evolution of turbine generator designs with achievement in performance and reliability. 1977 - Cyril G. 
Veinott, University of Missouri For his leadership in development and application of small induction motors. 1976 - Leon T. Rosenberg, Allis-Chalmers Pwr. Sys. Inc. - West Allis, WI For his half-century of development and design of large steam turbine driven generators and his important contributions to literature. See also List of engineering awards List of prizes named after people References External links IEEE Nikola Tesla Award page at IEEE List of recipients of the IEEE Nikola Tesla Award Further reading Institute of Electrical and Electronics Engineers, "Past to present : a century of honors : the first hundred years of award winners, honorary members, past presidents, and fellows of the Institute / the Institute of Electrical and Electronics Engineers, Inc.". New York, IEEE Press, c1984. Nikola Tesla Award Tesla Award Electric power
IEEE Nikola Tesla Award
[ "Physics", "Engineering" ]
2,124
[ "Power (physics)", "Electrical engineering", "Electric power", "Physical quantities" ]
8,299,324
https://en.wikipedia.org/wiki/Internet%20Protocol%20Device%20Control
Internet Protocol Device Control (IPDC) is a 1998 specification of a communications protocol for voice over Internet Protocol (VoIP) telephony, developed by Level 3 Communications. IPDC divides the operation of telephony gateways between intelligent call routers in an Internet Protocol (IP) network and simple media gateways at the edge of the IP network and the public switched telephone network (PSTN). Internet Protocol Device Control was fused with the Simple Gateway Control Protocol (SGCP), a project independently in progress at Bellcore, to form the Media Gateway Control Protocol (MGCP). This group of protocols employs the media gateway control protocol architecture that is also the foundation of MEGACO/H.248, a similar protocol which became a standards-track protocol at the Internet Engineering Task Force (IETF). See also Session Initiation Protocol (SIP) RTP audio video profile References Network protocols Broadcast engineering
Internet Protocol Device Control
[ "Engineering" ]
186
[ "Broadcast engineering", "Electronic engineering" ]
8,299,558
https://en.wikipedia.org/wiki/Septin
Septins are a group of GTP-binding proteins expressed in all eukaryotic cells except plants. Different septins form protein complexes with each other. These complexes can further assemble into filaments, rings and gauzes. Assembled as such, septins function in cells by localizing other proteins, either by providing a scaffold to which proteins can attach, or by forming a barrier preventing the diffusion of molecules from one compartment of the cell to another, or in the cell cortex as a barrier to the diffusion of membrane-bound proteins. Septins have been implicated in the localization of cellular processes at the site of cell division, and at the cell membrane at sites where specialized structures like cilia or flagella are attached to the cell body. In yeast cells, they compartmentalize parts of the cell and build scaffolding to provide structural support during cell division at the septum, from which they derive their name. Research in human cells suggests that septins build cages around pathogenic bacteria, that immobilize and prevent them from invading other cells. As filament forming proteins, septins can be considered part of the cytoskeleton. Apart from forming non-polar filaments, septins associate with cell membranes, the cell cortex, actin filaments and microtubules. Structure Septins are P-Loop-NTPase proteins that range in weight from 30-65 kDa. Septins are highly conserved between different eukaryotic species. They are composed of a variable-length proline rich N-terminus with a basic phosphoinositide binding motif important for membrane association, a GTP-binding domain, a highly conserved Septin Unique Element domain, and a C-terminal extension including a coiled coil domain of varying length. Septins interact either via their respective GTP-binding domains, or via both their N- and C-termini. Different organisms express a different number of septins, and from those symmetric oligomers are formed. For example, in yeast the octameric complex formed is Cdc11-Cdc12-Cdc3-Cdc10-Cdc10-Cdc3-Cdc12-Cdc11. In humans, hexameric or octameric complexes are possible. Initially, it was indicated that the human complex was Sept7-Sept6-Sept2-Sept2-Sept6-Sept7; but recently this order has been revised to Sept2-Sept6-Sept7-Sept7-Sept6-Sept2 (or Sept2-Sept6-Sept7-Sept3-Sept3-Sept7-Sept6-Sept2 in case of octameric hetero-oligomers). These complexes then associate to form non-polar filaments, filament bundles, cages or ring structures in cells. Occurrence Septins are found in fungi, animals, and some eukaryotic algae but are not found in plants. In yeast There are seven different septins in Saccharomyces cerevisiae. Five of those are involved in mitosis, while two (Spr3 and Spr28) are specific to sporulation. Mitotic septins (Cdc3, Cdc10, Cdc11, Cdc12, Shs1) form a ring structure at the bud neck during cell division. They are involved in the selection of the bud-site, the positioning of the mitotic spindle, polarized growth, and cytokinesis. The sporulating septins (Spr3, Spr28) localize together with Cdc3 and Cdc11 to the edges of prospore membranes. Organization Septins form a specialised region in the cell cortex known as the septin cortex. The septin cortex undergoes several changes throughout the cell cycle: The first visible septin structure is a distinct ring which appears ~15 min before bud emergence. After bud emergence, the ring broadens to assume the shape of an hourglass around the mother-bud neck. 
During cytokinesis, the septin cortex splits into a double ring which eventually disappears. How can the septin cortex undergo such dramatic changes, although some of its functions may require it to be a stable structure? FRAP analysis has revealed that the turnover of septins at the neck undergoes multiple changes during the cell cycle. The predominant, functional conformation is characterized by a low turnover rate (frozen state), during which the septins are phosphorylated. Structural changes require a destabilization of the septin cortex (fluid state) induced by dephosphorylation prior to bud emergence, ring splitting and cell separation. The composition of the septin cortex does not only vary throughout the cell cycle but also along the mother-bud axis. This polarity of the septin network allows concentration of some proteins primarily to the mother side of the neck, some to the center and others to the bud site. Functions Scaffold The septins act as a scaffold, recruiting many proteins. These protein complexes are involved in cytokinesis, chitin deposition, cell polarity, spore formation, in the morphogenesis checkpoint, spindle alignment checkpoint and bud site selection. Cytokinesis Budding yeast cytokinesis is driven through two septin dependent, redundant processes: recruitment and contraction of the actomyosin ring and formation of the septum by vesicle fusion with the plasma membrane. In contrast to septin mutants, disruption of one single pathway only leads to a delay in cytokinesis, not complete failure of cell division. Hence, the septins are predicted to act at the most upstream level of cytokinesis. Cell polarity After the isotropic-apical switch in budding yeast, cortical components, supposedly of the exocyst and polarisome, are delocalized from the apical pole to the entire plasma membrane of the bud, but not the mother cell. The septin ring at the neck serves as a cortical barrier that prevents membrane diffusion of these factors between the two compartments. This asymmetric distribution is abolished in septin mutants. Some conditional septin mutants do not form buds at their normal axial location. Moreover, the typical localization of some bud-site-selection factors in a double ring at the neck is lost or disturbed in these mutants. This indicates that the septins may serve as anchoring site for such factors in axially budding cells. In filamentous fungi Since their discovery in S. cerevisiae, septin homologues have been found in other eukaryotic species, including filamentous fungi. Septins in filamentous fungi display a variety of different shapes within single cells, where they control aspects of filamentous morphology. Candida albicans The genome of C. albicans encodes homologues to all S. cerevisiae septins. Without Cdc3 and Cdc12 genes Candida albicans cannot proliferate, other septins affect morphology and chitin deposition, but are not essential. Candida albicans can display different morphologies of vegetative growth, which determines the appearance of septin structures. Newly forming hyphae form a septin ring at the base, Double rings form at sites of hyphal septation, and a septin cap forms at hyphal tips. Elongated septin-filaments encircle the spherical chlamydospores. Double rings of septins at the septation site also bear growth polarity, with the growing tip ring disassembling, while the basal ring remaining intact. Aspergillus nidulans Five septins are found in A. nidulans (AnAspAp, AnAspBp, AnAspCp, AnAspDp, AnAspEp). 
AnAspBp forms single rings at septation sites that eventually split into double rings. Additionally, AnAspBp forms a ring at sites of branch emergence which broadens into a band as the branch grows. Like in C. albicans, double rings reflect polarity of the hypha. In the case of Aspergillus nidulans polarity is conveyed by disassembly of the more basal ring (the ring further away from the hyphal growth tip), leaving the apical ring intact, potentially as a growth guidance cue. Ashbya gossypii The ascomycete A. gossypii possesses homologues to all S. cerevisiae septins, with one being duplicated (AgCDC3, AgCDC10, AgCDC11A, AgCDC11B, AgCDC12, AgSEP7). In vivo studies of AgSep7p-GFP have revealed that septins assemble into discontinuous hyphal rings close to growing tips and sites of branch formation, and into asymmetric structures at the base of branching points. Rings are made of filaments which are long and diffuse close to growing tips and short and compact further away from the tip. During septum formation, the septin ring splits into two to form a double ring. Agcdc3Δ, Agcdc10Δ and Agcdc12Δ deletion mutants display aberrant morphology and are defective for actin-ring formation, chitin-ring formation, and sporulation. Due to the lack of septa, septin deletion mutants are highly sensitive, and damage of a single hypha can result in complete lysis of a young mycelium. In animals In contrast to septins in yeast, and in contrast to other cytoskeletal components of animals, septins do not form a continuous network in cells, but several dispersed ones in the cytoplasm of the cell cortex. These are integrated with actin bundles and microtubules. For example, the actin bundling protein anillin is required for correct spatial control of septin organization. In the sperm cells of mammals, septins form a stable ring called annulus in the tail. In mice (and potentially in humans, too), defective annulus formation leads to male infertility. Human In humans, septins are involved in cytokinesis, cilium formation and neurogenesis through the capability to recruit other proteins or serve as a diffusion barrier. There are 13 different human genes coding for septins. The septin proteins produced by these genes are grouped into four subfamilies each named after its founding member: (i) SEPT2 (SEPT1, SEPT4, SEPT5), (ii) SEPT3 (SEPT9, SEPT12), (iii) SEPT6 (SEPT8, SEPT10, SEPT11, SEPT14), and (iv) SEPT7. Septin protein complexes are assembled to form either hetero-hexamers (incorporating monomers selected from three different groups and the monomer from each group is present in two copies; 3 x 2 = 6) or hetero-octamers (monomers from four different groups, each monomer present in two copies; 4 x 2 = 8). These hetero-oligomers in turn form higher-order structures such as filaments and rings. Septins form cage-like structures around bacterial pathogens, immobilizing harmful microbes and preventing them from invading healthy cells. This cellular defence system could potentially be exploited to create therapies for dysentery and other illnesses. For example, Shigella is a bacterium that causes lethal diarrhoea in humans. To propagate from cell to cell, Shigella bacteria develop actin-polymer 'tails', which propel the microbes and allow them to gain entry into neighbouring host cells. As part of the immune response, human cells produce a cell-signalling protein called TNF-α which trigger thick bundles of septin filaments to encircle the microbes within the infected host cell. 
Microbes that become trapped in these septin cages are broken down by autophagy. Disruptions in septins and mutations in the genes that code for them could be involved in causing leukaemia, colon cancer and neurodegenerative conditions such as Parkinson's disease and Alzheimer's disease. Potential therapies for these, as well as for bacterial conditions such as dysentery caused by Shigella, might bolster the body’s immune system with drugs that mimic the behaviour of TNF-α and allow the septin cages to proliferate. Caenorhabditis elegans In the nematode worm Caenorhabditis elegans there are two genes coding for septins, and septin complexes contain the two different septins in a tetrameric UNC59-UNC61-UNC61-UNC59 complex. Septins in C.elegans concentrate at the cleavage furrow and the spindle midbody during cell division. Septins are also involved in cell migration and axon guidance in C.elegans. In mitochondria The septin localized in the mitochondria is called mitochondrial septin (M-septin). It was identified as a CRMP/CRAM-interacting protein in the developing rat brain. History The septins were discovered in 1970 by Leland H. Hartwell and colleagues in a screen for temperature-sensitive mutants affecting cell division (cdc mutants) in yeast (Saccharomyces cerevisiae). The screen revealed four mutants which prevented cytokinesis at restrictive temperature. The corresponding genes represent the four original septins, ScCDC3, ScCDC10, ScCDC11, and ScCDC12. Despite disrupted cytokinesis, the cells continued budding, DNA synthesis, and nuclear division, which resulted in large multinucleate cells with multiple, elongated buds. In 1976, analysis of electron micrographs revealed ~20 evenly spaced striations of 10-nm filaments around the mother-bud neck in wild-type but not in septin-mutant cells. Immunofluorescence studies revealed that the septin proteins colocalize into a septin ring at the neck. The localization of all four septins is disrupted in conditional Sccdc3 and Sccdc12 mutants, indicating interdependence of the septin proteins. Strong support for this finding was provided by biochemical studies: The four original septins co-purified on affinity columns, together with a fifth septin protein, encoded by ScSEP7 or ScSHS1. Purified septins from budding yeast, Drosophila, Xenopus, and mammalian cells are able to self associate in vitro to form filaments. How the septins interact in vitro to form hetero-oligomers that assemble into filaments was studied in detail in S. cerevisiae. Micrographs of purified filaments raised the possibility that the septins are organized in parallel to the mother-bud axis. The 10-nm striations seen on electron micrographs may be the result of lateral interaction between the filaments. Mutant strains lacking factors important for septin organization support this view. Instead of continuous rings, the septins form bars oriented along the mother-bud axis in deletion mutants of ScGIN4, ScNAP1 and ScCLA4. References Further reading Cell biology Cell cycle Proteins Cellular processes Cytoskeleton
Septin
[ "Chemistry", "Biology" ]
3,206
[ "Biomolecules by chemical classification", "Cell biology", "Cellular processes", "Molecular biology", "Proteins", "Cell cycle" ]
8,301,902
https://en.wikipedia.org/wiki/Cleaning%20validation
Cleaning validation is the methodology used to assure that a cleaning process removes chemical and microbial residues of the active, inactive or detergent ingredients of the product manufactured in a piece of equipment, the cleaning aids utilized in the cleaning process and the microbial attributes. All residues are removed to predetermined levels to ensure the quality of the next product manufactured is not compromised by residues from the previous product and the quality of future products using the equipment, to prevent cross-contamination and as a good manufacturing practice requirement. The U.S. Food and Drug Administration (FDA) has strict regulations about cleaning validation. For example, FDA requires firms to have written general procedures on how cleaning processes will be validated. Also, FDA expects the general validation procedures to address who is responsible for performing and approving the validation study, the acceptance criteria, and when revalidation will be required. FDA also requires firms to conduct the validation studies in accordance with the protocols and to document the results of studies. The evaluation of cleaning validation is also strictly regulated; it usually covers equipment design, written cleaning procedures, analytical methods and sampling, each of which has its own strict rules and requirements. Acceptance criteria for cleaning validation protocols consider limits for chemicals and actives, limits for bioburden, visual cleanliness of surfaces, and the demonstration of consistency when executing the cleaning procedure. Regarding the establishment of limits, FDA does not intend to set acceptance specifications or methods for determining whether a cleaning process is validated. Current expectations for setting cleaning limits include the application of risk management principles and the consideration of Health Based Exposure Limits as the basis for setting cleaning limits for actives. Other limits that have been mentioned by industry include analytical detection levels such as 10 PPM, biological activity levels such as 1/1000 of the normal therapeutic dose, and organoleptic levels. See also Process validation References Quality
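As an illustration of how the dose-based (1/1000 of the therapeutic dose) and 10 ppm criteria mentioned above are often turned into numerical carryover limits, here is a minimal Python sketch. The formulas, variable names and numbers are common industry conventions assumed for the example, not requirements taken from FDA guidance or from this article.

```python
# Illustrative sketch only: one common way the dose-based (1/1000 dose) and 10 ppm
# criteria are applied to compute a maximum allowable carryover (MACO). The exact
# formulas, names, and values below are assumptions for illustration.

def dose_based_limit_mg(min_daily_dose_prev_mg, max_daily_dose_next_mg,
                        batch_size_next_mg, safety_factor=1000):
    """MACO of the previous active into the next batch, based on 1/safety_factor
    of the previous product's minimum therapeutic daily dose."""
    return (min_daily_dose_prev_mg * batch_size_next_mg) / (safety_factor * max_daily_dose_next_mg)

def ppm_based_limit_mg(batch_size_next_kg, limit_ppm=10):
    """MACO if no more than `limit_ppm` mg of residue per kg of next product is allowed."""
    return limit_ppm * batch_size_next_kg

if __name__ == "__main__":
    maco_dose = dose_based_limit_mg(50.0, 200.0, 100_000_000.0)  # 100 kg next batch, in mg
    maco_ppm = ppm_based_limit_mg(100.0)                          # same batch, in kg
    # The stricter (smaller) candidate limit would normally be carried forward.
    print(f"dose-based MACO: {maco_dose:.0f} mg, 10 ppm MACO: {maco_ppm:.0f} mg")
    print(f"working limit:   {min(maco_dose, maco_ppm):.0f} mg")
```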
Cleaning validation
[ "Chemistry" ]
378
[ "Pharmacology", "Pharmacology stubs", "Medicinal chemistry stubs" ]
8,302,382
https://en.wikipedia.org/wiki/Contraharmonic%20mean
In mathematics, a contraharmonic mean (or antiharmonic mean) is a function complementary to the harmonic mean. The contraharmonic mean is a special case of the Lehmer mean Lp, where p = 2. Definition The contraharmonic mean of a set of positive real numbers is defined as the arithmetic mean of the squares of the numbers divided by the arithmetic mean of the numbers: C(x1, ..., xn) = (x1² + x2² + ... + xn²) / (x1 + x2 + ... + xn). Two-variable formulae From the formulas for the arithmetic mean A(a, b) = (a + b)/2 and harmonic mean H(a, b) = 2ab/(a + b) of two variables we have C(a, b) = (a² + b²)/(a + b). Notice that for two variables the average of the harmonic and contraharmonic means is exactly equal to the arithmetic mean: (H(a, b) + C(a, b))/2 = A(a, b). As a gets closer to 0 then H(a, b) also gets closer to 0. The harmonic mean is very sensitive to low values. On the other hand, the contraharmonic mean is sensitive to larger values, so as a approaches 0 then C(a, b) approaches b (so their average remains A(a, b)). There are two other notable relationships between 2-variable means. First, the geometric mean of the arithmetic and harmonic means is equal to the geometric mean of the two values: sqrt(A(a, b) · H(a, b)) = G(a, b). The second relationship is that the geometric mean of the arithmetic and contraharmonic means is the root mean square: sqrt(A(a, b) · C(a, b)) = RMS(a, b). The contraharmonic mean of two variables can be constructed geometrically using a trapezoid. Additional constructions The contraharmonic mean can be constructed on a circle similar to the way the Pythagorean means of two variables are constructed. The contraharmonic mean is the remainder of the diameter on which the harmonic mean lies. History The contraharmonic mean was discovered by the Greek mathematician Eudoxus in the 4th century BCE. Properties The contraharmonic mean satisfies characteristic properties of a mean of some list of positive values x: it lies between the minimum and maximum of the list, min(x) ≤ C(x) ≤ max(x), and it is homogeneous, C(kx1, ..., kxn) = k·C(x1, ..., xn). The first property implies the fixed point property: for all k > 0, C(k, k, ..., k) = k. It is not monotonic − increasing a value of xk can decrease the value of the contraharmonic mean. For instance, C(1, 4) = 17/5 = 3.4 but C(2, 4) = 20/6 ≈ 3.33. The contraharmonic mean is higher in value than the arithmetic mean and also higher than the root mean square: H(x) ≤ G(x) ≤ L(x) ≤ A(x) ≤ R(x) ≤ C(x), where x is a list of values, H is the harmonic mean, G is geometric mean, L is the logarithmic mean, A is the arithmetic mean, R is the root mean square and C is the contraharmonic mean. Unless all values of x are the same, the ≤ signs above can be replaced by <. The name contraharmonic may be due to the fact that when taking the mean of only two variables, the contraharmonic mean is as high above the arithmetic mean as the arithmetic mean is above the harmonic mean (i.e., the arithmetic mean of the two variables is equal to the arithmetic mean of their harmonic and contraharmonic means). Relationship to arithmetic mean and variance The contraharmonic mean of a random variable is equal to the sum of the arithmetic mean and the variance divided by the arithmetic mean: C = μ + σ²/μ = (μ² + σ²)/μ. The ratio of the variance and the arithmetic mean was proposed as a test statistic by Clapham. Since the variance is always ≥ 0 the contraharmonic mean is always greater than or equal to the arithmetic mean. Other relationships Any integer contraharmonic mean of two different positive integers is the hypotenuse of a Pythagorean triple, while any hypotenuse of a Pythagorean triple is a contraharmonic mean of two different positive integers. It is also related to Katz's statistic Jn, where m is the mean, s² the variance and n is the sample size; Jn is asymptotically normally distributed with a mean of zero and variance of 1. Uses in statistics The problem of a size-biased sample was discussed by Cox in 1969 on a problem of sampling fibres.
The expectation of a size-biased sample is equal to its contraharmonic mean, and the contraharmonic mean is also used to estimate bias fields in multiplicative models, rather than the arithmetic mean as used in additive models. The contraharmonic mean can be used to average the intensity value of neighbouring pixels in imaging, so as to reduce noise in images and make them clearer to the eye. The probability of a fibre being sampled is proportional to its length. Because of this the usual sample mean (arithmetic mean) is a biased estimator of the true mean. To see this, consider the length-weighted distribution g(x) = x f(x) / m, where f(x) is the true population distribution and m is its mean. Taking the usual expectation of the mean here gives the contraharmonic mean rather than the usual (arithmetic) mean of the sample, since E_g(x) = E_f(x²) / E_f(x) = C. This problem can be overcome by taking instead the expectation of the harmonic mean (1/x). Under the length-weighted distribution the expectation of 1/x is E_g(1/x) = 1/m and its variance is Var_g(1/x) = E_f(1/x)/m − 1/m², where E_f is the expectation operator with respect to the unweighted distribution. Asymptotically the resulting estimator of 1/m is distributed normally. The asymptotic efficiency of length-biased sampling, compared to random sampling, depends on the underlying distribution: if f(x) is log normal the efficiency is 1, while if the population is gamma distributed with index b, the efficiency is . This distribution has been used in modelling consumer behaviour as well as quality sampling. It has been used alongside the exponential distribution in transport planning in the form of its inverse. See also Harmonic mean Lehmer mean Pythagorean means References External links Means
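A short Python check of the two-variable identities and orderings stated above; the values a = 3 and b = 12 are arbitrary, chosen only so the means come out as simple numbers.

```python
# Numerical check of: C(a, b) = (a^2 + b^2) / (a + b), (H + C) / 2 = A,
# sqrt(A * H) = G, sqrt(A * C) = RMS, and the ordering H <= G <= A <= RMS <= C.
from math import sqrt, isclose

def contraharmonic(xs):
    """Contraharmonic mean: arithmetic mean of squares over arithmetic mean."""
    return sum(x * x for x in xs) / sum(xs)

def arithmetic(xs):  return sum(xs) / len(xs)
def harmonic(xs):    return len(xs) / sum(1 / x for x in xs)
def geometric(xs):
    prod = 1.0
    for x in xs:
        prod *= x
    return prod ** (1 / len(xs))
def rms(xs):         return sqrt(sum(x * x for x in xs) / len(xs))

a, b = 3.0, 12.0
A, H, G = arithmetic([a, b]), harmonic([a, b]), geometric([a, b])
C, R = contraharmonic([a, b]), rms([a, b])
assert isclose((H + C) / 2, A)   # C is as far above A as H is below it
assert isclose(sqrt(A * H), G)   # geometric mean of A and H equals G
assert isclose(sqrt(A * C), R)   # geometric mean of A and C equals the root mean square
assert H <= G <= A <= R <= C     # ordering of the means
print(f"A={A}, H={H:.2f}, G={G}, RMS={R:.3f}, C={C:.2f}")

# Non-monotonicity: raising the smaller value can lower the contraharmonic mean.
assert contraharmonic([2, 4]) < contraharmonic([1, 4])
```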
Contraharmonic mean
[ "Physics", "Mathematics" ]
1,112
[ "Means", "Mathematical analysis", "Point (geometry)", "Geometric centers", "Symmetry" ]
8,305,325
https://en.wikipedia.org/wiki/Nickel%20titanium
Nickel titanium, also known as nitinol, is a metal alloy of nickel and titanium, where the two elements are present in roughly equal atomic percentages. Different alloys are named according to the weight percentage of nickel; e.g., nitinol 55 and nitinol 60. Nitinol alloys exhibit two closely related and unique properties: the shape memory effect and superelasticity (also called pseudoelasticity). Shape memory is the ability of nitinol to undergo deformation at one temperature, stay in its deformed shape when the external force is removed, then recover its original, undeformed shape upon heating above its "transformation temperature." Superelasticity is the ability for the metal to undergo large deformations and immediately return to its undeformed shape upon removal of the external load. Nitinol can undergo elastic deformations 10 to 30 times larger than alternative metals. Whether nitinol behaves with shape memory effect or superelasticity depends on whether it is above its transformation temperature during the action. Nitinol behaves with the shape memory effect when it is colder than its transformation temperature, and superelastically when it is warmer than it. History The word "nitinol" is derived from its composition and its place of discovery, Nickel Titanium - Naval Ordnance Laboratory. William J. Buehler along with Frederick E. Wang, discovered its properties during research at the Naval Ordnance Laboratory in 1959. Buehler was attempting to make a better missile nose cone, which could resist fatigue, heat and the force of impact. Having found that a 1:1 alloy of nickel and titanium could do the job, in 1961 he presented a sample at a laboratory management meeting. The sample, folded up like an accordion, was passed around and flexed by the participants. One of them applied heat from his pipe lighter to the sample and, to everyone's surprise, the accordion-shaped strip contracted and took its previous shape. While potential applications for nitinol were realized immediately, practical efforts to commercialize the alloy did not take place until two decades later in the 1980s, largely due to the extraordinary difficulty of melting, processing and machining the alloy. The discovery of the shape-memory effect in general dates back to 1932, when Swedish chemist Arne Ölander first observed the property in gold–cadmium alloys. The same effect was observed in Cu-Zn (brass) in the early 1950s. Mechanism Nitinol's unusual properties are derived from a reversible solid-state phase transformation known as a martensitic transformation, between two different martensite crystal phases, requiring of mechanical stress. At high temperatures, nitinol assumes an interpenetrating simple cubic structure referred to as austenite (also known as the parent phase). At low temperatures, nitinol spontaneously transforms to a more complicated monoclinic crystal structure known as martensite (daughter phase). There are four transition temperatures associated to the austenite-to-martensite and martensite-to-austenite transformations. Starting from full austenite, martensite begins to form as the alloy is cooled to the so-called martensite start temperature, or Ms, and the temperature at which the transformation is complete is called the martensite finish temperature, or Mf. When the alloy is fully martensite and is subjected to heating, austenite starts to form at the austenite start temperature, As, and finishes at the austenite finish temperature, Af. The cooling/heating cycle shows thermal hysteresis. 
The hysteresis width depends on the precise nitinol composition and processing. Its typical value is a temperature range spanning about but it can be reduced or amplified by alloying and processing. Crucial to nitinol properties are two key aspects of this phase transformation. First is that the transformation is "reversible", meaning that heating above the transformation temperature will revert the crystal structure to the simpler austenite phase. The second key point is that the transformation in both directions is instantaneous. Martensite's crystal structure (known as a monoclinic, or B19' structure) has the unique ability to undergo limited deformation in some ways without breaking atomic bonds. This type of deformation is known as twinning, which consists of the rearrangement of atomic planes without causing slip, or permanent deformation. It is able to undergo about 6–8% strain in this manner. When martensite is reverted to austenite by heating, the original austenitic structure is restored, regardless of whether the martensite phase was deformed. Thus the shape of the high temperature austenite phase is "remembered," even though the alloy is severely deformed at a lower temperature. A great deal of pressure can be produced by preventing the reversion of deformed martensite to austenite—from to, in many cases, more than . One of the reasons that nitinol works so hard to return to its original shape is that it is not just an ordinary metal alloy, but what is known as an intermetallic compound. In an ordinary alloy, the constituents are randomly positioned in the crystal lattice; in an ordered intermetallic compound, the atoms (in this case, nickel and titanium) have very specific locations in the lattice. The fact that nitinol is an intermetallic is largely responsible for the complexity in fabricating devices made from the alloy. To fix the original "parent shape," the alloy must be held in position and heated to about . This process is usually called shape setting. A second effect, called superelasticity or pseudoelasticity, is also observed in nitinol. This effect is the direct result of the fact that martensite can be formed by applying a stress as well as by cooling. Thus in a certain temperature range, one can apply a stress to austenite, causing martensite to form while at the same time changing shape. In this case, as soon as the stress is removed, the nitinol will spontaneously return to its original shape. In this mode of use, nitinol behaves like a super spring, possessing an elastic range 10 to 30 times greater than that of a normal spring material. There are, however, constraints: the effect is only observed up to about above the Af temperature. This upper limit is referred to as Md, which corresponds to the highest temperature in which it is still possible to stress-induce the formation of martensite. Below Md, martensite formation under load allows superelasticity due to twinning. Above Md, since martensite is no longer formed, the only response to stress is slip of the austenitic microstructure, and thus permanent deformation. Nitinol is typically composed of approximately 50 to 51% nickel by atomic percent (55 to 56% weight percent). Making small changes in the composition can change the transition temperature of the alloy significantly. Transformation temperatures in nitinol can be controlled to some extent, where Af temperature ranges from about . 
Thus, it is common practice to refer to a nitinol formulation as "superelastic" or "austenitic" if Af is lower than a reference temperature, while as "shape memory" or "martensitic" if higher. The reference temperature is usually defined as the room temperature or the human body temperature. One often-encountered effect regarding nitinol is the so-called R-phase. The R-phase is another martensitic phase that competes with the martensite phase mentioned above. Because it does not offer the large memory effects of the martensite phase, it is usually of no practical use. Manufacturing Nitinol is exceedingly difficult to make, due to the exceptionally tight compositional control required, and the tremendous reactivity of titanium. Every atom of titanium that combines with oxygen or carbon is an atom that is robbed from the NiTi lattice, thus shifting the composition and making the transformation temperature lower. There are two primary melting methods used today. Vacuum arc remelting (VAR) is done by striking an electrical arc between the raw material and a water-cooled copper strike plate. Melting is done in a high vacuum, and the mold itself is water-cooled copper. Vacuum induction melting (VIM) is done by using alternating magnetic fields to heat the raw materials in a crucible (generally carbon). This is also done in a high vacuum. While both methods have advantages, it has been demonstrated that an industrial state-of-the-art VIM melted material has smaller inclusions than an industrial state-of-the-art VAR one, leading to a higher fatigue resistance. Other research reports that VAR employing extreme high-purity raw materials may lead to a reduced number of inclusions and thus to an improved fatigue behavior. Other methods are also used on a boutique scale, including plasma arc melting, induction skull melting, and e-beam melting. Physical vapour deposition is also used on a laboratory scale. Heat treating nitinol is delicate and critical. It is a knowledge-intensive process to fine-tune the transformation temperatures. Aging time and temperature control the precipitation of various Ni-rich phases, and thus control how much nickel resides in the NiTi lattice; by depleting the matrix of nickel, aging increases the transformation temperature. The combination of heat treatment and cold working is essential in controlling the properties of nitinol products. Challenges Fatigue failures of nitinol devices are a constant subject of discussion. Because it is the material of choice for applications requiring enormous flexibility and motion (e.g., peripheral stents, heart valves, smart thermomechanical actuators and electromechanical microactuators), it is necessarily exposed to much greater fatigue strains compared to other metals. While the strain-controlled fatigue performance of nitinol is superior to all other known metals, fatigue failures have been observed in the most demanding applications, and a great deal of effort is underway to better understand and define the durability limits of nitinol. Nitinol is half nickel, and thus there has been a great deal of concern in the medical industry regarding the release of nickel, a known allergen and possible carcinogen. (Nickel is also present in substantial amounts in stainless steel and cobalt-chrome alloys also used in the medical industry.)
When treated (via electropolishing or passivation), nitinol forms a very stable protective TiO2 layer that acts as an effective and self-healing barrier against ion exchange; repeatedly showing that nitinol releases nickel at a slower pace than stainless steel, for example. Early Nitinol medical devices were made without electropolishing, and corrosion was observed. Today's nitinol vascular self-expandable metallic stents show no evidence of corrosion or nickel release, and outcomes in patients with and without nickel allergies are indistinguishable. There are constant and long-running discussions regarding inclusions in nitinol, both TiC and Ti2NiOx. As in all other metals and alloys, inclusions can be found in nitinol. The size, distribution and type of inclusions can be controlled to some extent. Theoretically, smaller, rounder, and fewer inclusions should lead to increased fatigue durability. In literature, some early works report to have failed to show measurable differences, while novel studies demonstrate a dependence of fatigue resistance on the typical inclusion size in an alloy. Nitinol is difficult to weld, both to itself and other materials. Laser welding nitinol to itself is a relatively routine process. Strong joints between NiTi wires and stainless steel wires have been made using nickel filler. Laser and tungsten inert gas (TIG) welds have been made between NiTi tubes and stainless steel tubes. More research is ongoing into other processes and other metals to which nitinol can be welded. Actuation frequency of nitinol is dependent on heat management, especially during the cooling phase. Numerous methods are used to increase the cooling performance, such as forced air, flowing liquids, thermoelectric modules (i.e. Peltier or semiconductor heat pumps), heat sinks, conductive materials and higher surface-to-volume ratio (improvements up to 3.3 Hz with very thin wires and up to 100 Hz with thin films of nitinol). The fastest nitinol actuation recorded was carried by a high voltage capacitor discharge which heated an SMA wire in a manner of microseconds, and resulted in a complete phase transformation (and high velocities) in a few milliseconds. Recent advances have shown that processing of nitinol can expand thermomechanical capabilities, allowing for multiple shape memories to be embedded within a monolithic structure. Research on multi-memory technology is on-going and may deliver enhanced shape memory devices in the near future, and new materials and material structures, such as hybrid shape memory materials (SMMs) and shape memory composites (SMCs). Applications There are four commonly used types of applications for nitinol: Free recovery Nitinol is deformed at a low temperature, remains deformed, and then is heated to recover its original shape through the shape memory effect. Constrained recovery Similar to free recovery, except that recovery is rigidly prevented and thus a stress is generated. Work production The alloy is allowed to recover, but to do so it must act against a force (thus doing work). Superelasticity Nitinol acts as a super spring through the superelastic effect. Superelastic materials undergo stress-induced transformation and are commonly recognized for their "shape-memory" property. Due to its superelasticity, NiTi wires exhibit "elastocaloric" effect, which is stress-triggered heating/cooling. NiTi wires are currently under research as the most promising material for the technology. 
The process begins with tensile loading of the wire, which causes fluid (within the wire) to flow to the HHEX (hot heat exchanger). Simultaneously, heat is expelled, which can be used to heat the surroundings. In the reverse process, tensile unloading of the wire leads to fluid flowing to the CHEX (cold heat exchanger), causing the NiTi wire to absorb heat from the surroundings. The temperature of the surroundings can therefore be decreased (cooled). Elastocaloric devices are often compared with magnetocaloric devices as new methods of efficient heating/cooling. An elastocaloric device made with NiTi wires has an advantage over a magnetocaloric device made with gadolinium in its specific cooling power (at 2 Hz), which is about 70 times higher (7 kWh/kg vs. 0.1 kWh/kg). However, elastocaloric devices made with NiTi wires also have limitations, such as a short fatigue life and a dependency on large tensile forces (which is energy consuming). In 1989 a survey was conducted in the United States and Canada that involved seven organizations. The survey focused on predicting the future technology, market, and applications of SMAs. The companies predicted the following uses of nitinol in a decreasing order of importance: (1) couplings, (2) biomedical and medical, (3) toys, demonstration and novelty items, (4) actuators, (5) heat engines, (6) sensors, (7) cryogenically activated die and bubble memory sockets, and finally (8) lifting devices. Thermal and electrical actuators Nitinol can be used to replace conventional actuators (solenoids, servo motors, etc.), such as in the Stiquito, a simple hexapod robot. Nitinol springs are used in thermal valves for fluidics, where the material acts as both a temperature sensor and an actuator. It is used as an autofocus actuator in action cameras and as an optical image stabilizer in mobile phones. It is used in pneumatic valves for comfort seating and has become an industry standard. The 2014 Chevrolet Corvette incorporates nitinol actuators, which replaced heavier motorized actuators to open and close the hatch vent that releases air from the trunk, making it easier to close. Biocompatible and biomedical applications Nitinol is highly biocompatible and has properties suitable for use in orthopedic implants. Due to nitinol's unique properties, it has seen a large demand for use in less invasive medical devices. Nitinol tubing is commonly used in catheters, stents, and superelastic needles. In colorectal surgery, the material is used in devices for reconnecting the intestine after the diseased portion has been removed. Nitinol is used for devices developed by Franz Freudenthal to treat patent ductus arteriosus, blocking a blood vessel that bypasses the lungs and has failed to close after birth in an infant. In dentistry, the material is used in orthodontics for brackets and wires connecting the teeth. Once the SMA wire is placed in the mouth, its temperature rises to ambient body temperature. This causes the nitinol to contract back to its original shape, applying a constant force to move the teeth. These SMA wires do not need to be retightened as often as other wires because they can contract as the teeth move, unlike conventional stainless steel wires. Additionally, nitinol can be used in endodontics, where nitinol files are used to clean and shape the root canals during the root canal procedure.
Because of the high fatigue tolerance and flexibility of nitinol, it greatly decreases the possibility of an endodontic file breaking inside the tooth during root canal treatment, thus improving safety for the patient. Another significant application of nitinol in medicine is in stents: a collapsed stent can be inserted into an artery or vein, where body temperature warms the stent and the stent returns to its original expanded shape following removal of a constraining sheath; the stent then helps support the artery or vein to improve blood flow. It is also used as a replacement for sutures: nitinol wire can be woven through two structures and then allowed to transform into its preformed shape, which should hold the structures in place. Similarly, collapsible structures composed of braided, microscopically thin nitinol filaments can be used in neurovascular interventions such as stroke thrombolysis, embolization, and intracranial angioplasty. Nitinol wire is also applied in female contraception, specifically in intrauterine devices, owing to its small, flexible nature and its high efficacy. Damping systems in structural engineering Superelastic nitinol finds a variety of applications in civil structures such as bridges and buildings. One such application is Intelligent Reinforced Concrete (IRC), which incorporates NiTi wires embedded within the concrete. These wires can sense cracks and contract to heal macro-sized cracks. Another application is active tuning of structural natural frequency using nitinol wires to damp vibrations. Other applications and prototypes Demonstration model heat engines have been built which use nitinol wire to produce mechanical energy from hot and cold heat sources. A prototype commercial engine, developed in the 1970s by engineer Ridgway Banks at Lawrence Berkeley National Laboratory, was named the Banks Engine. Nitinol is also popular in extremely resilient glasses frames. Boeing engineers successfully flight-tested SMA-actuated morphing chevrons on the Boeing 777-300ER Quiet Technology Demonstrator 2. The Ford Motor Company has registered a US patent for what it calls a "bicycle derailleur apparatus for controlling bicycle speed". Filed on 22 April 2019, the patent depicts a front derailleur for a bicycle, devoid of cables, instead using two nitinol wires to provide the movement needed to shift gears. It is used in some novelty products, such as self-bending spoons which can be used by amateur and stage magicians to demonstrate "psychic" powers or as a practical joke, as the spoon will bend itself when used to stir tea, coffee, or any other warm liquid. Due to the high damping capacity of superelastic nitinol, it is also used as a golf club insert. Nickel titanium can be used to make the underwires for underwire bras. Nickel-titanium alloy is used in aerospace applications such as aircraft pipe joints, spacecraft antennas, fasteners, connecting components, electrical connections, and electromechanical actuators. In 1998, the golf manufacturer Ping allowed its WRX department to create the Isoforce series, which originally included a Nitinol face insert. The process was so expensive that models were sold below cost price before being quickly discontinued and replaced with cheaper aluminium and copper inserts. The Anser F, Sedona F and Darby F remain the only golf equipment ever made with Nitinol. References Further reading H.R. Chen, ed., Shape Memory Alloys: Manufacture, Properties and Applications, Nova Science Publishers, Inc., 2010. Y.Y. Chu & L.C.
Zhao, eds., Shape Memory Materials and Its [sic] Applications, Trans Tech Publications Ltd., 2002. D.C. Lagoudas, ed., Shape Memory Alloys, Springer Science+Business Media LLC, 2008. K. Ōtsuka & C.M. Wayman, eds., Shape Memory Materials, Cambridge University Press, 1998. Sai V. Raj, Low Temperature Creep of Hot-extruded Near-stoichiometric NiTi Shape Memory Alloy, National Aeronautics and Space Administration, Glenn Research Center, 2013. Gerald Julien (Nitinol Technologies, Inc., Edgewood, WA), Manufacturing of Nitinol Parts & Forms, US Patent 6,422,010: a process of making parts and forms of Type 60 Nitinol having a shape memory effect. External links Society of Shape Memory and Superelastic Technologies Nitinol Resource Library Physical properties of nitinol Nitinol Technical Resource Library Literature on Nitinol Wire Nitinol-Tubing How NASA Reinvented The Wheel - Shape Memory Alloys Demonstration of shape memory in nitinol and animated depiction of the martensite-austenite transition Dental materials Biomaterials Intermetallics Smart materials
Nickel titanium
[ "Physics", "Chemistry", "Materials_science", "Engineering", "Biology" ]
4,642
[ "Biomaterials", "Dental materials", "Inorganic compounds", "Metallurgy", "Materials science", "Materials", "Intermetallics", "Condensed matter physics", "Alloys", "Smart materials", "Matter", "Medical technology" ]
8,309,686
https://en.wikipedia.org/wiki/Coordination%20number
In chemistry, crystallography, and materials science, the coordination number, also called ligancy, of a central atom in a molecule or crystal is the number of atoms, molecules or ions bonded to it. The ion/molecule/atom surrounding the central ion/molecule/atom is called a ligand. This number is determined somewhat differently for molecules than for crystals. For molecules and polyatomic ions the coordination number of an atom is determined by simply counting the other atoms to which it is bonded (by either single or multiple bonds). For example, [Cr(NH3)2Cl2Br2]− has Cr3+ as its central cation, which has a coordination number of 6 and is described as hexacoordinate. The common coordination numbers are 4, 6 and 8. Molecules, polyatomic ions and coordination complexes In chemistry, coordination number, defined originally in 1893 by Alfred Werner, is the total number of neighbors of a central atom in a molecule or ion. The concept is most commonly applied to coordination complexes. Simple and commonplace cases The most common coordination number for d-block transition metal complexes is 6. The coordination number does not distinguish the geometry of such complexes, i.e. octahedral vs trigonal prismatic. For transition metal complexes, coordination numbers range from 2 (e.g., AuI in Ph3PAuCl) to 9 (e.g., ReVII in [ReH9]2−). Metals in the f-block (the lanthanoids and actinoids) can accommodate higher coordination number due to their greater ionic radii and availability of more orbitals for bonding. Coordination numbers of 8 to 12 are commonly observed for f-block elements. For example, with bidentate nitrate ions as ligands, CeIV and ThIV form the 12-coordinate ions [Ce(NO3)6]2− (ceric ammonium nitrate) and [Th(NO3)6]2−. When the surrounding ligands are much smaller than the central atom, even higher coordination numbers may be possible. One computational chemistry study predicted a particularly stable ion composed of a central lead ion coordinated with no fewer than 15 helium atoms. Among the Frank–Kasper phases, the packing of metallic atoms can give coordination numbers of up to 16. At the opposite extreme, steric shielding can give rise to unusually low coordination numbers. An extremely rare instance of a metal adopting a coordination number of 1 occurs in the terphenyl-based arylthallium(I) complex 2,6-Tipp2C6H3Tl, where Tipp is the 2,4,6-triisopropylphenyl group. Polyhapto ligands Coordination numbers become ambiguous when dealing with polyhapto ligands. For π-electron ligands such as the cyclopentadienide ion [C5H5]−, alkenes and the cyclooctatetraenide ion [C8H8]2−, the number of adjacent atoms in the π-electron system that bind to the central atom is termed the hapticity. In ferrocene the hapticity, η, of each cyclopentadienide anion is five, Fe(η5-C5H5)2. Various ways exist for assigning the contribution made to the coordination number of the central iron atom by each cyclopentadienide ligand. The contribution could be assigned as one since there is one ligand, or as five since there are five neighbouring atoms, or as three since there are three electron pairs involved. Normally the count of electron pairs is taken. Surfaces and reconstruction The coordination numbers are well defined for atoms in the interior of a crystal lattice: one counts the nearest neighbors in all directions. The number of neighbors of an interior atom is termed the bulk coordination number. 
For surfaces, the number of neighbors is more limited, so the surface coordination number is smaller than the bulk coordination number. Often the surface coordination number is unknown or variable. The surface coordination number is also dependent on the Miller indices of the surface. In a body-centered cubic (BCC) crystal, the bulk coordination number is 8, whereas, for the (100) surface, the surface coordination number is 4. Experimental determination A common way to determine the coordination number of an atom is by X-ray crystallography. Related techniques include neutron or electron diffraction. The coordination number of an atom can be determined straightforwardly by counting nearest neighbors. α-Aluminium has a regular cubic close packed structure, fcc, where each aluminium atom has 12 nearest neighbors, 6 in the same plane and 3 above and below and the coordination polyhedron is a cuboctahedron. α-Iron has a body centered cubic structure where each iron atom has 8 nearest neighbors situated at the corners of a cube. The two most common allotropes of carbon have different coordination numbers. In diamond, each carbon atom is at the centre of a regular tetrahedron formed by four other carbon atoms, the coordination number is four, as for methane. Graphite is made of two-dimensional layers in which each carbon is covalently bonded to three other carbons; atoms in other layers are further away and are not nearest neighbours, giving a coordination number of 3. For chemical compounds with regular lattices such as sodium chloride and caesium chloride, a count of the nearest neighbors gives a good picture of the environment of the ions. In sodium chloride each sodium ion has 6 chloride ions as nearest neighbours (at 276 pm) at the corners of an octahedron and each chloride ion has 6 sodium atoms (also at 276 pm) at the corners of an octahedron. In caesium chloride each caesium has 8 chloride ions (at 356 pm) situated at the corners of a cube and each chloride has eight caesium ions (also at 356 pm) at the corners of a cube. Complications In some compounds the metal-ligand bonds may not all be at the same distance. For example in PbCl2, the coordination number of Pb2+ could be said to be seven or nine, depending on which chlorides are assigned as ligands. Seven chloride ligands have Pb-Cl distances of 280–309 pm. Two chloride ligands are more distant, with a Pb-Cl distances of 370 pm. In some cases a different definition of coordination number is used that includes atoms at a greater distance than the nearest neighbours. The very broad definition adopted by the International Union of Crystallography, IUCR, states that the coordination number of an atom in a crystalline solid depends on the chemical bonding model and the way in which the coordination number is calculated. Some metals have irregular structures. For example, zinc has a distorted hexagonal close packed structure. Regular hexagonal close packing of spheres would predict that each atom has 12 nearest neighbours and a triangular orthobicupola (also called an anticuboctahedron or twinned cuboctahedron) coordination polyhedron. In zinc there are only 6 nearest neighbours at 266 pm in the same close packed plane with six other, next-nearest neighbours, equidistant, three in each of the close packed planes above and below at 291 pm. It is considered to be reasonable to describe the coordination number as 12 rather than 6. 
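The bulk and surface coordination numbers quoted above for a body-centered cubic crystal (8 in the interior, 4 for an atom in a (100) surface) can be checked by directly counting nearest neighbours in a model lattice. The following Python sketch is only an illustration; the block size, the cut plane and the helper names are arbitrary choices made for this example.

import numpy as np

def bcc_sites(n, a=1.0):
    """Corner and body-centre sites of an n x n x n block of BCC unit cells."""
    sites = []
    for i in range(n):
        for j in range(n):
            for k in range(n):
                sites.append((i, j, k))                    # cell corner
                sites.append((i + 0.5, j + 0.5, k + 0.5))  # body centre
    return np.array(sites, dtype=float) * a

def coordination_number(sites, site, r_nn, tol=1e-6):
    """Count sites lying at the nearest-neighbour distance r_nn from `site`."""
    d = np.linalg.norm(sites - site, axis=1)
    return int(np.sum(np.abs(d - r_nn) < tol))

a = 1.0
r_nn = np.sqrt(3.0) / 2.0 * a                      # BCC nearest-neighbour distance
bulk = bcc_sites(8, a)
surface = bulk[bulk[:, 2] <= 4.0 + 1e-9]           # crystal cut along a (100)-type plane at z = 4

atom = np.array([4.0, 4.0, 4.0])                   # corner site: interior of `bulk`, in the top plane of `surface`
print(coordination_number(bulk, atom, r_nn))       # -> 8 (bulk coordination number)
print(coordination_number(surface, atom, r_nn))    # -> 4 (surface coordination number)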
Similar considerations can be applied to the regular body-centred cubic structure, where in addition to the 8 nearest neighbors there are 6 more, approximately 15% more distant, and in this case the coordination number is often considered to be 14. Many chemical compounds have distorted structures. Nickel arsenide, NiAs, has a structure where nickel and arsenic atoms are 6-coordinate. Unlike sodium chloride, where the chloride ions are cubic close packed, the arsenic anions are hexagonal close packed. The nickel ions are 6-coordinate with a distorted octahedral coordination polyhedron where columns of octahedra share opposite faces. The arsenic ions are not octahedrally coordinated but have a trigonal prismatic coordination polyhedron. A consequence of this arrangement is that the nickel atoms are rather close to each other. Other compounds that share this structure, or a closely related one, are some transition metal sulfides such as FeS and CoS, as well as some intermetallics. In cobalt(II) telluride, CoTe, the six tellurium and two cobalt atoms are all equidistant from the central Co atom. Two other examples of commonly encountered chemicals are Fe2O3 and TiO2. Fe2O3 has a crystal structure that can be described as having a near close packed array of oxygen atoms with iron atoms filling two thirds of the octahedral holes. However, each iron atom has 3 nearest neighbors and 3 others a little further away. The structure is quite complex: the oxygen atoms are coordinated to four iron atoms, and the iron atoms in turn share vertices, edges and faces of the distorted octahedra. TiO2 has the rutile structure. The titanium atoms are 6-coordinate, with 2 atoms at 198.3 pm and 4 at 194.6 pm, in a slightly distorted octahedron. The octahedra around the titanium atoms share edges and vertices to form a 3-D network. The oxide ions are 3-coordinate in a trigonal planar configuration. Usage in quasicrystal, liquid and other disordered systems The coordination number of systems with disorder cannot be precisely defined. The first coordination number can be defined using the radial distribution function g(r) as n1 = 4πρ ∫ r² g(r) dr, with the integral taken from r0 to r1, where ρ is the number density, r0 is the rightmost position starting from r = 0 whereon g(r) is approximately zero, and r1 is the first minimum. Therefore, it is the area under the first peak of g(r). The second coordination number is defined similarly, as the same integral taken from r1 up to the second minimum r2. Alternative definitions for the coordination number can be found in the literature, but in essence the main idea is the same. One of those definitions is as follows: denoting the position of the first peak as rp, n1 = 8πρ ∫ r² g(r) dr with the integral taken from r0 to rp, that is, twice the area up to the peak position, which treats the first peak as symmetric. The first coordination shell is the spherical shell with radius between r0 and r1 around the central particle under investigation. References External links Meteorite Book-Glossary C A website on coordination numbers Chemical bonding Stereochemistry Molecular geometry Materials science Coordination chemistry
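In practice the integral above is evaluated numerically from a tabulated g(r) obtained from simulation or diffraction data. The Python sketch below is a minimal illustration of that calculation; the Gaussian-shaped g(r), the number density and the integration limits are made-up placeholder values, not data for any real material.

import numpy as np

def first_coordination_number(r, g, rho, r0, r1):
    """n1 = 4*pi*rho * integral of r^2 g(r) dr from r0 to r1 (area under the first peak)."""
    mask = (r >= r0) & (r <= r1)
    x = r[mask]
    f = 4.0 * np.pi * rho * x ** 2 * g[mask]
    return float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(x)))   # trapezoidal rule

# Placeholder g(r): a single Gaussian first peak standing in for measured or simulated data.
r = np.linspace(0.0, 6.0, 2000)
g = 3.0 * np.exp(-((r - 2.5) ** 2) / (2 * 0.2 ** 2))
rho = 0.05                 # number density (particles per unit volume), assumed

r0, r1 = 1.5, 3.5          # r0: where g(r) first rises from ~0; r1: first minimum after the peak
print(round(first_coordination_number(r, g, rho, r0, r1), 2))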
Coordination number
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
2,109
[ "Applied and interdisciplinary physics", "Molecules", "Molecular geometry", "Stereochemistry", "Coordination chemistry", "Materials science", "Space", "Condensed matter physics", "nan", "Spacetime", "Chemical bonding", "Matter" ]
23,925,683
https://en.wikipedia.org/wiki/Fencamine
Fencamine (Altimina, Sicoclor) is a psychostimulant drug of the amphetamine class. It is closely related to fenethylline. References Methamphetamines Norepinephrine-dopamine releasing agents Xanthines
Fencamine
[ "Chemistry" ]
61
[ "Alkaloids by chemical classification", "Xanthines" ]
23,927,032
https://en.wikipedia.org/wiki/Laninamivir
Laninamivir (CS-8958) is a neuraminidase inhibitor drug used for the treatment and prophylaxis of Influenzavirus A and Influenzavirus B. It is currently in Phase III clinical trials. It is a long-acting neuraminidase inhibitor administered by nasal inhalation. Laninamivir was approved for influenza treatment in Japan in 2010 and for prophylaxis in 2013. It is currently marketed under the name Inavir by Daiichi Sankyo. Biota Pharmaceuticals and Daiichi Sankyo co-own laninamivir. On 1 April 2011, BARDA awarded up to an estimated US$231 million to Biota Pharmaceuticals' (formerly Biota Holdings Ltd) wholly owned subsidiary, Biota Scientific Management Pty Ltd, for the advanced development of laninamivir in the US. It is under clinical evaluation in other countries. References Guanidines Neuraminidase inhibitors Dihydropyrans Acetamides
Laninamivir
[ "Chemistry" ]
204
[ "Glycobiology", "Guanidines", "Functional groups", "Neuraminidase inhibitors" ]
23,929,075
https://en.wikipedia.org/wiki/Massively%20parallel%20signature%20sequencing
Massive parallel signature sequencing (MPSS) is a procedure that is used to identify and quantify mRNA transcripts, resulting in data similar to serial analysis of gene expression (SAGE), although it employs a series of biochemical and sequencing steps that are substantially different. How it works MPSS is a method for determining expression levels of mRNA by counting the number of individual mRNA molecules produced by each gene. It is "open ended" in the sense that the identities of the RNAs to be measured are not pre-determined, as they are with gene expression microarrays. A sample of mRNA is first converted to complementary DNA (cDNA) using reverse transcriptase, which makes subsequent manipulations easier. These cDNAs are fused to a small oligonucleotide "tag", which allows the cDNA to be PCR-amplified and then coupled to microbeads. After several rounds of sequence determination, using hybridization of fluorescently labeled probes, a sequence signature of ~16–20 bp is determined from each bead. Fluorescent imaging captures the signal from all of the beads while they are affixed to a 2-dimensional surface, so DNA sequences are determined from all the beads in parallel. There is some amplification of the starting material so, in the end, approximately 1,000,000 sequence reads are obtained per experiment. Overview MPSS allows mRNA transcripts to be identified through the generation of a 17–20 bp (base pair) signature sequence adjacent to the 3'-end of the 3'-most site of the designated restriction enzyme (commonly Sau3A or DpnII). Each signature sequence is cloned onto one of a million microbeads. The technique ensures that only one type of DNA sequence is on a microbead. So if there are 50 copies of a specific transcript in the biological sample, these transcripts will be captured onto 50 different microbeads, each bead holding roughly 100,000 amplified copies of the specific signature sequence. The microbeads are then arrayed in a flow cell for sequencing and quantification. The sequence signatures are deciphered by the parallel identification of four bases at a time by hybridization to fluorescently labeled encoders. Each of the encoders has a unique label which is detected after hybridization by taking an image of the microbead array. The next step is to cleave and remove that set of four bases and reveal the next four bases for a new round of hybridization to encoders and image acquisition. The raw output is a list of 17–20 bp signature sequences that can be annotated to the human genome for gene identification. Comparison with SAGE The longer tag sequence confers a higher specificity than the classical SAGE tag of 9–10 bp. The level of unique gene expression is represented by the count of transcripts present per million molecules, similar to SAGE output. A significant advantage is the larger library size compared with SAGE. An MPSS library typically holds 1 million signature tags, which is roughly 20 times the size of a SAGE library. Some of the disadvantages related to SAGE apply to MPSS as well, such as the loss of certain transcripts due to the lack of a restriction enzyme recognition site and ambiguity in tag annotation. The high sensitivity and absolute measure of gene expression certainly favor MPSS. However, the technology is only available through Lynx Therapeutics, Inc. (later Solexa Inc. until 2006, and then Illumina). References Further reading External links https://www.ncbi.nlm.nih.gov/projects/genome/probe/doc/TechMPSS.shtml DNA sequencing methods Molecular biology
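The quantification described above ultimately reduces to counting identical signature tags and expressing each count on a per-million scale. The Python sketch below illustrates only that counting logic; the tag sequences are invented examples (each beginning with the GATC site recognised by Sau3A/DpnII), and this is not the vendor's analysis software.

from collections import Counter

def tags_per_million(signature_reads):
    """Count each distinct signature tag and express its abundance per million reads."""
    counts = Counter(signature_reads)
    total = sum(counts.values())
    return {tag: 1_000_000 * n / total for tag, n in counts.items()}

# Invented example reads (a real MPSS run yields roughly 1,000,000 signatures per experiment).
reads = ["GATCACGTACGTACGTA"] * 50 + ["GATCTTGGAACCGGTTA"] * 5 + ["GATCAAAATTTTCCCCG"] * 1
for tag, tpm in sorted(tags_per_million(reads).items(), key=lambda kv: -kv[1]):
    print(f"{tag}\t{tpm:,.0f} per million")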
Massively parallel signature sequencing
[ "Chemistry", "Biology" ]
747
[ "Genetics techniques", "DNA sequencing methods", "DNA sequencing", "Molecular biology", "Biochemistry" ]
25,254,818
https://en.wikipedia.org/wiki/Microcredit%20for%20water%20supply%20and%20sanitation
Microcredit for water supply and sanitation is the application of microcredit to provide loans to small enterprises and households in order to increase access to an improved water source and sanitation in developing countries. For background, most investments in water supply and sanitation infrastructure are financed by the public sector, but investment levels have been insufficient to achieve universal access. Commercial credit to public utilities was limited by low tariffs and insufficient cost-recovery. Microcredits are a complementary or alternative approach to allow the poor to gain access to water supply and sanitation in the aforementioned regions. Funding is allocated either to small-scale independent water-providers who generate an income stream from selling water, or to households in order to finance house connections, plumbing installations, or on-site sanitation such as latrines. Many microfinance institutions have only limited experience with financing investments in water supply and sanitation. While there have been many pilot projects in both urban and rural areas, only a small number of these have been expanded. A water connection can significantly lower a family's water expenditures, if it previously had to rely on water vendors, allowing cost-savings to repay the credit. The time previously required to physically fetch water can be put to income-generating purposes, and investments in sanitation provide health benefits that can also translate into increased income. Types There are three broad types of microcredit products in the water sector: Microcredits aiming to improve access to water supply and sanitation at the household level. Credits for small and medium enterprises for small water-supply investments. Credits to upgrade urban services and shared facilities in low-income areas. Household credits Microcredits can be targeted specifically at water and sanitation, or general-purpose microcredits may be used for this purpose. Such use is typically to finance household water and sewerage connections, bathrooms, toilets, pit latrines, rainwater harvesting tanks or water purifiers. The loans are generally with a tenure of less than three years. Microfinance institutions, such as Grameen Bank, the Vietnam Bank for Social Policies, and numerous microfinance institutions in India and Kenya, offer credits to individuals for water and sanitation facilities. Non-government organisations (NGOs) that are not microfinance institutions, such as Dustha Shasthya Kendra (DSK) in Bangladesh or Community Integrated Development Initiatives in Uganda, also provide credits for water supply and sanitation. The potential market size is considered huge in both rural and urban areas and some of these water and sanitation schemes have achieved a significant scale. Nevertheless, compared to the microfinance institution's overall size, they still play a minor role. In 1999, all microfinance institutions in Bangladesh and more recently in Vietnam had reached only about 9 percent and 2.4 percent of rural households respectively. In either country, water and sanitation amounts to less than two percent of the microfinance institution's total portfolio. However, borrowers for water supply and sanitation comprised 30 percent of total borrowers for Grameen Bank and 10 percent of total borrowers from Vietnam Bank for Social Policies. 
For instance, the water and sanitation portfolio of the Indian microfinance institution SEWA Bank comprised 15 percent of all loans provided in the city of Admedabad over a period of five years. Examples WaterCredit The US-based NGO Water.org, through its WaterCredit initiative, had since 2003 supported microfinance institutions and NGOs in India, Bangladesh, Kenya and Uganda in providing microcredit for water supply and sanitation. As of 2011, it had helped its 13 partner organisations to make 51,000 credits. The organisation claimed a 97% repayment rate and stated that 90% of its borrowers were women. WaterCredit did not subsidise interest rates and typically did not make microcredits directly. Instead, it connected microfinance institutions with water and sanitation NGOs to develop water and sanitation microcredits, including through market assessments and capacity-building. Only in exceptional cases did it provide guarantees, standing letters of credit or the initial capital to establish a revolving fund managed by an NGO that was not previously engaged in microcredit. Indonesia Since 2003 Bank Rakyat Indonesia financed water connections with the water utility PDAM through microcredits with support from the USAID Environmental Services Program. According to an impact assessment conducted in 2005, the program helped the utility to increase its customer base by 40% which reduced its costs per cubic meter of water sold by 42% and reduced its non-revenue water from 56.5% in 2002 to 36% percent at the end of 2004. Vietnam In 1999, the World Bank in cooperation with the governments of Australia, Finland and Denmark supported the creation of a Sanitation Revolving Fund with an initial working capital of  million. The project was carried out in the cities of Danang, Haiphong, and Quang Ninh. The aim was to provide small loans () to low-income households for targeted sanitation investments such as septic tanks, urine diverting/composting latrines or sewer connections. Participating households had to join a savings and credit group of 12 to 20 people, who were required to live near each other to ensure community control. The loans had a catalyst effect for household investment. With loans covering approximately two-thirds of investment costs, households had to find complementary sources of finance (from family and friends). In contrast to a centralised, supply-driven approach, where government institutions design a project with little community consultation and no capacity-building for the community, this approach was strictly demand-driven and thus required the Sanitation Revolving Fund to develop awareness-raising campaigns for sanitation. Managed by the microfinance-experienced Women's Union of Vietnam, the Sanitation Revolving Fund gave 200,000 households the opportunity to finance and build sanitation facilities over a period of seven years. With a leverage effect of up to 25 times the amount of public spending on household investment and repayment rates of almost 100 percent, the fund is seen as a best practice example by its financiers. In 2009 it was considered to be scaled up with further support of the World Bank and the Vietnam Bank for Social Policies. Small and medium enterprise loans Small and medium enterprise (SME) loans are used for investments by community groups, for private providers in greenfield contexts, or for rehabilitation measures of water supply and sanitation. 
Supplied by mature microfinance institutions, these loans are seen as suitable for other suppliers in the value chain such as pit latrine emptiers and tanker suppliers. With the right conditions such as a solid policy environment and clear institutional relationships, there is a market potential for small-scale water supply projects. In comparison to retail loans on the household level, the experience with loan products for SME is fairly limited. These loan programs remain mostly at the pilot level. However, the design of some recent projects using microcredits for community-based service providers in some African countries (such as those of the K-Rep Bank in Kenya and Togo) shows a sustainable expansion potential. In the case of Kenya's K-Rep Bank, the Water and Sanitation Program, which facilitated the project, is already exploring a country-wide scaling up. Examples Kenya Kenya has numerous community-managed small-water enterprises. The Water and Sanitation Program (WSP) has launched an initiative to use microcredits to promote these enterprises. As part of this initiative, the commercial microfinance bank K-Rep Bank provided loans to 21 community-managed water projects. The Global Partnership on Output-based Aid (GPOBA) supported the programme by providing partial subsidies. Every project is pre-financed with a credit of up to 80 percent of the project costs (averaging ). After an independent verification process, certifying a successful completion, a part of the loan is refinanced by a 40 percent output-based aid subsidy. The remaining loan repayments have to be generated from water revenues. In addition, technical-assistance grants are provided to assist with the project development. Togo In Togo, CREPA (Centre Regional pour l'Eau Potable et L'Assainissement à Faible Côut) had encouraged the liberalisation of water services in 2001. As a consequence, six domestic microfinance institutions were preparing microcredit scheme for a shallow borehole () or rainwater-harvesting tank (). The loans were originally dedicated to households, which act as a small private provider, selling water in bulk or in buckets. However, the funds were disbursed directly to the private (drilling) companies. In the period from 2001 to 2006, roughly 1,200 water points were built and have been used for small-business activities by the households which participated in that programme. Urban services upgrading This type of credits has not been used widely. See also Water supply Sanitation References External links WaterCredit by Water.org Vietnam Women's Union Three Cities Sanitation Project in Vietnam GTZ, World Bank and IFAD:Pro-Poor Financial Services for Rural Water. Linking the Water Sector to Rural Finance, 2010 Microfinance Water supply Sanitation
Microcredit for water supply and sanitation
[ "Chemistry", "Engineering", "Environmental_science" ]
1,849
[ "Hydrology", "Water supply", "Environmental engineering" ]
25,260,082
https://en.wikipedia.org/wiki/Depolarizing%20prepulse
A depolarizing prepulse (DPP) is an electrical stimulus that causes the potential difference measured across a neuronal membrane to become more positive or less negative, and precedes another electrical stimulus. DPPs may be of either the voltage or current stimulus variety and have been used to inhibit neural activity, selectively excite neurons, and increase the pain threshold associated with electrocutaneous stimulation. Biophysical mechanisms Hodgkin–Huxley model Typical action potentials are initiated by voltage-gated sodium channels. As the transmembrane voltage increases, the probability that a given voltage-gated sodium channel is open increases, thus enabling an influx of Na+ ions. Once the sodium inflow becomes greater than the potassium outflow, a positive feedback loop of sodium entry is closed and thus an action potential is fired. In the early 1950s Drs. Hodgkin and Huxley performed experiments on the squid giant axon, and in the process developed a model (the Hodgkin–Huxley model) for sodium channel conductance. It was found that the conductance may be expressed as gNa = gNa,max · m³ · h, where gNa,max is the maximum sodium conductance, m is the activation gate, and h is the inactivation gate (both gates are shown in the adjacent image). The values of m and h vary between 0 and 1, depending upon the transmembrane potential. As the transmembrane potential rises, the value of m increases, thus increasing the probability that the activation gate will be open. And as the transmembrane potential drops, the value of h increases, along with the probability that the inactivation gate will be open. The rate of change for an h gate is much slower than that of an m gate; therefore, if one precedes a sub-threshold voltage stimulation with a hyperpolarizing prepulse, the value of h may be temporarily increased, enabling the neuron to fire an action potential. Conversely, if one precedes a supra-threshold voltage stimulation with a depolarizing prepulse, the value of h may be temporarily reduced, enabling the inhibition of the neuron. An illustration of how the transmembrane voltage response to a supra-threshold stimulus may differ, based upon the presence of a depolarizing prepulse, may be observed in the adjacent image. The Hodgkin–Huxley model is slightly inaccurate as it glosses over some dependencies; for example, the inactivation gate should not be able to close unless the activation gate is open, and the inactivation gate, once closed, is located inside the cell membrane where it cannot be directly affected by the transmembrane potential. However, this model is useful for gaining a high-level understanding of hyperpolarizing and depolarizing prepulses. In general, depolarizing a neuron makes it more likely that the neuron will fire. Voltage-gated sodium channel Since the Hodgkin–Huxley model was first proposed in the 1950s, much has been learned concerning the structure and functionality of voltage-gated sodium channels. Although the exact three-dimensional structure of the sodium channel remains unknown, its composition and the functionality of individual components have been determined. Voltage-gated sodium channels are large, multimeric complexes, composed of a single α subunit and one or more β subunits, an illustration of which may be observed in the adjacent image. The α subunit folds into four homologous domains, each of which contains six α-helical transmembrane segments. The S4 segments of each domain serve as voltage sensors for activation.
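The gating picture sketched above can be made concrete numerically. The following Python sketch is only an illustration, not code from the cited work: it uses the textbook Hodgkin–Huxley rate constants for the squid axon (in the −65 mV resting-potential convention), treats the fast m gate as instantaneous, and integrates the slow h gate during a voltage prepulse to show how a depolarizing prepulse lowers h and therefore the sodium conductance gNa = gNa,max·m³·h available to a following stimulus.

import numpy as np

# Textbook Hodgkin-Huxley rate constants (V in mV, squid-axon fits, -65 mV rest convention).
def m_inf(V):
    a = 0.1 * (V + 40.0) / (1.0 - np.exp(-(V + 40.0) / 10.0))
    b = 4.0 * np.exp(-(V + 65.0) / 18.0)
    return a / (a + b)

def h_gate_rates(V):
    a = 0.07 * np.exp(-(V + 65.0) / 20.0)
    b = 1.0 / (1.0 + np.exp(-(V + 35.0) / 10.0))
    return a, b

def h_inf(V):
    a, b = h_gate_rates(V)
    return a / (a + b)

def h_after_prepulse(V_hold, V_pre, t_pre_ms, dt=0.01):
    """Integrate dh/dt = (h_inf - h) / tau_h during a voltage prepulse lasting t_pre_ms."""
    h = h_inf(V_hold)                        # steady state at the holding potential
    a, b = h_gate_rates(V_pre)
    h_ss, tau = a / (a + b), 1.0 / (a + b)
    for _ in range(int(t_pre_ms / dt)):
        h += dt * (h_ss - h) / tau
    return h

g_na_max, V_rest, V_stim = 120.0, -65.0, -20.0     # mS/cm^2 and mV (illustrative values)
for label, h in [("no prepulse", h_inf(V_rest)),
                 ("10 ms depolarizing prepulse to -45 mV", h_after_prepulse(V_rest, -45.0, 10.0))]:
    g_na = g_na_max * m_inf(V_stim) ** 3 * h       # gNa = gNa,max * m^3 * h at the stimulus step
    print(f"{label}: h = {h:.2f}, gNa at stimulus ≈ {g_na:.1f} mS/cm^2")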
Each S4 segment consists of a repeating structure of one positively charged residue and two hydrophobic residues, and these combine to form a helical arrangement. When the channel is depolarized, these S4 segments undergo a conformational change that widens the helical arrangement and opens the sodium-channel pore. Within milliseconds after the pore's opening, the intracellular loop that connects domains III and IV binds to the channel's intracellular pore, inactivating the channel. Thus, when a depolarizing prepulse is provided before a stimulus, there is a greater probability that the inactivating domains of the sodium channels have bound to their respective pores, reducing the stimulus-induced sodium influx and the influence of the stimulus. Depolarizing prepulse properties DPP duration The relationship between the DPP duration and neuronal recruitment is as follows. If the duration of the DPP is relatively short, i.e. much less than 100 μs, then the threshold of excitation for the surrounding nerves will be decreased rather than increased, possibly resulting from the depolarization of the S4 segments and the little time available for inactivation. For long-duration DPPs, the III and IV domains of the sodium channels (discussed above) are given more time to bind with their respective channel pores, and thus the threshold current is observed to increase with increasing DPP duration. DPP amplitude As the DPP amplitude is increased from zero to near threshold, the resulting increase in threshold current grows as well. This is because the higher amplitude activates more sodium channels, thus allowing more channels to become inactivated by their III and IV domains. DPP inter-phase delay An increase in the delay between the DPP and the stimulus provides more time during which the sodium channel S4 segments may close and the III and IV domains may detach themselves from their respective pores. Thus, an increase in the DPP inter-phase delay will reduce the effective increase in threshold current induced by the DPP. Depolarizing prepulse applications Elevating pain thresholds One immediate application for depolarizing prepulses, explored by Drs. Poletto and Van Doren, is to elevate the pain thresholds associated with electrocutaneous stimulation. Electrocutaneous stimulation possesses a great deal of potential as a mechanism for the conveyance of additional sensory information. Hence, this method of stimulation may be directly applied to fields such as virtual reality, sensory substitution, and sensory augmentation. However, many of these applications require the use of small electrode arrays, stimulation through which is often painful, thus limiting the usefulness of this technology. The experimental setup, constructed by Drs.
Poletto and Van Doren, was as follows: four human subjects took part, each of whom had demonstrated the ability to provide reliable pain judgments in previous studies; the subject's left middle finger rested on 1 mm diameter polished stainless steel disk electrodes; a single stimulus consisted of a burst of three identical prepulse and stim-pulse pairs, presented at the beginning, middle, and end of a 1-second interval; the prepulse and stim-pulse widths were matched at a duration of 10 milliseconds so that the thresholds would be the same for both; varying prepulse amplitudes of 0%, 79%, 63%, 50%, 40%, and 32% were used so as to study their influence over the pain experienced; and the experiments were conducted in such a way that the stimulus without a prepulse was painful about half of the time, which was achieved by stepping the stim-pulse amplitude up and down for the next trial based upon whether it was reported as painful. Their results demonstrated that a prepulse before a stimulus pulse effectively reduces the probability that pain will be experienced due to electrocutaneous stimulation. Surprisingly enough, a prepulse of 32% of the amplitude of the stimulus pulse was able to nearly halve the probability of experiencing pain. Therefore, in environments in which the pain threshold is difficult to discern, it may be sufficient to deliver a relatively low-amplitude prepulse before the stimulus to achieve the desired effects. Nerve fiber recruitment order In addition to inhibiting neural excitability, it has been observed that preceding an electrical stimulus with a depolarizing prepulse allows one to invert the current-distance relationship controlling nerve fiber recruitment, where the current-distance relationship describes how the threshold current for nerve fiber excitation is proportional to the square of the distance between the nerve fiber and the electrode. Therefore, if the region of influence for the depolarizing prepulse is less than that for the stimulus, the nerve fibers closer to the electrode will experience a greater increase in their threshold current for excitation. Thus, provided such a stimulus, the nerve fibers closest to the electrode may be inhibited, while those further away may be excited. A simulation of this stimulation, constructed by Drs. Warren Grill and J. Thomas Mortimer, may be observed in the adjacent image. Building upon this, a stimulus with two depolarizing prepulses, each of an amplitude slightly below the threshold current (at the time of delivery), should increase the radii of influence for nearby nerve fiber inactivation and distant nerve fiber excitation. Typically, nerve fibers of a larger diameter may be activated by single-pulse stimuli of a lower intensity, and thus may be recruited more readily. However, DPPs have demonstrated the additional capability to invert this recruitment order. As electrical stimuli have a greater effect on nerve fibers of a larger diameter, DPPs will in turn cause a larger degree of sodium conductance inactivation within such nerve fibers, and thus nerve fibers of a smaller diameter will have a lower threshold current. See also Prepulse inhibition References External links Hodgkin-Huxley Model Ball-and-Chain Model Neurology Neuroscience Electrophysiology
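The current-distance inversion described above can be illustrated with a toy calculation. In the Python sketch below, the quadratic threshold law (threshold current proportional to the square of the distance) comes from the text, but the form of the prepulse-induced threshold elevation (an added A/d² term, i.e. the prepulse also acts most strongly on nearby fibers) and all numerical values are invented for illustration; this is not the published simulation of Grill and Mortimer.

# Toy illustration of the current-distance inversion described above.
def threshold_mA(d_mm, prepulse, k=0.5, A=3.0):
    base = k * d_mm ** 2                             # I_th ∝ d²: distant fibers need more current
    elevation = A / d_mm ** 2 if prepulse else 0.0   # invented prepulse term, strongest for nearby fibers
    return base + elevation

distances = (0.5, 1.0, 2.0)                          # fiber-to-electrode distances in mm
for prepulse, stim in ((False, 1.0), (True, 3.0)):
    recruited = [d for d in distances if stim >= threshold_mA(d, prepulse)]
    print(f"prepulse={prepulse}, stimulus={stim} mA -> fibers recruited at {recruited} mm")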
Depolarizing prepulse
[ "Biology" ]
1,970
[ "Neuroscience" ]
6,321,088
https://en.wikipedia.org/wiki/Radio%20frequency%20power%20transmission
Radio frequency power transmission is the transmission of the output power of a transmitter to an antenna. When the antenna is not situated close to the transmitter, special transmission lines are required. The most common type of transmission line for this purpose is large-diameter coaxial cable. At high-power transmitters, cage lines are used. Cage lines are a kind of overhead line similar in construction to coaxial cables. The interior conductor is held by insulators mounted on a circular device in the middle. On the circular device, there are wires for the other pole of the line. Cage lines are used at high-power transmitters in Europe, like longwave transmitter Topolna, longwave-transmitter Solec Kujawski and some other high-power transmitters for long-, medium- and shortwave. For UHF and VHF, Goubau lines are sometimes used. They consist of an insulated single wire mounted on insulators. On a Goubau line, the wave travels as longitudinal currents surrounded by transverse EM fields. For microwaves, waveguides are used. References External links Cage lines of Solec Kujawski transmitter Cage lines of longwave transmitter Topolna (second image) (third image) Cables Power cables Radio technology
Radio frequency power transmission
[ "Technology", "Engineering" ]
254
[ "Information and communications technology", "Telecommunications engineering", "Radio technology" ]
6,321,288
https://en.wikipedia.org/wiki/Signal%20peptide%20peptidase
In molecular biology, the Signal Peptide Peptidase (SPP) is a type of protein that specifically cleaves parts of other proteins. It is an intramembrane aspartyl protease with the conserved active site motifs 'YD' and 'GxGD' in adjacent transmembrane domains (TMDs). Its sequence is highly conserved across different vertebrate species. SPP cleaves remnant signal peptides left behind in the membrane by the action of signal peptidase, and it also plays key roles in immune surveillance and the maturation of certain viral proteins. Biological function Physiologically, SPP processes signal peptides of classical MHC class I preproteins. A nine amino acid-long cleavage fragment is then presented on HLA-E receptors and modulates the activity of natural killer cells. SPP also plays a pathophysiological role; it cleaves the structural nucleocapsid protein (also known as core protein) of the Hepatitis C virus and thus influences the viral reproduction rate. In mice, a nonamer peptide originating from the SPP protein serves as the minor histocompatibility antigen HM13, which plays a role in transplant rejection. The homologous proteases SPPL2A and SPPL2B promote the intramembrane cleavage of TNFα in activated dendritic cells and might play an immunomodulatory role. For SPPL2c and SPPL3 no substrates are known. SPPs do not require cofactors, as demonstrated by expression in bacteria and purification of a proteolytically active form. The C-terminal region defines the functional domain, which is in itself sufficient for proteolytic activity. Type IV leader peptidase Another family of signal aspartic endopeptidases was found in bacteria. Bacteria produce a number of protein precursors that undergo post-translational methylation and proteolysis prior to secretion as active proteins. Type IV prepilin leader peptidases are enzymes that mediate this type of post-translational modification. Type IV pilin is a protein found on the surface of Pseudomonas aeruginosa, Neisseria gonorrhoeae and other Gram-negative pathogens. Pilin subunits attach the infecting organism to the surface of host epithelial cells. They are synthesised as prepilin subunits, which differ from mature pilin by virtue of containing a 6-8 residue leader peptide consisting of charged amino acids. Mature type IV pilins also contain a methylated N-terminal phenylalanine residue. The bifunctional enzyme prepilin peptidase (PilD) from Pseudomonas aeruginosa is a key determinant in both type-IV pilus biogenesis and extracellular protein secretion, in its roles as a leader peptidase and methyl transferase (MTase). It is responsible for endopeptidic cleavage of the unique leader peptides that characterise type-IV pilin precursors, as well as proteins with homologous leader sequences that are essential components of the general secretion pathway found in a variety of Gram-negative pathogens. Following removal of the leader peptides, the same enzyme is responsible for the second posttranslational modification that characterises the type-IV pilins and their homologues, namely N-methylation of the newly exposed N-terminal amino acid residue. See also Leader peptidase A Presenilin References Further reading External links The MEROPS online database for peptidases and their inhibitors: SPP: A22.003, SPPL2a: A22.007, SPPL2b: A22.004, SPPL2c: A22.006, SPPL3: A22.005 - Calculated spatial position of type 1 signal peptidase in membrane Integral membrane proteins Protein targeting Hydrolases Proteases EC 3.4.23
Signal peptide peptidase
[ "Biology" ]
827
[ "Protein targeting", "Cellular processes" ]
6,325,086
https://en.wikipedia.org/wiki/STR%20analysis
Short tandem repeat (STR) analysis is a common molecular biology method used to compare allele repeats at specific loci in DNA between two or more samples. A short tandem repeat is a microsatellite with repeat units that are 2 to 7 base pairs in length, with the number of repeats varying among individuals, making STRs effective for human identification purposes. This method differs from restriction fragment length polymorphism analysis (RFLP) since STR analysis does not cut the DNA with restriction enzymes. Instead, polymerase chain reaction (PCR) is employed to discover the lengths of the short tandem repeats based on the length of the PCR product. Forensic uses STR analysis is a tool in forensic analysis that evaluates specific STR regions found on nuclear DNA. The variable (polymorphic) nature of the STR regions that are analyzed for forensic testing intensifies the discrimination between one DNA profile and another. Scientific tools such as FBI approved STRmix incorporate this research technique. Forensic science takes advantage of the population's variability in STR lengths, enabling scientists to distinguish one DNA sample from another. The system of DNA profiling used today is based on PCR and uses simple sequences or short tandem repeats (STR). This method uses highly polymorphic regions that have short repeated sequences of DNA (the most common is 4 bases repeated, but there are other lengths in use, including 3 and 5 bases). Because unrelated people almost certainly have different numbers of repeat units, STRs can be used to discriminate between unrelated individuals. These STR loci (locations on a chromosome) are targeted with sequence-specific primers and amplified using PCR. The DNA fragments that result are then separated and detected using electrophoresis. There are two common methods of separation and detection, capillary electrophoresis (CE) and gel electrophoresis. Each STR is polymorphic, but the number of alleles is very small. Typically each STR allele will be shared by around 5 - 20% of individuals. The power of STR analysis comes from looking at multiple STR loci simultaneously. The pattern of alleles can identify an individual quite accurately. Thus STR analysis provides an excellent identification tool. The more STR regions that are tested in an individual the more discriminating the test becomes. However, given 10 STR loci, it can result in a genotyping error margin of 30%, or nearly one third (1/3) of the time. Even when using 15 identifier microsatellite STR loci, they are not informative markers for inference of ancestry, a much larger set of genetic markers is needed to detect fine-scale population structure. A study claimed 30 DIP-STRs were found to be suitable for prenatal paternity testing and roughly outlining biogeographic ancestry in forensics, but more markers and multiplex panels need to be developed to promote use of this original approach. When comparing SNP and STR analysis, the use of high-quality SNPs has proven to be better for delineating population structure, as well as genetic relationships at the individual and population level. Using the best 15 SNPs (30 alleles) was similar to the best 4 STR loci (83 alleles), and increasing the STR made no difference, but increasing to 100 SNPs substantially increased assignment giving the highest result. Researchers found that some of the STR loci out-performed the SNP loci on a single locus basis, but combinations of SNPs outperformed the STRs based upon total number of alleles. 
The SNPs from a larger panel gave significantly more accurate individual genetic self-assignment compared to any combination of the STR loci. From country to country, different STR-based DNA-profiling systems are in use. In North America, systems that amplify the CODIS 20 core loci are almost universal, whereas in the United Kingdom the 17-locus DNA-17 system (which is compatible with The National DNA Database) is in use. Whichever system is used, many of the STR regions used are the same. These DNA-profiling systems are based on multiplex reactions, whereby many STR regions are tested at the same time. The true power of STR analysis is in its statistical power of discrimination. Because the 20 loci that are currently used for discrimination in CODIS are independently assorted (having a certain number of repeats at one locus does not change the likelihood of having any number of repeats at any other locus), the product rule for probabilities can be applied. This means that, if someone has the DNA type of ABC, where the three loci are independent, the probability of having that DNA type is the probability of having type A times the probability of having type B times the probability of having type C. This has resulted in the ability to generate match probabilities of 1 in a quintillion (1×10¹⁸) or more. However, DNA database searches have shown false DNA profile matches to be much more frequent than expected. Moreover, since there are about 12 million monozygotic twins on Earth, the theoretical probability is not accurate. In practice, the risk of a match due to contamination is much greater than the risk of matching a distant relative; contamination can come, for example, from nearby objects or from left-over cells transferred from a prior test. The risk is greater for matching the most common person in the samples: everything collected from, or in contact with, a victim is a major source of contamination for any other samples brought into a lab. For that reason, multiple control samples, prepared during the same period as the actual test samples, are typically tested in order to ensure that they stayed clean. Unexpected matches (or variations) in several control samples indicate a high probability of contamination for the actual test samples. In a relationship test, the full DNA profiles should differ (except for twins) to prove that a person was not matched as being related to their own DNA in another sample. In biomedical research, STR profiles are used to authenticate cell lines. Self-generated STR profiles can be compared with databases such as CLASTR (https://www.cellosaurus.org/cellosaurus-str-search/) or STRBase (https://strbase.nist.gov/). In addition, self-generated primary murine cell lines cultured before the first passaging can be matched with later passages, thus ensuring the identity of the cell line. See also STR multiplex system Snpstr Y-STR List of Y-STR markers List of X-STR markers Earth Human STR Allele Frequencies Database References Biochemistry detection methods DNA profiling techniques Genomics Forensic genetics
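The product rule described above translates directly into a one-line calculation. In the Python sketch below, the three locus names are real CODIS loci, but the genotype frequencies attached to them are invented for illustration; real casework uses published allele-frequency databases and statistical corrections not shown here.

from math import prod

# Invented per-locus genotype frequencies (the fraction of the population sharing the
# observed genotype at each locus); these are not real population statistics.
profile = {
    "D8S1179": 0.060,
    "D21S11":  0.045,
    "FGA":     0.030,
}

match_probability = prod(profile.values())     # product rule over independent loci
print(f"random match probability ≈ 1 in {1 / match_probability:,.0f}")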
STR analysis
[ "Chemistry", "Biology" ]
1,402
[ "Biochemistry methods", "Genetics techniques", "DNA profiling techniques", "Chemical tests", "Biochemistry detection methods" ]
6,325,527
https://en.wikipedia.org/wiki/Fruitless%20%28gene%29
The fruitless gene (fru) is a Drosophila melanogaster gene that encodes several variants of a putative transcription factor protein. Normal fruitless function is required for proper development of several anatomical structures necessary for courtship, including motor neurons which innervate muscles needed for fly sexual behaviors. The gene does not have an obvious mammalian homolog, but appears to function in sex determination in species as distant as the mosquito Anopheles gambiae. fruitless serves as an example of how a gene or a group of genes may regulate the development and/or function of neurons involved in innate behavior. Research on fruitless has received attention in the popular press, since it provokes discussion on genetics of human sexual orientation, and behaviors such as gender-specific aggression. Function Male flies with mutations in the fruitless gene display altered sexual behavior. Fruitfly courtship, which involves a complex male-initiated ritual, may be disrupted in many ways by mutated fru alleles; fru is necessary for every step in the ritual. Some alleles prevent courting entirely, while others disrupt individual components. Notably, some loss-of-function alleles change or remove sexual preference. Although many genes are known to be involved in male courtship behavior, the fruitless gene has been considered noteworthy because it exhibits sex-specific alternative splicing. When females produce the male-spliced gene product, they behave as males. Males that do not produce the male-specific product do not court females and are infertile. In the brain, a subset (ca. 2,000) of neurons express fruitless and fruitless expression is sufficient to instruct sexually dimorphic connectivity. fruitless has at least four promoters, each encoding proteins containing both a BTB (Broad complex/tramtrack/bric-a-brac) domain and a zinc finger motif. Alternative splicing occurs at both the 5' and 3' ends, and there are several variants (other than the male- and female-specific splicing patterns). The fruitless gene locus also controls the expression of hundreds of other genes, any subset of which may actually regulate behavior. Name Early work refers to the gene as fruity, an apparent pun on both the common name of D. melanogaster, the fruit fly, as well as a slang word for homosexual. As social attitudes towards homosexuality changed, fruity came to be regarded as offensive, or at best, not politically correct. Thus, the gene was re-dubbed fruitless, alluding to the lack of offspring produced by flies with the mutation. However, despite the original name and a continuing history of misleading inferences by the popular media, fruitless mutants primarily show defects in male-female courtship, though certain mutants cause male-male or female-female courtship. References External links Entrez Gene summary for fruitless Article on fruitless in The Interactive Fly fruitless Molecular neuroscience Mating Sexual orientation and science Mutated genes Behavioural genetics
Fruitless (gene)
[ "Chemistry", "Biology" ]
609
[ "Behavior", "Molecular neuroscience", "Molecular biology", "Ethology", "Mating" ]
6,326,413
https://en.wikipedia.org/wiki/Hoist%20%28device%29
A hoist is a device used for lifting or lowering a load by means of a drum or lift-wheel around which rope or chain wraps. It may be manually operated, electrically or pneumatically driven and may use chain, fiber or wire rope as its lifting medium. The most familiar form is an elevator, the car of which is raised and lowered by a hoist mechanism. Most hoists couple to their loads using a lifting hook. Today, there are a few governing bodies for the North American overhead hoist industry, which include the Hoist Manufacturers Institute (HMI), ASME, and the Occupational Safety and Health Administration. HMI is a product council of the Material Handling Industry of America consisting of hoist manufacturers promoting safe use of their products. Types The word “hoist” is used to describe many different types of equipment that lift and lower loads. For example, many people use “hoist” to describe an elevator. The information contained here pertains specifically to overhead, construction and mine hoists. Overhead hoist Overhead hoists are defined in the American Society of Mechanical Engineers (ASME) B30 standards as a machinery unit that is used for lifting or lowering a freely suspended (unguided) load. These units are typically used in an industrial setting and may be part of an overhead crane. A specific overhead hoist configuration is usually defined by the lifting medium, operation and suspension. The lifting medium is the type of component used to transmit and cause the vertical motion and includes wire rope, chain or synthetic strap, or rope. The operation defines the type of power used to operate the hoisting motion and includes manual power, electric power, hydraulic power or air power. The suspension defines the type of mounting method used to suspend the hoist and includes hook, clevis, lug, trolley, deck, base, wall or ceiling. The most commonly used overhead hoist is electrically powered with wire rope or chain as the lifting medium. Both wire rope and chain hoists have been in common use since the 1800s; however, mass production of electric hoists did not start until the early 1900s, first in Germany. A hoist can be a serial production unit or a custom unit. Serial production hoists are typically more cost-effective and designed for a ten-year life in a light to heavy hoist duty service classification. Custom hoists are typically more expensive and are designed for a heavy to severe hoist duty service classification. Serial production hoists were once regarded as being designed for light to moderate hoist duty service classifications, but since the 1960s this has changed. Over the years the custom hoist market has decreased in size with the advent of the more durable serial production hoists. A machine shop or fabricating shop will typically use a serial production hoist, while a steel mill or NASA may typically use a custom hoist to meet durability and performance requirements. Overhead hoists require proper installation, operation, inspection, and maintenance. When selecting an overhead hoist, operators must consider the average operating time per day, load spectrum, starts per hour, operating period and equipment life. These parameters determine the Hoist Duty Service Classification, which helps hoist installers and users better understand the hoist's useful life and duty service application. 
The American Society of Mechanical Engineers also publishes a number of standards related to overhead hoists, including the “ASME B30.16 Standard for Overhead Hoists (Underhung)", which provides additional guidance for the proper design, installation, operation and maintenance of hoists. Construction hoist Also known as a Man-Lift, Buckhoist, temporary elevator, builder hoist, passenger hoist or construction elevator, a construction hoist is commonly used on large scale construction projects, such as high-rise buildings or major hospitals. There are many other uses for the construction elevator. Many other industries use the buckhoist for full-time operations, the purpose being to carry personnel, materials, and equipment quickly between the ground and higher floors, or between floors in the middle of a structure. There are three types: utility (to move material), personnel (to move personnel), and dual-rated, which can do both. The construction hoist is made up of either one or two cars (cages) which travel vertically along stacked mast tower sections. The mast sections are attached to the structure or building at regular intervals for added stability. For precisely controlled travel along the mast sections, modern construction hoists use a motorized rack-and-pinion system that climbs the mast sections at various speeds. While hoists have been predominantly produced in Europe and the United States, China is emerging as a manufacturer of hoists to be used in Asia. In the United States and abroad, general contractors and various other industrial markets rent or lease hoists for specific projects. Rental or leasing companies provide erection, dismantling, and repair services for their hoists to provide general contractors with turnkey services. Also, the rental and leasing companies can provide parts and service for the elevators that are under contract. Mine hoist A mining hoist (also known simply as a hoist or winder) is used in underground mining to raise and lower conveyances within the mine shaft. It is similar to an elevator, used for raising humans, equipment, and assorted loads. Human, animal and water power were used to power the mine hoists documented in Agricola's De Re Metallica, published in 1556. Stationary steam engines were commonly used to power mine hoists through the 19th century and into the 20th, as at the Quincy Mine, where a 4-cylinder cross-compound Corliss engine was used. Modern hoists are powered using electric motors, historically with direct current drives utilizing solid-state converters (thyristors); however, modern large hoists use alternating current drives that are variable-frequency controlled. There are three principal types of hoists used in mining applications: drum hoists, friction (or Koepe) hoists and Blair multi-rope hoists. In the broadest sense, a hoist is any device used to lift heavy materials. Chain hoist Differential pulley Gallery See also Overhead crane Hoist controller Hoist (mining) Hydraulic jigger Hydraulic hooklift hoist Rigging Winch Windlass Derrick References External links OSHA Regs for overhead cranes Actuators Lifting equipment
Hoist (device)
[ "Physics", "Technology" ]
1,297
[ "Physical systems", "Machines", "Lifting equipment" ]
6,327,272
https://en.wikipedia.org/wiki/Biometal%20%28biology%29
Biometals (also called biocompatible metals, bioactive metals, metallic biomaterials) are metals normally present, in small but important and measurable amounts, in biology, biochemistry, and medicine. The metals copper, zinc, iron, and manganese are examples of metals that are essential for the normal functioning of most plants and the bodies of most animals, such as the human body. A few (calcium, potassium, sodium) are present in relatively larger amounts, whereas most others are trace metals, present in smaller but important amounts (the image shows the percentages for humans). Approximately 2/3 of the existing periodic table is composed of metals with varying properties, accounting for the diverse ways in which metals (usually in ionic form) have been utilized in nature and medicine. History At first, the study of biometals was referred to as bioinorganic chemistry. Each branch of bioinorganic chemistry studied separate, particular sub-fields of the subject. However, this led to an isolated view of each particular aspect in a biological system. This view was revised into a holistic approach of biometals in metallomics. Metal ions in biology were studied in various specializations. In nutrition, it was to define the essentials for life; in toxicology, to define how the adverse effects of certain metal ions in biological systems and in pharmacology for their therapeutic effects. In each field, at first, they were studied and separated on a basis of concentration. In low amounts, metal ions in a biological system could perform at their optimal functionality whereas in higher concentrations, metal ions can prove fatal to biological systems. However, the concentration gradients were proved to be arbitrary as low concentrations of non-essential metals (like lithium or helium) in essential metals (like sodium or potassium) can cause an adverse effect in biological systems and vice versa. Investigations into biometals and their effects date back to the 19th century and even further back to the 18th century with the identification of iron in blood. Zinc was identified to be essential in fungal growth of yeast as shown by Jules Raulin in 1869 yet no proof for the need of zinc in human cells was shown until the late 1930s where its presence was demonstrated in carbonic anhydrase and the 1960s where it was identified as a necessary element for humans. Since then, understanding of zinc in human biology has advanced to the point that it is considered as important as iron. Modern advancements in analytical technology have made it clear the importance of biometals in signalling pathways and the initial thoughts on the chemical basis of life. Naturally occurring biometals Metal ions are essential to the function of many proteins present in living organisms, such as metalloproteins and enzymes that require metal ions as cofactors. Processes including oxygen transport and DNA replication are carried out using enzymes such as DNA polymerase, which in humans requires magnesium and zinc to function properly. Other biomolecules also contain metal ions in their structure, such as iodine in human thyroid hormones. The uses of some of them are listed below. The list is not exhaustive, because it covers only the principal class members; others that are trace metals of especially low bioconcentration are not explored herein. Some elements that are nonmetals or metalloids (such as selenium) are beyond the scope of this article. Calcium Calcium is the most abundant metal in the eukaryotes and by extension humans. 
The body is made up of approximately 1.5% calcium, and this abundance is reflected in its lack of redox toxicity and its participation in the structural stability of membranes and other biomolecules. Calcium plays a part in fertilization of an egg, controls several developmental processes and may regulate cellular processes like metabolism or learning. Calcium also plays a part in bone structure, as the rigidity of vertebrate bone matrices derives from calcium hydroxyapatite. Calcium usually binds with other proteins and molecules in order to perform other functions in the body. Calcium-bound proteins usually play an important role in cell-cell adhesion, hydrolytic processes (such as hydrolytic enzymes like glycosidases and sulfatases) and protein folding and sorting. These processes play into the larger part of cell structure and metabolism. Magnesium Magnesium is the most abundant free cation in plant cytosol, is the central atom in chlorophyll and offers itself as a bridging ion for the aggregation of ribosomes in plants. Even small changes in the concentration of magnesium in plant cytosol or chloroplasts can drastically affect the key enzymes present in the chloroplasts. It is most commonly used as a co-factor in eukaryotes and functions as an important functional key in enzymes like RNA polymerase and ATPase. In phosphorylating enzymes like ATPase or kinases and phosphatases, magnesium acts as a stabilizing ion in polyphosphate compounds due to its Lewis acidity. Magnesium has also been noted as a possible secondary messenger for neural transmissions. Magnesium acts as an allosteric inhibitor for the enzyme vacuolar pyrophosphatase (V-PPiase). In vitro, the concentration of free magnesium acts as a strict regulator and stabilizer for the enzyme activity of V-PPiase. Manganese Manganese, like magnesium, plays a crucial role as a co-factor in various enzymes, though its concentration is noticeably lower. Enzymes that use manganese as a co-factor are known as "manganoproteins." These proteins include enzymes, like oxidoreductases, transferases and hydrolases, which are necessary for metabolic functions and antioxidant responses. Manganese plays a significant role in host defense, blood clotting, reproduction, digestion and various other functions in the body. In particular, when concerning host defense, manganese acts as a preventative measure for oxidative stress by destroying free radicals, which are species that have an unpaired electron in their outer shells. Zinc Zinc is the second most abundant transition metal present in living organisms, second only to iron. It is critical for the growth and survival of cells. In humans, zinc is primarily found in various organs and tissues such as the brain, intestines, pancreas and mammary glands. In prokaryotes, zinc can function as an antimicrobial; zinc oxide nano-particles can function as an antibacterial or antibiotic. Zinc homeostasis is highly controlled to allow for its benefits without risk of death via its high toxicity. Because of zinc's antibiotic nature, it is often used in many drugs against bacterial infections in humans. Inversely, due to the bacterial nature of mitochondria, zinc antibiotics are also lethal to mitochondria and result in cell death at high concentrations. Zinc is also used in a number of transcription factors, proteins and enzymes. 
Sodium Sodium is one of only two alkali metals that play a major role in bodily functions, and many of its roles in the body are well characterized. It plays an important role in maintenance of the cell membrane potential and the electrochemical gradient in the body via the sodium-potassium pump and sodium-glucose transport proteins. Sodium also serves a purpose in the nervous system and cell communication, as sodium ions flood into axons during an action potential to preserve the strength of the signal. It has also been shown that sodium affects immune response both in efficiency and speed. Macrophages have increased proliferation rates at high-salt concentrations, and the body uses high-sodium concentrations in isolated regions to generate a heightened immune response which fades after the infection has been dealt with. Potassium In plants, potassium plays a key role in maintaining plant health. High concentrations of potassium in plants play a key role in synthesis of essential proteins in plants as well as development of plant structures like cell walls to prevent damage from viruses and insects. It also lowers the concentration of low molecular weight molecules like sugars and amino acids and increases the concentration of high molecular weight molecules like proteins, which also prevents the development and propagation of viruses. Potassium absorption has a positive correlation with aquaporins and the uptake of water in plant cells via cell membrane proteins. Because of this correlation, it has been noted that potassium also plays a key part in stomatal movement and regulation, as high concentrations of potassium are moved into the plant stomata to keep them open and promote photosynthesis. In animals, potassium also plays a key part along with sodium in maintaining resting cell membrane potential and in cell-cell communication via repolarization of axon pathways after an action potential between neurons. Potassium may also play a key part in maintaining blood pressure in animals, as shown in a study where increased severity of periodontal disease and hypertension were inversely correlated with urinary potassium excretion (low excretion being a telltale sign of low potassium intake). Iron Iron is also the most abundant transition metal in the human body and it is used in various processes like oxygen transport and ATP production. It plays a key role in the function of enzymes like cytochromes a, b and c as well as iron-sulfur complexes which play an important role in ATP production. It is present in every type of cell in the brain as the brain itself has a very high energy requirement and by extension a very high iron requirement. In animals, iron plays a very important role in transporting oxygen from the lungs to tissues and CO2 from tissues to the lungs. It does this via two important transport proteins called hemoglobin and myoglobin. Hemoglobin in the blood transports oxygen from the lungs to myoglobin in tissues. Both proteins contain iron-protein complexes called hemes; in hemoglobin, a heme is built into each of the four subunits of the tetramer. Oxygen binds to the iron in the heme as a ligand and dissociates from the protein once it has reached its destination. Iron can also be a potential carcinogen in three ways: the first is the production of hydroxyl radicals. Ferric ions can be reduced by superoxide, and the product can be reoxidized by peroxide to form hydroxyl radicals. 
Hydroxyl radicals and other reactive oxygen species, when generated near DNA, can cause point mutations, cross-linkage and breaks. The second is the bolstering of the growth of neoplastic cells by suppressing host defenses. Excessive iron inhibits the activity of CD4 lymphocytes and suppresses the tumoricidal activity of macrophages. The third way it can act as a carcinogen is by functioning as an essential nutrient for unrestricted proliferation of tumor cells. Lithium Lithium is present in biological systems in trace amounts; its functions are uncertain. Lithium salts have proven to be useful as a mood stabilizer and antidepressant in the treatment of mental illness such as bipolar disorder. Non-natural biometal complexes The term biometal can also be used for any metallic element that is involved in the function of a biomolecule, so artificial systems can be considered as well when talking about biometals. Systems such as metalloproteins, metallopeptides and artificial metalloenzymes are examples of biomolecules containing metallic elements. The de novo design of structures that involve metals in the function of the biomolecule itself is carried out in a biomimetic fashion, but also to enable non-natural activity in biomolecules. Biometals in medicine Metal ions and metallic compounds are often used in medical treatments and diagnoses. Compounds containing metal ions can be used as medicine, such as lithium compounds and auranofin. Metal compounds and ions can also produce harmful effects on the body due to the toxicity of several types of metals. For example, arsenic works as a potent poison due to its effects as an enzyme inhibitor, disrupting ATP production. On the other hand, Ni–Ti–Cu wires are used for artificial heart muscles, and iron and gold particles can guide magnetic drug delivery or destroy tumor cells. Bigger biometal structures (relying on metallic elements and alloys) in medicine can be classified into three types: fibre, bulk scaffolds, and nanotubes. In some cases the term biometal is also used to refer to metal systems with applications in biomedicine that relate not to the biochemical function of biomolecules but to the biocompatibility of these metal systems. Examples are scaffolds of stainless steel or titanium alloy to create screws or plates for osteosynthesis, and titanium bulk for precise engineering of bone tissue. For analytical purposes, biometals can be employed in the magnetic separation of different materials. References Biology and pharmacology of chemical elements Metals
Biometal (biology)
[ "Chemistry", "Biology" ]
2,631
[ "Pharmacology", "Metals", "Properties of chemical elements", "Biology and pharmacology of chemical elements", "Biochemistry" ]
6,327,661
https://en.wikipedia.org/wiki/Technology%20adoption%20life%20cycle
The technology adoption lifecycle is a sociological model that describes the adoption or acceptance of a new product or innovation, according to the demographic and psychological characteristics of defined adopter groups. The process of adoption over time is typically illustrated as a classical normal distribution or "bell curve". The model calls the first group of people to use a new product "innovators", followed by "early adopters". Next come the "early majority" and "late majority", and the last group to eventually adopt a product are called "laggards" or "phobics". For example, a phobic may only use a cloud service when it is the only remaining method of performing a required task, but the phobic may not have an in-depth technical knowledge of how to use the service. The demographic and psychological (or "psychographic") profiles of each adoption group were originally specified by agricultural researchers in 1956: innovators – had larger farms, were more educated, more prosperous and more risk-oriented early adopters – younger, more educated, tended to be community leaders, less prosperous early majority – more conservative but open to new ideas, active in community and influence on neighbors late majority – older, less educated, fairly conservative and less socially active laggards – very conservative, had small farms and capital, oldest and least educated The model has subsequently been adapted for many areas of technology adoption in the late 20th century, for example in the spread of policy innovations among U.S. states. Adaptations of the model The model has spawned a range of adaptations that extend the concept or apply it to specific domains of interest. In his book Crossing the Chasm, Geoffrey Moore proposes a variation of the original lifecycle. He suggests that for discontinuous innovations, which may result in a Foster disruption based on an s-curve, there is a gap or chasm between the first two adopter groups (innovators/early adopters) and the vertical markets. Disruption as it is used today is of the Clayton M. Christensen variety. These disruptions are not s-curve based. In educational technology, Lindy McKeown has provided a similar model (a pencil metaphor) describing the Information and Communications Technology uptake in education. In medical sociology, Carl May has proposed normalization process theory that shows how technologies become embedded and integrated in health care and other kinds of organization. Wenger, White and Smith, in their book Digital habitats: Stewarding technology for communities, talk of technology stewards: people with sufficient understanding of the technology available and the technological needs of a community to steward the community through the technology adoption process. Rayna and Striukova (2009) propose that the choice of initial market segment has crucial importance for crossing the chasm, as adoption in this segment can lead to a cascade of adoption in the other segments. This initial market segment has, at the same time, to contain a large proportion of visionaries, to be small enough for adoption to be observed from within the segment and from other segments, and to be sufficiently connected with other segments. If this is the case, the adoption in the first segment will progressively cascade into the adjacent segments, thereby triggering the adoption by the mass-market. Stephen L. Parente (1995) implemented a Markov chain to model economic growth across different countries given different technological barriers. 
In product marketing, Warren Schirtzinger proposed an expansion of the original lifecycle (the Customer Alignment Lifecycle) which describes the configuration of five different business disciplines that follow the sequence of technology adoption. Examples One way to model product adoption is to understand that people's behaviors are influenced by their peers and how widespread they think a particular action is. For many format-dependent technologies, people have a non-zero payoff for adopting the same technology as their closest friends or colleagues. If two users both adopt product A, they might get a payoff a > 0; if they adopt product B, they get b > 0. But if one adopts A and the other adopts B, they both get a payoff of 0. A threshold can be set for each user to adopt a product. Say that a node v in a graph has d neighbors: then v will adopt product A if the fraction of its neighbors that have already adopted A is greater than or equal to its threshold p. For example, if v's threshold is 2/3, and only one of its two neighbors adopts product A, then v will not adopt A. Using this model, we can deterministically model product adoption on sample networks. History The technology adoption lifecycle is a sociological model that is an extension of an earlier model called the diffusion process, which was originally published in 1956 by George M. Beal and Joe M. Bohlen. This article did not acknowledge the contributions of Beal's Ph.D. student Everett M. Rogers; however, Beal, Bohlen and Rogers soon co-authored a scholarly article on their methodology. This research built on prior work by Neal C. Gross and Bryce Ryan. Rogers generalized the diffusion process to innovations outside the agricultural sector of the midwestern USA, and successfully popularized his generalizations in his widely acclaimed 1962 book Diffusion of Innovations (now in its fifth edition). See also Bass diffusion model Diffusion (business) Hype cycle Lazy user model Neo-Luddism Technology acceptance model Technology lifecycle Varian Rule Notes Demographics Diffusion Innovation economics Product development Product lifecycle management Product management Science and technology studies Sociology of culture Technological change Technology in society Management cybernetics
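The adoption-threshold model described in the Examples section above can be simulated deterministically. The Python sketch below is a minimal illustration; the graph, the initial set of adopters and the 0.5 threshold are arbitrary assumptions chosen only to make a small cascade visible.

from collections import defaultdict

# Undirected example graph (assumed purely for illustration).
edges = [("a", "b"), ("b", "c"), ("c", "d"), ("d", "a"), ("b", "d")]
neighbors = defaultdict(set)
for u, v in edges:
    neighbors[u].add(v)
    neighbors[v].add(u)

adopters = {"a", "b"}   # initial adopters of product A (assumed)
threshold = 0.5         # fraction of neighbors required before a node switches

# Repeatedly let nodes adopt once enough of their neighbors have adopted.
changed = True
while changed:
    changed = False
    for node in list(neighbors):
        if node in adopters:
            continue
        frac = sum(n in adopters for n in neighbors[node]) / len(neighbors[node])
        if frac >= threshold:
            adopters.add(node)
            changed = True

print(sorted(adopters))   # here the adoption of A cascades to every node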
Technology adoption life cycle
[ "Physics", "Chemistry", "Technology" ]
1,129
[ "Transport phenomena", "Physical phenomena", "Diffusion", "Science and technology studies" ]
3,581,788
https://en.wikipedia.org/wiki/Opacity
Opacity is the measure of impenetrability to electromagnetic or other kinds of radiation, especially visible light. In radiative transfer, it describes the absorption and scattering of radiation in a medium, such as a plasma, dielectric, shielding material, glass, etc. An opaque object is neither transparent (allowing all light to pass through) nor translucent (allowing some light to pass through). When light strikes an interface between two substances, in general, some may be reflected, some absorbed, some scattered, and the rest transmitted (also see refraction). Reflection can be diffuse, for example light reflecting off a white wall, or specular, for example light reflecting off a mirror. An opaque substance transmits no light, and therefore reflects, scatters, or absorbs all of it. Other categories of visual appearance, related to the perception of regular or diffuse reflection and transmission of light, have been organized under the concept of cesia in an order system with three variables, including opacity, transparency and translucency among the involved aspects. Both mirrors and carbon black are opaque. Opacity depends on the frequency of the light being considered. For instance, some kinds of glass, while transparent in the visual range, are largely opaque to ultraviolet light. More extreme frequency-dependence is visible in the absorption lines of cold gases. Opacity can be quantified in many ways; for example, see the article mathematical descriptions of opacity. Different processes can lead to opacity, including absorption, reflection, and scattering. Etymology Late Middle English opake, from Latin opacus 'darkened'. The current spelling (rare before the 19th century) has been influenced by the French form. Radiopacity Radiopacity is preferentially used to describe opacity of X-rays. In modern medicine, radiodense substances are those that will not allow X-rays or similar radiation to pass. Radiographic imaging has been revolutionized by radiodense contrast media, which can be passed through the bloodstream, the gastrointestinal tract, or into the cerebral spinal fluid and utilized to highlight CT scan or X-ray images. Radiopacity is one of the key considerations in the design of various devices such as guidewires or stents that are used during radiological intervention. The radiopacity of a given endovascular device is important since it allows the device to be tracked during the interventional procedure. Quantitative definition The words "opacity" and "opaque" are often used as colloquial terms for objects or media with the properties described above. However, there is also a specific, quantitative definition of "opacity", used in astronomy, plasma physics, and other fields, given here. In this use, "opacity" is another term for the mass attenuation coefficient (or, depending on context, mass absorption coefficient, the difference is described here) at a particular frequency of electromagnetic radiation. More specifically, if a beam of light with frequency ν travels through a medium with opacity κ_ν and mass density ρ, both constant, then the intensity will be reduced with distance x according to the formula I(x) = I_0 e^(−κ_ν ρ x), where x is the distance the light has traveled through the medium, I(x) is the intensity of light remaining at distance x, and I_0 is the initial intensity of light, at x = 0. For a given medium at a given frequency, the opacity has a numerical value that may range between 0 and infinity, with units of length²/mass. 
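As a rough numerical illustration of the attenuation law quoted above, the short Python fragment below evaluates I(x) = I_0 exp(−κρx) for assumed values of the opacity, density and path length; the numbers are arbitrary placeholders rather than data for any particular medium.

import math

kappa = 0.4      # mass attenuation coefficient, cm^2/g (assumed)
rho = 1.2e-3     # mass density, g/cm^3 (assumed)
I0 = 1.0         # incident intensity, arbitrary units

for x_cm in (0.0, 100.0, 1000.0):
    intensity = I0 * math.exp(-kappa * rho * x_cm)   # attenuation law I = I0 exp(-kappa*rho*x)
    print(f"x = {x_cm:7.1f} cm   I/I0 = {intensity:.4f}")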
Opacity in air pollution work refers to the percentage of light blocked instead of the attenuation coefficient (aka extinction coefficient) and varies from 0% light blocked to 100% light blocked: opacity = 100% × (1 − I/I_0). Planck and Rosseland opacities It is customary to define the average opacity, calculated using a certain weighting scheme. Planck opacity (also known as Planck-Mean-Absorption-Coefficient) uses the normalized Planck black-body radiation energy density distribution, B_ν(T), as the weighting function, and averages κ_ν directly: κ_Pl = ∫ κ_ν B_ν(T) dν / ∫ B_ν(T) dν = (π / (σ T⁴)) ∫ κ_ν B_ν(T) dν, where σ is the Stefan–Boltzmann constant. Rosseland opacity (after Svein Rosseland), on the other hand, uses a temperature derivative of the Planck distribution, ∂B_ν(T)/∂T, as the weighting function, and averages 1/κ_ν: 1/κ = ∫ (1/κ_ν) (∂B_ν(T)/∂T) dν / ∫ (∂B_ν(T)/∂T) dν. The photon mean free path is λ_ν = 1/(κ_ν ρ). The Rosseland opacity is derived in the diffusion approximation to the radiative transport equation. It is valid whenever the radiation field is isotropic over distances comparable to or less than a radiation mean free path, such as in local thermal equilibrium. In practice, the mean opacity for Thomson electron scattering is κ ≈ 0.20(1 + X) cm²/g, where X is the hydrogen mass fraction. For nonrelativistic thermal bremsstrahlung, or free-free transitions, assuming solar metallicity, it is approximately κ ≈ 0.64 × 10²³ (ρ[g/cm³]) (T[K])^(−7/2) cm²/g, a Kramers-type opacity. The Rosseland mean attenuation coefficient is the corresponding linear coefficient κρ, with units of inverse length. See also Absorption (electromagnetic radiation) Mathematical descriptions of opacity Molar absorptivity Reflection (physics) Gloss (optics) Cesia (visual appearance) Scattering theory Transparency and translucency Kappa mechanism References Electromagnetic radiation Scattering, absorption and radiative transfer (optics) Spectroscopy Glass physics Physical properties Optics
Opacity
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
1,044
[ "Glass engineering and science", "Physical phenomena", " absorption and radiative transfer (optics)", "Applied and interdisciplinary physics", "Optics", "Spectrum (physical sciences)", "Molecular physics", "Electromagnetic radiation", "Instrumental analysis", "Glass physics", "Radiation", "Sca...
3,584,826
https://en.wikipedia.org/wiki/Orthotropic%20material
In material science and solid mechanics, orthotropic materials have material properties at a particular point which differ along three orthogonal axes, where each axis has twofold rotational symmetry. These directional differences in strength can be quantified with Hankinson's equation. They are a subset of anisotropic materials, because their properties change when measured from different directions. A familiar example of an orthotropic material is wood. In wood, one can define three mutually perpendicular directions at each point in which the properties are different. It is most stiff (and strong) along the grain (axial direction), because most cellulose fibrils are aligned that way. It is usually least stiff in the radial direction (between the growth rings), and is intermediate in the circumferential direction. This anisotropy was provided by evolution, as it best enables the tree to remain upright. Because the preferred coordinate system is cylindrical-polar, this type of orthotropy is also called polar orthotropy. Another example of an orthotropic material is sheet metal formed by squeezing thick sections of metal between heavy rollers. This flattens and stretches its grain structure. As a result, the material becomes anisotropic — its properties differ between the direction it was rolled in and each of the two transverse directions. This method is used to advantage in structural steel beams, and in aluminium aircraft skins. If orthotropic properties vary between points inside an object, it possesses both orthotropy and inhomogeneity. This suggests that orthotropy is the property of a point within an object rather than for the object as a whole (unless the object is homogeneous). The associated planes of symmetry are also defined for a small region around a point and do not necessarily have to be identical to the planes of symmetry of the whole object. Orthotropic materials are a subset of anisotropic materials; their properties depend on the direction in which they are measured. Orthotropic materials have three planes/axes of symmetry. An isotropic material, in contrast, has the same properties in every direction. It can be proved that a material having two planes of symmetry must have a third one. Isotropic materials have an infinite number of planes of symmetry. Transversely isotropic materials are special orthotropic materials that have one axis of symmetry (any other pair of axes that are perpendicular to the main one and orthogonal among themselves are also axes of symmetry). One common example of transversely isotropic material with one axis of symmetry is a polymer reinforced by parallel glass or graphite fibers. The strength and stiffness of such a composite material will usually be greater in a direction parallel to the fibers than in the transverse direction, and the thickness direction usually has properties similar to the transverse direction. Another example would be a biological membrane, in which the properties in the plane of the membrane will be different from those in the perpendicular direction. Orthotropic material properties have been shown to provide a more accurate representation of bone's elastic symmetry and can also give information about the three-dimensional directionality of bone's tissue-level material properties. It is important to keep in mind that a material which is anisotropic on one length scale may be isotropic on another (usually larger) length scale. For instance, most metals are polycrystalline with very small grains. 
Each of the individual grains may be anisotropic, but if the material as a whole comprises many randomly oriented grains, then its measured mechanical properties will be an average of the properties over all possible orientations of the individual grains. Orthotropy in physics Anisotropic material relations Material behavior is represented in physical theories by constitutive relations. A large class of physical behaviors can be represented by linear material models that take the form of a second-order tensor. The material tensor provides a relation between two vectors and can be written as f = K·d, where d and f are two vectors representing physical quantities and K is the second-order material tensor. If we express the above equation in terms of components with respect to an orthonormal coordinate system, we can write f_i = K_ij d_j. Summation over repeated indices has been assumed in the above relation. In matrix form we have [f] = [K][d]. Examples of physical problems that fit the above template are listed in the table below. Condition for material symmetry The material matrix K has a symmetry with respect to a given orthogonal transformation (A) if it does not change when subjected to that transformation. For invariance of the material properties under such a transformation we require A·f = K·(A·d), i.e. f = (A^T·K·A)·d. Hence the condition for material symmetry is (using the definition of an orthogonal transformation) K = A^T·K·A. Orthogonal transformations can be represented in Cartesian coordinates by a 3×3 matrix [A] of components A_ij. Therefore, the symmetry condition can be written in matrix form as [K] = [A]^T[K][A]. Orthotropic material properties An orthotropic material has three orthogonal symmetry planes. If we choose an orthonormal coordinate system such that the axes coincide with the normals to the three symmetry planes, the transformation matrices are the three reflections A_1 = diag(−1, 1, 1), A_2 = diag(1, −1, 1) and A_3 = diag(1, 1, −1). It can be shown that if the matrix K for a material is invariant under reflection about two orthogonal planes then it is also invariant under reflection about the third orthogonal plane. Consider the reflection A_3 about the 1–2 plane. Then we have [K] = [A_3]^T[K][A_3]. The above relation implies that K_13 = K_31 = K_23 = K_32 = 0. Next consider a reflection A_2 about the 1–3 plane. We then have [K] = [A_2]^T[K][A_2]. That implies that K_12 = K_21 = 0. Therefore, the material properties of an orthotropic material are described by the diagonal matrix K = diag(K_11, K_22, K_33). Orthotropy in linear elasticity Anisotropic elasticity In linear elasticity, the relation between stress and strain depends on the type of material under consideration. This relation is known as Hooke's law. For anisotropic materials Hooke's law can be written as σ = C : ε, where σ is the stress tensor, ε is the strain tensor, and C is the elastic stiffness tensor. If the tensors in the above expression are described in terms of components with respect to an orthonormal coordinate system we can write σ_ij = C_ijkl ε_kl, where summation has been assumed over repeated indices. Since the stress and strain tensors are symmetric, and since the stress-strain relation in linear elasticity can be derived from a strain energy density function, the following symmetries hold for linear elastic materials: C_ijkl = C_jikl, C_ijkl = C_ijlk and C_ijkl = C_klij. Because of the above symmetries, the stress-strain relation for linear elastic materials can be expressed in matrix form; an alternative representation in Voigt notation writes the six independent stress components and the six independent strain components as column vectors related by a symmetric 6×6 stiffness matrix. The stiffness matrix in the above relation satisfies point symmetry. Condition for material symmetry The stiffness matrix satisfies a given symmetry condition if it does not change when subjected to the corresponding orthogonal transformation. The orthogonal transformation may represent symmetry with respect to a point, an axis, or a plane. 
Orthogonal transformations in linear elasticity include rotations and reflections, but not shape changing transformations and can be represented, in orthonormal coordinates, by a matrix given by In Voigt notation, the transformation matrix for the stress tensor can be expressed as a matrix given by The transformation for the strain tensor has a slightly different form because of the choice of notation. This transformation matrix is It can be shown that . The elastic properties of a continuum are invariant under an orthogonal transformation if and only if Stiffness and compliance matrices in orthotropic elasticity An orthotropic elastic material has three orthogonal symmetry planes. If we choose an orthonormal coordinate system such that the axes coincide with the normals to the three symmetry planes, the transformation matrices are We can show that if the matrix for a linear elastic material is invariant under reflection about two orthogonal planes then it is also invariant under reflection about the third orthogonal plane. If we consider the reflection about the plane, then we have Then the requirement implies that The above requirement can be satisfied only if Let us next consider the reflection about the plane. In that case Using the invariance condition again, we get the additional requirement that No further information can be obtained because the reflection about third symmetry plane is not independent of reflections about the planes that we have already considered. Therefore, the stiffness matrix of an orthotropic linear elastic material can be written as The inverse of this matrix is commonly written as where is the Young's modulus along axis , is the shear modulus in direction on the plane whose normal is in direction , and is the Poisson's ratio that corresponds to a contraction in direction when an extension is applied in direction . Bounds on the moduli of orthotropic elastic materials The strain-stress relation for orthotropic linear elastic materials can be written in Voigt notation as where the compliance matrix is given by The compliance matrix is symmetric and must be positive definite for the strain energy density to be positive. This implies from Sylvester's criterion that all the principal minors of the matrix are positive, i.e., where is the principal submatrix of . Then, We can show that this set of conditions implies that or However, no similar lower bounds can be placed on the values of the Poisson's ratios . See also Anisotropy Stress (mechanics) Infinitesimal strain theory Finite strain theory Hooke's law References Further reading Orthotropy modeling equations from OOFEM Matlib manual section. Hooke's law for orthotropic materials Continuum mechanics Elasticity (physics) models Materials
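The compliance relation discussed above can be stated compactly in terms of the engineering constants. The LaTeX below gives the form in which it is commonly written in Voigt notation; it is a standard textbook sketch, and index or sign conventions may differ slightly between references.

\begin{bmatrix} \epsilon_{11}\\ \epsilon_{22}\\ \epsilon_{33}\\ 2\epsilon_{23}\\ 2\epsilon_{31}\\ 2\epsilon_{12} \end{bmatrix}
=
\begin{bmatrix}
1/E_1 & -\nu_{21}/E_2 & -\nu_{31}/E_3 & 0 & 0 & 0 \\
-\nu_{12}/E_1 & 1/E_2 & -\nu_{32}/E_3 & 0 & 0 & 0 \\
-\nu_{13}/E_1 & -\nu_{23}/E_2 & 1/E_3 & 0 & 0 & 0 \\
0 & 0 & 0 & 1/G_{23} & 0 & 0 \\
0 & 0 & 0 & 0 & 1/G_{31} & 0 \\
0 & 0 & 0 & 0 & 0 & 1/G_{12}
\end{bmatrix}
\begin{bmatrix} \sigma_{11}\\ \sigma_{22}\\ \sigma_{33}\\ \sigma_{23}\\ \sigma_{31}\\ \sigma_{12} \end{bmatrix},
\qquad
\frac{\nu_{ij}}{E_i} = \frac{\nu_{ji}}{E_j}.

Symmetry of the compliance matrix gives the reciprocal relation on the right, which is why only nine of the twelve engineering constants are independent.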
Orthotropic material
[ "Physics", "Mathematics" ]
1,911
[ "Continuum mechanics", "Classical mechanics", "Materials", "Topology", "Space", "Geometry", "Spacetime", "Orientation (geometry)", "Matter" ]
3,587,470
https://en.wikipedia.org/wiki/Effective%20stress
The effective stress can be defined as the stress, depending on the applied tension σ and pore pressure u, which controls the strain or strength behaviour of soil and rock (or a generic porous body) for whatever pore pressure value or, in other terms, the stress which applied over a dry porous body (i.e. at u = 0) provides the same strain or strength behaviour which is observed at u ≠ 0. In the case of granular media it can be viewed as a force that keeps a collection of particles rigid. Usually this applies to sand, soil, or gravel, as well as every kind of rock and several other porous materials such as concrete, metal powders, biological tissues etc. The usefulness of an appropriate ESP formulation consists in allowing one to assess the behaviour of a porous body for whatever pore pressure value on the basis of experiments involving dry samples (i.e. carried out at zero pore pressure). History Karl von Terzaghi first proposed the relationship for effective stress in 1925. For him, the term "effective" meant the calculated stress that was effective in moving soil, or causing displacements. It has often been interpreted as the average stress carried by the soil skeleton. Afterwards, different formulations have been proposed for the effective stress. Maurice Biot fully developed the three-dimensional soil consolidation theory, extending the one-dimensional model previously developed by Terzaghi to more general hypotheses and introducing the set of basic equations of poroelasticity. In his work in 1960, Alec Skempton carried out an extensive review of available formulations and experimental data in the literature about effective stress valid in soil, concrete and rock, in order to reject some of these expressions, as well as clarify what expression was appropriate according to several work hypotheses, such as stress–strain or strength behaviour, saturated or nonsaturated media, rock/concrete or soil behaviour, etc. In 1962, work by Jeremiah Jennings and John Burland examined the applicability of Terzaghi’s effective stress principle to partly saturated soils. Through a series of experiments undertaken at the University of the Witwatersrand, including oedometer and compression tests on various soil types, they showed that behaviours such as volume changes and shear strength in partly saturated soils do not align with predictions based on effective stress changes alone. Their findings showed that the structural changes due to pressure deficiencies behave differently from changes due to applied stress. Description Effective stress (σ') acting on a soil is calculated from two parameters, total stress (σ) and pore water pressure (u), according to σ' = σ − u. Typically, for simple examples, the total stress is taken from the weight of soil and water above the point considered and the pore water pressure from the hydrostatic head at that depth. Much like the concept of stress itself, the formula is a construct, for the easier visualization of forces acting on a soil mass, especially simple analysis models for slope stability, involving a slip plane. With these models, it is important to know the total weight of the soil above (including water), and the pore water pressure within the slip plane, assuming it is acting as a confined layer. However, the formula becomes confusing when considering the true behaviour of the soil particles under different measurable conditions, since none of the parameters are actually independent actors on the particles. Consider a grouping of round quartz sand grains, piled loosely, in a classic "cannonball" arrangement. As can be seen, there is a contact stress where the spheres actually touch. 
Pile on more spheres and the contact stresses increase, to the point of causing frictional instability (dynamic friction), and perhaps failure. The independent parameter affecting the contacts (both normal and shear) is the force of the spheres above. This can be calculated by using the overall average density of the spheres and the height of spheres above. If we then have these spheres in a beaker and add some water, they will begin to float a little depending on their density (buoyancy). With natural soil materials, the effect can be significant, as anyone who has lifted a large rock out of a lake can attest. The contact stress on the spheres decreases as the beaker is filled to the top of the spheres, but then nothing changes if more water is added. Although the water pressure between the spheres (pore water pressure) is increasing, the effective stress remains the same, because the concept of "total stress" includes the weight of all the water above. This is where the equation can become confusing, and the effective stress can be calculated using the buoyant density of the spheres (soil), and the height of the soil above. The concept of effective stress truly becomes interesting when dealing with non-hydrostatic pore water pressure. Under the conditions of a pore pressure gradient, the ground water flows, according to the permeability equation (Darcy's law). Using our spheres as a model, this is the same as injecting (or withdrawing) water between the spheres. If water is being injected, the seepage force acts to separate the spheres and reduces the effective stress. Thus, the soil mass becomes weaker. If water is being withdrawn, the spheres are forced together and the effective stress increases. Two extremes of this effect are quicksand, where the groundwater gradient and seepage force act against gravity; and the "sandcastle effect", where the water drainage and capillary action act to strengthen the sand. As well, effective stress plays an important role in slope stability, and other geotechnical engineering and engineering geology problems, such as groundwater-related subsidence. References Terzaghi, K. (1925). Principles of Soil Mechanics. Engineering News-Record, 95(19-27). Soil mechanics
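A minimal worked example of Terzaghi's relation σ' = σ − u for a uniform, fully saturated soil column is sketched below in Python; the saturated unit weight and the water table position are assumed round numbers chosen purely for illustration.

gamma_sat = 20.0          # saturated unit weight of the soil, kN/m^3 (assumed)
gamma_w = 9.81            # unit weight of water, kN/m^3
water_table_depth = 0.0   # water table at the ground surface (assumed)

def effective_stress(depth_m):
    total = gamma_sat * depth_m                             # total vertical stress, kPa
    pore = gamma_w * max(depth_m - water_table_depth, 0.0)  # hydrostatic pore pressure, kPa
    return total - pore                                     # sigma' = sigma - u

for z in (1.0, 5.0, 10.0):
    print(f"depth {z:4.1f} m: effective stress = {effective_stress(z):6.1f} kPa")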
Effective stress
[ "Physics" ]
1,149
[ "Soil mechanics", "Applied and interdisciplinary physics" ]
3,587,477
https://en.wikipedia.org/wiki/4Pi%20microscope
A 4Pi microscope is a laser scanning fluorescence microscope with an improved axial resolution. With it the typical range of the axial resolution of 500–700 nm can be improved to 100–150 nm, which corresponds to an almost spherical focal spot with 5–7 times less volume than that of standard confocal microscopy. Working principle The improvement in resolution is achieved by using two opposing objective lenses, which both are focused to the same geometrical location. Also the difference in optical path length through each of the two objective lenses is carefully aligned to be minimal. By this method, molecules residing in the common focal area of both objectives can be illuminated coherently from both sides and the reflected or emitted light can also be collected coherently, i.e. coherent superposition of emitted light on the detector is possible. The solid angle that is used for illumination and detection is increased and approaches its maximum. In this case the sample is illuminated and detected from all sides simultaneously. The operation mode of a 4Pi microscope is shown in the figure. The laser light is divided by a beam splitter and directed by mirrors towards the two opposing objective lenses. At the common focal point superposition of both focused light beams occurs. Excited molecules at this position emit fluorescence light, which is collected by both objective lenses, combined by the same beam splitter and deflected by a dichroic mirror onto a detector. There, superposition of both emitted light pathways can take place again. In the ideal case each objective lens can collect light from a solid angle of 2π. With two objective lenses one can collect from every direction (solid angle 4π). The name of this type of microscopy is derived from the maximal possible solid angle for excitation and detection. Practically, one can achieve only aperture angles of about 140° for an objective lens, which corresponds to a solid angle of about 1.3π. The microscope can be operated in three different ways: In a 4Pi microscope of type A, the coherent superposition of excitation light is used to generate the increased resolution. The emission light is either detected from one side only or in an incoherent superposition from both sides. In a 4Pi microscope of type B, only the emission light is interfering. When operated in the type C mode, both excitation and emission light are allowed to interfere, leading to the highest possible resolution increase (~7-fold along the optical axis as compared to confocal microscopy). In a real 4Pi microscope light cannot be applied or collected from all directions equally, leading to so-called side lobes in the point spread function. Typically (but not always) two-photon excitation microscopy is used in a 4Pi microscope in combination with an emission pinhole to lower these side lobes to a tolerable level. History In 1971, Christoph Cremer and Thomas Cremer proposed the creation of a perfect hologram, i.e. one that carries the whole field information of the emission of a point source in all directions, a so-called 4π hologram. However, the publication from 1978 had drawn an improper physical conclusion (i.e. a point-like spot of light) and had completely missed the axial resolution increase as the actual benefit of adding the other side of the solid angle. The first description of a practicable system of 4Pi microscopy, i.e. the setup with two opposing, interfering lenses, was given by Stefan Hell in 1991. He demonstrated it experimentally in 1994. 
In the following years, the number of applications for this microscope has grown. For example, parallel excitation and detection with 64 spots in the sample simultaneously combined with the improved spatial resolution resulted in the successful recording of the dynamics of mitochondria in yeast cells with a 4Pi microscope in 2002. A commercial version was launched by microscope manufacturer Leica Microsystems in 2004 and later discontinued. Up to now, the best quality in a 4Pi microscope was reached in conjunction with super-resolution techniques like the stimulated emission depletion (STED) principle. Using a 4Pi microscope with appropriate excitation and de-excitation beams, it was possible to create a uniformly 50 nm sized spot, which corresponds to a decreased focal volume compared to confocal microscopy by a factor of 150–200 in fixed cells. With the combination of 4Pi microscopy and RESOLFT microscopy with switchable proteins, it is now possible to take images of living cells at low light levels with isotropic resolutions below 40 nm. See also Stimulated emission depletion microscope (STED) Multifocal plane microscopy (MUM) References Microscopes Fluorescence Cell imaging Optical microscopy techniques
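The solid-angle figures quoted in the working-principle section can be checked with a short calculation. The Python fragment below assumes the 140° practical aperture angle mentioned above and computes the corresponding collection solid angle per objective and for the opposing pair.

import math

full_aperture_deg = 140.0                 # practical full aperture angle of one objective (from the text)
half_angle = math.radians(full_aperture_deg / 2)
omega_one = 2 * math.pi * (1 - math.cos(half_angle))   # solid angle of a cone with that half-angle
omega_pair = 2 * omega_one                             # two opposing objectives

print(f"one objective:  {omega_one / math.pi:.2f} * pi sr")
print(f"opposing pair:  {omega_pair / math.pi:.2f} * pi sr of the full 4*pi")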
4Pi microscope
[ "Chemistry", "Technology", "Engineering", "Biology" ]
939
[ "Luminescence", "Fluorescence", "Measuring instruments", "Microscopes", "Microscopy", "Cell imaging" ]
3,587,852
https://en.wikipedia.org/wiki/Scaffold%20protein
In biology, scaffold proteins are crucial regulators of many key signalling pathways. Although scaffolds are not strictly defined in function, they are known to interact and/or bind with multiple members of a signalling pathway, tethering them into complexes. In such pathways, they regulate signal transduction and help localize pathway components (organized in complexes) to specific areas of the cell such as the plasma membrane, the cytoplasm, the nucleus, the Golgi, endosomes, and the mitochondria. History The first signaling scaffold protein discovered was the Ste5 protein from the yeast Saccharomyces cerevisiae. Three distinct domains of Ste5 were shown to associate with the protein kinases Ste11, Ste7, and Fus3 to form a multikinase complex. Function Scaffold proteins act in at least four ways: tethering signaling components, localizing these components to specific areas of the cell, regulating signal transduction by coordinating positive and negative feedback signals, and insulating correct signaling proteins from competing proteins. Tethering signaling components This particular function is considered a scaffold's most basic function. Scaffolds assemble signaling components of a cascade into complexes. This assembly may be able to enhance signaling specificity by preventing unnecessary interactions between signaling proteins, and enhance signaling efficiency by increasing the proximity and effective concentration of components in the scaffold complex. A common example of how scaffolds enhance specificity is a scaffold that binds a protein kinase and its substrate, thereby ensuring specific kinase phosphorylation. Additionally, some signaling proteins require multiple interactions for activation and scaffold tethering may be able to convert these interactions into one interaction that results in multiple modifications. Scaffolds may also be catalytic as interaction with signaling proteins may result in allosteric changes of these signaling components. Such changes may be able to enhance or inhibit the activation of these signaling proteins. An example is the Ste5 scaffold in the mitogen-activated protein kinase (MAPK) pathway. Ste5 has been proposed to direct mating signaling through the Fus3 MAPK by catalytically unlocking this particular kinase for activation by its MAPKK Ste7. Localization of signaling components in the cell Scaffolds localize the signaling reaction to a specific area in the cell, a process that could be important for the local production of signaling intermediates. A particular example of this process is the scaffold, A-kinase anchor proteins (AKAPs), which target cyclic AMP-dependent protein kinase (PKA) to various sites in the cell. This localization is able to locally regulate PKA and results in the local phosphorylation by PKA of its substrates. Coordinating positive and negative feedback Many hypotheses about how scaffolds coordinate positive and negative feedback come from engineered scaffolds and mathematical modeling. In three-kinase signaling cascades, scaffolds bind all three kinases, enhancing kinase specificity and restricting signal amplification by limiting kinase phosphorylation to only one downstream target. These abilities may be related to stability of the interaction between the scaffold and the kinases, the basal phosphatase activity in the cell, scaffold location, and expression levels of the signaling components. 
Insulating correct signaling proteins from inactivation Signaling pathways are often inactivated by enzymes that reverse the activation state and/or induce the degradation of signaling components. Scaffolds have been proposed to protect activated signaling molecules from inactivation and/or degradation. Mathematical modeling has shown that kinases in a cascade without scaffolds have a higher probability of being dephosphorylated by phosphatases before they are even able to phosphorylate downstream targets. Furthermore, scaffolds have been shown to insulate kinases from substrate- and ATP-competitive inhibitors. Scaffold protein summary Huntingtin protein Huntingtin protein co-localizes with the ATM repair protein at sites of DNA damage. Huntingtin is a scaffolding protein in the ATM oxidative DNA damage response complex. Huntington's disease patients with aberrant huntingtin protein are deficient in repair of oxidative DNA damage. Oxidative DNA damage appears to underlie Huntington's disease pathogenesis. Huntington's disease is likely caused by the dysfunction of the mutant huntingtin scaffold protein in DNA repair, leading to increased oxidative DNA damage in metabolically active cells. DNA repair SPIDR (scaffold protein involved in DNA repair) regulates the stability or assembly of RAD51 and DMC1 on single-stranded DNA. RAD51 and DMC1 are recombinases that act during mammalian meiosis to mediate strand exchange during the repair of DNA double-strand breaks by homologous recombination. Other usage of the term Scaffold Protein In some other instances in biology (not necessarily related to cell signaling), the term "scaffold protein" is used in a broader sense, where a protein holds several things together for any purpose. In chromosome folding The chromosome scaffold has an important role in holding the chromatin in a compact chromosome. The chromosome scaffold is made of proteins including condensin, topoisomerase IIα and kinesin family member 4 (KIF4). These constituent proteins of the chromosome scaffold are also called scaffold proteins. In enzymatic reactions Large multifunctional enzymes that perform a series or chain of reactions in a common pathway, such as pyruvate dehydrogenase, are sometimes called scaffold proteins. In molecule shape formation An enzyme or structural protein that holds several molecules together in their proper spatial arrangement, such as iron-sulphur cluster scaffold proteins, may also be called a scaffold protein. Structural scaffold In the cytoskeleton and ECM, some molecules provide a mechanical scaffold, such as type IV collagen. References Cell biology Signal transduction Cell signaling
Scaffold protein
[ "Chemistry", "Biology" ]
1,228
[ "Biochemistry", "Neurochemistry", "Cell biology", "Signal transduction" ]
1,272,080
https://en.wikipedia.org/wiki/Post-metallocene%20catalyst
A post-metallocene catalyst is a kind of catalyst for the polymerization of olefins, i.e., the industrial production of some of the most common plastics. "Post-metallocene" refers to a class of homogeneous catalysts that are not metallocenes. This area has attracted much attention because the market for polyethylene, polypropylene, and related copolymers is large. There is correspondingly intense interest in new processes, as indicated by the fact that, in the US alone, 50,000 patents were issued between 1991 and 2007 on polyethylene and polypropylene. Many methods exist to polymerize alkenes, including the traditional routes using the Phillips catalyst and traditional heterogeneous Ziegler-Natta catalysts, which are still used to produce the bulk of polyethylene. Catalysts based on early transition metals Homogeneous metallocene catalysts, e.g., those derived from or related to zirconocene dichloride, introduced a level of microstructural control that was unavailable with heterogeneous systems. Metallocene catalysts are homogeneous single-site systems, implying that a uniform catalyst is present in the solution. In contrast, commercially important Ziegler-Natta heterogeneous catalysts contain a distribution of catalytic sites. The catalytic properties of single-site catalysts can be controlled by modification of the ligand. Initially, ligand modifications focused on various cyclopentadienyl derivatives, but great diversity was uncovered through high-throughput screening. These post-metallocene catalysts employ a range of chelating ligands, often including pyridine and amido (R2N−). These ligands are available in great diversity with respect to their steric and electronic properties. Such post-metallocene catalysts enabled the introduction of chain shuttling polymerization. Catalysts based on late transition metals The copolymerization of ethylene with polar monomers has been heavily studied. The high oxophilicity of the early metals precluded their use in this application. Efforts to copolymerize polar comonomers led to catalysts based upon nickel and palladium, inspired by the success of the Shell Higher Olefin Process. Typical post-metallocene catalysts feature bulky, neutral, alpha-diimine ligands. DuPont commercialized the Versipol olefin polymerization system. Eastman commercialized the related Gavilan technology. These complexes catalyze the homopolymerization of ethylene to a variety of structures that range from high-density polyethylene through hydrocarbon plastomers and elastomers by a mechanism referred to as “chain-walking”. By modifying the bulk of the alpha-diimine, the product distribution of these systems can be 'tuned' to consist of hydrocarbon oils (alpha-olefins), similar to those produced by more traditional nickel(II) oligomerization/polymerization catalysts. As opposed to metallocenes, they can also randomly copolymerize ethylene with polar comonomers such as methyl acrylate. A second class of catalysts features mono-anionic bidentate ligands related to salen ligands. The concept of bulky bis-imine ligands was extended to iron complexes; representative catalysts feature diiminopyridine ligands. These catalysts are highly active but do not promote chain walking. They give very linear high-density polyethylene when the ligands are bulky; when the steric bulk is removed, they are very active for ethylene oligomerization to linear alpha-olefins. A salicylimine catalyst system based on zirconium exhibits high activity for ethylene polymerization. The catalysts can also produce some novel polypropylene structures. 
Despite intensive efforts, few catalysts have been successfully commercialized for the copolymerization of polar monomers. References Catalysts Coordination chemistry Polymer chemistry
Post-metallocene catalyst
[ "Chemistry", "Materials_science", "Engineering" ]
829
[ "Catalysis", "Catalysts", "Coordination chemistry", "Materials science", "Polymer chemistry", "Chemical kinetics" ]
1,272,743
https://en.wikipedia.org/wiki/Volume%20of%20distribution
In pharmacology, the volume of distribution (VD, also known as apparent volume of distribution, literally, volume of dilution) is the theoretical volume that would be necessary to contain the total amount of an administered drug at the same concentration at which it is observed in the blood plasma. In other words, it is the ratio of the amount of drug in the body (dose) to the concentration of the drug measured in blood, plasma, or unbound in interstitial fluid. The VD of a drug represents the degree to which a drug is distributed in body tissue rather than the plasma. VD is directly proportional to the amount of drug distributed into tissue; a higher VD indicates a greater amount of tissue distribution. A VD greater than the total volume of body water (approximately 42 liters in humans) is possible, and would indicate that the drug is highly distributed into tissue. In other words, the volume of distribution is smaller for a drug that stays in the plasma than for a drug that is widely distributed in tissues. In rough terms, drugs with a high lipid solubility (non-polar drugs), low rates of ionization, or low plasma protein binding capabilities have higher volumes of distribution than drugs which are more polar, more highly ionized or exhibit high plasma protein binding in the body's environment. Volume of distribution may be increased by kidney failure (due to fluid retention) and liver failure (due to altered body fluid and plasma protein binding). Conversely, it may be decreased in dehydration. The initial volume of distribution describes blood concentrations prior to attaining the apparent volume of distribution and uses the same formula. Equations The volume of distribution is given by the following equation: VD = (total amount of drug in the body) / (plasma concentration of the drug). Therefore, the dose required to give a certain plasma concentration can be determined if the VD for that drug is known. The VD is not a physiological value; it is more a reflection of how a drug will distribute throughout the body depending on several physicochemical properties, e.g. solubility, charge, size, etc. The volume of distribution is typically reported in litres. As body composition changes with age, VD decreases. The VD may also be used to determine how readily a drug will displace into the body tissue compartments relative to the blood: VD = VP + VT × (fu / fuT), where: VP = plasma volume, VT = apparent tissue volume, fu = fraction unbound in plasma, fuT = fraction unbound in tissue. Examples If a dose D of a drug is administered intravenously in one go (IV bolus), one would naturally expect it to have an immediate blood concentration that directly corresponds to the amount of blood contained in the body. Mathematically this would be C = D / Vblood, where Vblood is the volume of blood in the body. But this is generally not what happens. Instead, the drug is observed to have distributed out into some other volume (read: organs/tissue). So the first question to ask is: how much of the drug is no longer in the bloodstream? The volume of distribution quantifies just that by specifying how big a volume would be needed in order to observe the blood concentration actually measured. An example for a simple (mono-compartmental) case would be to administer D = 8 mg/kg to a human. A human has a blood volume of around 0.08 L/kg. This gives a blood concentration of 100 μg/mL if the drug stays in the bloodstream only, and thus its volume of distribution equals the blood volume, 0.08 L/kg. 
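A minimal Python sketch of this calculation (the function and variable names are illustrative only, not taken from any pharmacokinetics library) reproduces the worked example just given:

def volume_of_distribution(dose_mg_per_kg, plasma_conc_ug_per_ml):
    # Apparent volume of distribution in L/kg.
    # 1 ug/mL equals 1 mg/L, so dose (mg/kg) divided by concentration (mg/L)
    # yields a volume per unit body weight (L/kg).
    plasma_conc_mg_per_l = plasma_conc_ug_per_ml
    return dose_mg_per_kg / plasma_conc_mg_per_l

# Worked example from the text: an 8 mg/kg IV bolus measured at 100 ug/mL
# gives VD = 0.08 L/kg, i.e. the drug has stayed within the blood volume.
print(volume_of_distribution(8.0, 100.0))  # 0.08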
If the drug distributes into all body water, the volume of distribution would increase to approximately 0.57 L/kg. If the drug readily diffuses into body fat, the volume of distribution may increase dramatically; an example is chloroquine, which has a volume of distribution of 250-302 L/kg. In the simple mono-compartmental case the volume of distribution is defined as VD = D / C0, where C0 in practice is an extrapolated concentration at time = 0 from the first early plasma concentrations after an IV-bolus administration (generally taken around 5 min - 30 min after giving the drug). References External links Tutorial on volume of distribution Overview at icp.org.nz Overview at cornell.edu Overview at stanford.edu Overview at boomer.org Pharmacokinetics
Volume of distribution
[ "Chemistry" ]
866
[ "Pharmacology", "Pharmacokinetics" ]
1,273,491
https://en.wikipedia.org/wiki/Exponential%20stability
In control theory, a continuous linear time-invariant system (LTI) is exponentially stable if and only if the system has eigenvalues (i.e., the poles of input-to-output systems) with strictly negative real parts (i.e., in the left half of the complex plane). A discrete-time input-to-output LTI system is exponentially stable if and only if the poles of its transfer function lie strictly within the unit circle centered on the origin of the complex plane. Systems that are not LTI are exponentially stable if their convergence is bounded by exponential decay. Exponential stability is a form of asymptotic stability, valid for more general dynamical systems. Practical consequences An exponentially stable LTI system is one that will not "blow up" (i.e., give an unbounded output) when given a finite input or non-zero initial condition. Moreover, if the system is given a fixed, finite input (i.e., a step), then any resulting oscillations in the output will decay at an exponential rate, and the output will tend asymptotically to a new final, steady-state value. If the system is instead given a Dirac delta impulse as input, then induced oscillations will die away and the system will return to its previous value. If oscillations do not die away, or the system does not return to its original output when an impulse is applied, the system is instead marginally stable. Example exponentially stable LTI systems The impulse responses of two similar exponentially stable systems illustrate this behavior: although one response is oscillatory, both return to the original value of 0 over time. Real-world example Imagine putting a marble in a ladle. It will settle itself into the lowest point of the ladle and, unless disturbed, will stay there. Now imagine giving the marble a push, which is an approximation to a Dirac delta impulse. The marble will roll back and forth but eventually resettle in the bottom of the ladle. Drawing the horizontal position of the marble over time would give a gradually diminishing sinusoid, rather like the oscillatory impulse response described above. A step input in this case requires supporting the marble away from the bottom of the ladle, so that it cannot roll back. It will stay in the same position and will not, as would be the case if the system were only marginally stable or entirely unstable, continue to move away from the bottom of the ladle under this constant force equal to its weight. It is important to note that in this example the system is not stable for all inputs. Give the marble a big enough push, and it will fall out of the ladle, stopping only when it reaches the floor. For some systems, therefore, it is proper to state that a system is exponentially stable over a certain range of inputs. See also Marginal stability Control theory State space (controls) References External links Parameter estimation and asymptotic stability in stochastic filtering, Anastasia Papavasiliou, September 28, 2004 Dynamical systems Stability theory
Exponential stability
[ "Physics", "Mathematics" ]
686
[ "Stability theory", "Mechanics", "Dynamical systems" ]
1,274,368
https://en.wikipedia.org/wiki/EGR1
EGR-1 (Early growth response protein 1) or NGFI-A (nerve growth factor-induced protein A) is a protein that in humans is encoded by the EGR1 gene. EGR-1 is a mammalian transcription factor. It was also named Krox-24, TIS8, and ZENK. It was originally discovered in mice. Function The protein encoded by this gene belongs to the EGR family of Cys2His2-type zinc finger proteins. It is a nuclear protein and functions as a transcriptional regulator. The products of target genes it activates are required for differentiation and mitogenesis. Studies suggest this is a tumor suppressor gene. It has a distinct pattern of expression in the brain, and its induction has been shown to be associated with neuronal activity. Several studies suggest it has a role in neuronal plasticity. EGR-1 is an important transcription factor in memory formation. It has an essential role in brain neuron epigenetic reprogramming. EGR-1 recruits the TET1 protein that initiates a pathway of DNA demethylation. Removing DNA methylation marks allows the activation of downstream genes. EGR-1, together with TET1, is employed in programming the distribution of methylation sites on brain DNA during brain development, in learning and in long-term neuronal plasticity. EGR-1 has also been found to regulate the expression of VAMP2 (a protein important for synaptic exocytosis). Besides its function in the nervous system, there is significant evidence that EGR-1, along with its paralog EGR-2, is induced in fibrotic diseases, has key functions in fibrogenesis, and is necessary for experimentally induced fibrosis in mice. It may also be involved in ovarian function. Structure The DNA-binding domain of EGR-1 consists of three zinc finger domains of the Cys2His2 type. The amino acid structure of the EGR-1 zinc finger domain is given in this table, using the single letter amino acid code. The fingers 1 to 3 are indicated by f1 - f3. The numbers are in reference to the residues (amino acids) of the alpha helix (there is no zero). The residues marked 'x' are not part of the zinc fingers, but rather serve to connect them all together. Amino acid key: Alanine (Ala, A), Arginine (Arg, R), Asparagine (Asn, N), Aspartic acid (Asp, D), Cysteine (Cys, C), Glutamic acid (Glu, E), Glutamine (Gln, Q), Glycine (Gly, G), Histidine (His, H), Isoleucine (Ile, I), Leucine (Leu, L), Lysine (Lys, K), Methionine (Met, M), Phenylalanine (Phe, F), Proline (Pro, P), Serine (Ser, S), Threonine (Thr, T), Tryptophan (Trp, W), Tyrosine (Tyr, Y), Valine (Val, V) The crystal structure of DNA bound by the zinc finger domain of EGR-1 was solved in 1991, which greatly aided early research in zinc finger DNA-binding domains. The human EGR-1 protein contains (in its unprocessed form) 543 amino acids with a molecular weight of 57.5 kDa, and the gene is located on chromosome 5. DNA binding specificity EGR-1 binds the DNA sequence 5'-GCG TGG GCG-3' (and similar ones like 5'-GCG GGG GCG-3'). The f1 position 6 binds the 5' G (the first base counting from the left); f1 position 3 binds to the second base (C); f1 position -1 binds to the third position (G); f2 position 6 binds to the fourth base (T); and so on. Interactions EGR-1 has been shown to interact with: CEBPB, CREB-binding protein, EP300, NAB1, P53, and PSMA3 See also Zinc finger References Further reading External links Molecular neuroscience Transcription factors Zinc proteins
EGR1
[ "Chemistry", "Biology" ]
929
[ "Gene expression", "Signal transduction", "Molecular neuroscience", "Induced stem cells", "Molecular biology", "Transcription factors" ]
26,759,187
https://en.wikipedia.org/wiki/Summation%20equation
In mathematics, a summation equation or discrete integral equation is an equation in which an unknown function appears under a summation sign. The theories of summation equations and integral equations can be unified as integral equations on time scales using time scale calculus. A summation equation compares to a difference equation as an integral equation compares to a differential equation. The Volterra summation equation is x(t) = f(t) + Σ_{s=t_0}^{t} k(t, s, x(s)), where x is the unknown function, s and t are integers, and f and k are known functions. References Summation equations or discrete integral equations Integral equations
Summation equation
[ "Mathematics" ]
103
[ "Mathematical analysis", "Integral equations", "Mathematical analysis stubs", "Mathematical objects", "Equations" ]
26,760,516
https://en.wikipedia.org/wiki/Data%20stream%20management%20system
A data stream management system (DSMS) is a computer software system to manage continuous data streams. It is similar to a database management system (DBMS), which is, however, designed for static data in conventional databases. A DBMS also offers flexible query processing so that the information needed can be expressed using queries. However, in contrast to a DBMS, a DSMS executes a continuous query that is not only performed once, but is permanently installed. Therefore, the query is continuously executed until it is explicitly uninstalled. Since most DSMS are data-driven, a continuous query produces new results as long as new data arrive at the system. This basic concept is similar to Complex event processing, so that the two technologies partially overlap. Functional principle One important feature of a DSMS is the ability to handle potentially infinite and rapidly changing data streams while offering flexible processing, even though there are only limited resources such as main memory. The following table provides various principles of DSMS and compares them to traditional DBMS. Processing and streaming models One of the biggest challenges for a DSMS is to handle potentially infinite data streams using a fixed amount of memory and no random access to the data. There are different approaches to limit the amount of data in one pass, which can be divided into two classes. On the one hand, there are compression techniques that try to summarize the data, and on the other hand, there are window techniques that try to portion the data into (finite) parts. Synopses The idea behind compression techniques is to maintain only a synopsis of the data, but not all (raw) data points of the data stream. The algorithms range from selecting random data points, called sampling, to summarization using histograms, wavelets or sketching. One simple example of a compression is the continuous calculation of an average. Instead of memorizing each data point, the synopsis only holds the sum and the number of items. The average can be calculated by dividing the sum by the number. However, it should be mentioned that synopses cannot reflect the data accurately. Thus, processing that is based on synopses may produce inaccurate results. Windows Instead of using synopses to compress the characteristics of the whole data stream, window techniques only look at a portion of the data. This approach is motivated by the idea that only the most recent data are relevant. Therefore, a window continuously cuts out a part of the data stream, e.g. the last ten data stream elements, and only considers these elements during the processing. There are different kinds of such windows, like sliding windows that are similar to FIFO lists, or tumbling windows that cut out disjoint parts. Furthermore, the windows can also be differentiated into element-based windows, e.g., to consider the last ten elements, or time-based windows, e.g., to consider the last ten seconds of data. There are also different approaches to implementing windows. There are, for example, approaches that use timestamps or time intervals for system-wide windows or buffer-based windows for each single processing step. Sliding-window query processing is also suitable for implementation on parallel processors by exploiting parallelism between different windows and/or within each window extent. 
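As a concrete illustration of the two approaches described above, the following short Python sketch (purely illustrative and not tied to any particular DSMS) contrasts a running-average synopsis, which summarizes the whole stream with constant memory, with an element-based sliding window, which only considers the most recent elements:

from collections import deque

def running_average(stream):
    # Synopsis: keep only the running sum and the item count,
    # not the raw data points themselves.
    total, count = 0.0, 0
    for value in stream:
        total, count = total + value, count + 1
        yield total / count

def sliding_window_average(stream, size=10):
    # Element-based sliding window: a bounded FIFO buffer holds the
    # last `size` elements, so memory use stays fixed even for an
    # unbounded stream.
    window = deque(maxlen=size)
    for value in stream:
        window.append(value)  # newest element enters, oldest is evicted
        yield sum(window) / len(window)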
Query processing Since there are a lot of prototypes, there is no standardized architecture. However, most DSMS are based on the query processing of a DBMS, using declarative languages to express queries, which are translated into a plan of operators. These plans can be optimized and executed. Query processing often consists of the following steps. Formulation of continuous queries The formulation of queries is mostly done using declarative languages like SQL in DBMS. Since there are no standardized query languages to express continuous queries, there are a lot of languages and variations. However, most of them are based on SQL, such as the Continuous Query Language (CQL), StreamSQL and ESP. There are also graphical approaches where each processing step is a box and the processing flow is expressed by arrows between the boxes. The language strongly depends on the processing model. For example, if windows are used for the processing, the definition of a window has to be expressed. In StreamSQL, a query with a sliding window for the last 10 elements looks as follows: SELECT AVG(price) FROM examplestream [SIZE 10 ADVANCE 1 TUPLES] WHERE value > 100.0 This stream continuously calculates the average value of "price" of the last 10 tuples, but only considers those tuples whose value attribute is greater than 100.0. In the next step, the declarative query is translated into a logical query plan. A query plan is a directed graph where the nodes are operators and the edges describe the processing flow. Each operator in the query plan encapsulates the semantics of a specific operation, such as filtering or aggregation. In DSMSs that process relational data streams, the operators are equal or similar to the operators of the Relational algebra, so that there are operators for selection, projection, join, and set operations. This operator concept enables the very flexible and versatile processing of a DSMS. Optimization of queries The logical query plan can be optimized, which strongly depends on the streaming model. The basic concepts for optimizing continuous queries are the same as those from database systems. If there are relational data streams and the logical query plan is based on relational operators from the Relational algebra, a query optimizer can use the algebraic equivalences to optimize the plan. These may include, for example, pushing selection operators down to the sources, because they are not as computationally intensive as join operators. Furthermore, there are also cost-based optimization techniques as in DBMS, where a query plan with the lowest costs is chosen from different equivalent query plans. One example is to choose the order of two successive join operators. In DBMS this decision is mostly based on statistics of the involved databases. But, since the data of a data stream are unknown in advance, there are no such statistics in a DSMS. However, it is possible to observe a data stream for a certain time to obtain some statistics. Using these statistics, the query can also be optimized later. So, in contrast to a DBMS, some DSMSs allow the query to be optimized even during runtime. Therefore, a DSMS needs plan migration strategies to replace a running query plan with a new one. Transformation of queries Since a logical operator is only responsible for the semantics of an operation but does not consist of any algorithms, the logical query plan must be transformed into an executable counterpart. This is called a physical query plan. The distinction between a logical and a physical operator plan allows more than one implementation for the same logical operator. 
The join, for example, is logically the same, although it can be implemented by different algorithms like a Nested loop join or a Sort-merge join. Note that these algorithms also strongly depend on the stream and processing model used. Finally, the query is available as a physical query plan. Execution of queries Since the physical query plan consists of executable algorithms, it can be directly executed. For this, the physical query plan is installed into the system. The bottom of the graph (of the query plan) is connected to the incoming sources, which can be anything, such as connectors to sensors. The top of the graph is connected to the outgoing sinks, which may be, for example, a visualization. Since most DSMSs are data-driven, a query is executed by pushing the incoming data elements from the source through the query plan to the sink. Each time a data element passes an operator, the operator performs its specific operation on the data element and forwards the result to all successive operators. Examples AURORA, StreamBase Systems, Inc. Hortonworks DataFlow IBM Streams NIAGARA Query Engine NiagaraST: A Research Data Stream Management System at Portland State University Odysseus, an open source Java-based framework for Data Stream Management Systems Pipeline DB PIPES, webMethods Business Events QStream SAS Event Stream Processing STREAM StreamGlobe StreamInsight TelegraphCQ WSO2 Stream Processor See also Complex Event Processing Event stream processing Relational data stream management system References External links Processing Flows of Information: From Data Stream to Complex Event Processing - Survey article on Data Stream and Complex Event Processing Systems - Introduction to streaming data management with SQL Big data Data management Data engineering
Data stream management system
[ "Technology", "Engineering" ]
1,738
[ "Data management", "Software engineering", "Data engineering", "Data", "Big data" ]
26,762,530
https://en.wikipedia.org/wiki/PhotoRC%20RNA%20motifs
PhotoRC RNA motifs refer to conserved RNA structures that are associated with genes acting in the photosynthetic reaction centre of photosynthetic bacteria. Two such RNA classes were identified and called the PhotoRC-I and PhotoRC-II motifs. PhotoRC-I RNAs were detected in the genomes of some cyanobacteria. Although no PhotoRC-II RNA has been detected in cyanobacteria, one is found in the genome of a purified phage that infects cyanobacteria. Both PhotoRC-I and PhotoRC-II RNAs are present in sequences derived from DNA that was extracted from uncultivated marine bacteria. The PhotoRC motif RNAs are located upstream of, and presumably in the 5′ untranslated regions (5′ UTRs), of genes that are sometimes annotated as psbA. The proteins encoded by psbA genes form the reaction center of the photosystem II complex. It was proposed that PhotoRC RNAs are cis-regulatory elements functioning at the RNA level, since bacterial cis-regulatory RNAs typically reside in 5′ UTRs. References External links Cis-regulatory RNA elements
PhotoRC RNA motifs
[ "Chemistry" ]
243
[ "Biochemistry stubs", "Molecular and cellular biology stubs" ]
26,763,104
https://en.wikipedia.org/wiki/Jacobi%20zeta%20function
In mathematics, the Jacobi zeta function Z(u) is the logarithmic derivative of the Jacobi theta function Θ(u). In terms of the amplitude φ and the modulus k it is also commonly written as Z(φ, k) = E(φ, k) − [E(k)/K(k)] F(φ, k), where F and E are the incomplete elliptic integrals of the first and second kind, and K(k) = F(π/2, k) and E(k) = E(π/2, k) are the corresponding complete integrals. Being closely related to the Jacobi theta and elliptic functions, the Jacobi zeta function appears in the same fields of application, and additional relations connect it to Jacobi's common notation for the elliptic functions sn, cn and dn. References https://booksite.elsevier.com/samplechapters/9780123736376/Sample_Chapters/01~Front_Matter.pdf Pg.xxxiv http://mathworld.wolfram.com/JacobiZetaFunction.html Special functions
Jacobi zeta function
[ "Mathematics" ]
168
[ "Mathematical analysis", "Special functions", "Mathematical analysis stubs", "Combinatorics" ]
26,769,724
https://en.wikipedia.org/wiki/866A
The 866 is a mercury vapor half-wave rectifier intended for high-voltage applications. The voltage drop is approximately 15 volts up to 150 Hz. To avoid unwanted shorts the tube must be operated in a vertical position and the filament preheated for at least 30 seconds before applying the plate voltage. Construction Structurally, it consists of a linear electrode arrangement: a cup-shaped anode with a top cap and a cylindrical cathode. The socket is a medium 4-pin bayonet UX-4 and the glass envelope is ST-19. The 2.5 V / 5 A filament is connected to pins 1 and 4. Operation Under normal operating conditions the tube glows blue and mercury droplets are visible. References Further reading External links 866 @ The National Valve Museum 866A at Radiomuseum.org Vacuum tubes Electric power conversion
866A
[ "Physics" ]
183
[ "Vacuum tubes", "Vacuum", "Matter" ]
18,665,459
https://en.wikipedia.org/wiki/Shapiro%27s%20lemma
In mathematics, especially in the areas of abstract algebra dealing with group cohomology or relative homological algebra, Shapiro's lemma, also known as the Eckmann–Shapiro lemma, relates extensions of modules over one ring to extensions over another, especially the group ring of a group and of a subgroup. It thus relates the group cohomology with respect to a group to the cohomology with respect to a subgroup. Shapiro's lemma is named after Arnold S. Shapiro, who proved it in 1961; however, Beno Eckmann had discovered it earlier, in 1953. Statement for rings Let R → S be a ring homomorphism, so that S becomes a left and right R-module. Let M be a left S-module and N a left R-module. By restriction of scalars, M is also a left R-module. If S is projective as a right R-module, then: Ext_S^n(S ⊗_R N, M) ≅ Ext_R^n(N, M). If S is projective as a left R-module, then: Ext_S^n(M, Hom_R(S, N)) ≅ Ext_R^n(M, N). See . The projectivity conditions can be weakened into conditions on the vanishing of certain Tor- or Ext-groups: see . Statement for group rings When H is a subgroup of finite index in G, then the group ring R[G] is finitely generated projective as a left and right R[H]-module, so the previous theorem applies in a simple way. Let M be a finite-dimensional representation of G and N a finite-dimensional representation of H. In this case, the module S ⊗_R N is called the induced representation of N from H to G, and RM is called the restricted representation of M from G to H. One has that: Ext_S^n(S ⊗_R N, M) ≅ Ext_R^n(N, RM). When n = 0, this is called Frobenius reciprocity for completely reducible modules, and Nakayama reciprocity in general. See , which also contains these higher versions of the Mackey decomposition. Statement for group cohomology Specializing M to be the trivial module produces the familiar Shapiro's lemma. Let H be a subgroup of G and N a representation of H. For N^G the induced representation of N from H to G using the tensor product, and for H_* the group homology: H_*(G, N^G) = H_*(H, N) Similarly, for N^G the co-induced representation of N from H to G using the Hom functor, and for H^* the group cohomology: H^*(G, N^G) = H^*(H, N) When H has finite index in G, then the induced and coinduced representations coincide and the lemma is valid for both homology and cohomology. See . See also Change of rings Notes References . , page 59 Homological algebra Representation theory Lemmas in algebra
Shapiro's lemma
[ "Mathematics" ]
556
[ "Mathematical structures", "Theorems in algebra", "Lemmas in algebra", "Fields of abstract algebra", "Category theory", "Representation theory", "Lemmas", "Homological algebra" ]
18,665,993
https://en.wikipedia.org/wiki/Corrosion%20engineering
Corrosion engineering is an engineering specialty that applies scientific, technical, engineering skills, and knowledge of natural laws and physical resources to design and implement materials, structures, devices, systems, and procedures to manage corrosion. From a holistic perspective, corrosion is the phenomenon of metals returning to the state in which they are found in nature. The driving force that causes metals to corrode is a consequence of their temporary existence in metallic form. To produce metals starting from naturally occurring minerals and ores, it is necessary to provide a certain amount of energy, e.g. iron ore in a blast furnace. It is therefore thermodynamically inevitable that these metals, when exposed to various environments, will revert to the state in which they are found in nature. Corrosion and corrosion engineering thus involve a study of chemical kinetics, thermodynamics, electrochemistry and materials science. General background Generally related to metallurgy or materials science, corrosion engineering also relates to non-metallics including ceramics, cement, composite materials, and conductive materials such as carbon and graphite. Corrosion engineers often manage other not-strictly-corrosion processes including (but not restricted to) cracking, brittle fracture, crazing, fretting and erosion, and more, typically categorized under infrastructure asset management. In the 1990s, Imperial College London even offered a Master of Science degree entitled "The Corrosion of Engineering Materials". UMIST – the University of Manchester Institute of Science and Technology, now part of the University of Manchester – also offered a similar course. Corrosion engineering master's degree courses are available worldwide and the curricula contain study material about the control and understanding of corrosion. Ohio State University has a corrosion center named after one of the better-known corrosion engineers, Mars G. Fontana. Corrosion costs In the year 1995, it was reported that the costs of corrosion nationwide in the USA were nearly $300 billion per year. This confirmed earlier reports of damage to the world economy caused by corrosion. Zaki Ahmad, in his book Principles of corrosion engineering and corrosion control, states that "Corrosion engineering is the application of the principles evolved from corrosion science to minimize or prevent corrosion". Shreir et al. suggest likewise in their large, two-volume work entitled Corrosion. Corrosion engineering involves the design of corrosion prevention schemes and the implementation of specific codes and practices. Corrosion prevention measures, including cathodic protection, designing to prevent corrosion, and the coating of structures, fall within the regime of corrosion engineering. However, corrosion science and engineering go hand-in-hand and they cannot be separated: it is a permanent marriage that produces new and better methods of protection from time to time. This may include the use of corrosion inhibitors. In the Handbook of corrosion engineering, the author Pierre R. Roberge states "Corrosion is the destructive attack of a material by reaction with its environment. The serious consequences of the corrosion process have become a problem of worldwide significance." Costs are not only monetary. There is a financial cost and also a waste of natural resources. In 1988 it was estimated that one tonne of metal was converted completely to rust every ninety seconds in the United Kingdom. There is also the cost of human lives. 
Failure, whether catastrophic or otherwise, due to corrosion has cost human lives. Corrosion engineering and corrosion societies and associations Corrosion engineering groups have formed around the world to educate about, prevent, slow, and manage corrosion. These include the National Association of Corrosion Engineers (NACE), the European Federation of Corrosion (EFC), The Institute of Corrosion in the UK and the Australasian Corrosion Association. The corrosion engineer's main task is to economically and safely manage the effects of corrosion of materials. Notable contributors to the field Some of the most notable contributors to the corrosion engineering discipline include, among others: Michael Faraday (1791–1867) Marcel Pourbaix (1904–1998) Herbert H. Uhlig (1907–1993) Ulick Richardson Evans (1889–1980) Mars Guy Fontana (1910–1988) Melvin Romanoff ( -1970) Types of corrosion situations Corrosion engineers and consultants tend to specialize in internal or external corrosion scenarios. In both, they may provide corrosion control recommendations, failure analysis investigations, sell corrosion control products, or provide installation or design of corrosion control and monitoring systems. Every material has its weakness. Aluminum, galvanized/zinc coatings, brass, and copper do not survive well in very alkaline or very acidic pH environments. Copper and brasses do not survive well in high nitrate or ammonia environments. Carbon steels and iron do not survive well in low soil resistivity and high chloride environments. High chloride environments can even overcome and attack steel encased in normally protective concrete. Concrete does not survive well in high sulfate and acidic environments. And nothing survives well in high sulfide and low redox potential environments with corrosive bacteria. This is called biogenic sulfide corrosion. External corrosion Underground soil side corrosion Underground corrosion control engineers collect soil samples to test soil chemistry for corrosive factors such as pH, minimum soil resistivity, chlorides, sulfates, ammonia, nitrates, sulfide, and redox potential. They collect samples from the depth that infrastructure will occupy, because soil properties may change from strata to strata. A test of in-situ soil resistivity, measured using the Wenner four-pin method, is often performed to judge a site's corrosivity. However, during a dry period, the test may not show actual corrosivity, since underground condensation can leave soil in contact with buried metal surfaces more moist. This is why measuring a soil's minimum or saturated resistivity is important. Soil resistivity testing alone does not identify corrosive elements. Corrosion engineers can investigate locations experiencing active corrosion using above-ground survey methods and design corrosion control systems such as cathodic protection to stop or reduce the rate of corrosion. Geotechnical engineers typically do not practice corrosion engineering, and refer clients to a corrosion engineer if soil resistivity is 3,000 ohm-cm or less, depending on the soil corrosivity categorization table they use. Unfortunately, an old dairy farm can have soil resistivities above 3,000 ohm-cm and still contain corrosive ammonia and nitrate levels that corrode copper piping or grounding rods. A general saying about corrosion is, "If the soil is great for farming, it is great for corrosion." 
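As a rough illustration of how such resistivity readings are commonly turned into a corrosivity screen, the following Python sketch uses illustrative band limits only; published categorization tables differ, which is exactly why referral thresholds such as 3,000 ohm-cm vary from table to table:

def soil_corrosivity_screen(min_resistivity_ohm_cm):
    # Classify soil corrosivity from minimum (saturated) resistivity.
    # The band limits below are illustrative assumptions, not a standard.
    if min_resistivity_ohm_cm < 500:
        return "very corrosive"
    if min_resistivity_ohm_cm < 1000:
        return "corrosive"
    if min_resistivity_ohm_cm < 3000:
        return "moderately corrosive"
    if min_resistivity_ohm_cm < 10000:
        return "mildly corrosive"
    return "progressively less corrosive"

# Resistivity alone is not a complete picture: chlorides, sulfates, ammonia,
# nitrates, sulfides, pH and redox potential can make a "mild" soil aggressive.
print(soil_corrosivity_screen(2500))  # moderately corrosive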
Underwater external corrosion Underwater corrosion engineers apply the same principles used in underground corrosion control but use specially trained and certified scuba divers for condition assessment, and corrosion control system installation and commissioning. The main difference is in the type of reference cells used to collect voltage readings. Corrosion of piles and the legs of oil and gas rigs is of particular concern. This includes rigs in the North Sea off the coast of the United Kingdom and the Gulf of Mexico. Atmospheric corrosion Atmospheric corrosion generally refers to general corrosion in a non-specific environment. Prevention of atmospheric corrosion is typically handled by use of materials selection and coatings specifications. The use of zinc coatings, also known as galvanization, on steel structures is both a form of coating and a form of cathodic protection in which the zinc acts as a sacrificial anode. Small scratches are expected to occur in the galvanized coating over time. The zinc, being more active in the galvanic series, corrodes in preference to the underlying steel, and the corrosion products fill the scratch, preventing further corrosion. As long as the scratches are fine, condensation moisture should not corrode the underlying steel as long as both the zinc and steel are in contact. As long as there is moisture, the zinc corrodes and eventually disappears. Impressed current cathodic protection is also used. Splash zone and water spray corrosion The usual definition of a splash zone is the area just above and just below the average water level of a body of water. It also includes areas that may be subject to water spray and mist. A significant amount of corrosion of fences is due to landscaper tools scratching fence coatings and irrigation sprinklers spraying these damaged fences. Recycled water typically has a higher salt content than potable drinking water, meaning that it is more corrosive than regular tap water. The same risk from damage and water spray exists for above-ground piping and backflow preventers. Fiberglass covers, cages, and concrete footings have worked well to keep tools at an arm's length. Even the location where a roof drain splashes down can matter. Drainage from a home's roof valley can fall directly down onto a gas meter, causing its piping to corrode at an accelerated rate, reaching 50% wall thickness within 4 years. It is the same effect as a splash zone in the ocean, or in a pool with a lot of oxygen and agitation that removes material as it corrodes. Tanks or structural tubing such as bench seat supports or amusement park rides can accumulate water and moisture if the structure does not allow for drainage. This humid environment can then lead to internal corrosion of the structure, affecting the structural integrity. The same can happen in tropical environments, leading to external corrosion. This would include corrosion in ballast tanks on ships. Pipeline corrosion Hazardous materials are often carried in pipelines and thus their structural integrity is of paramount importance. Corrosion of a pipeline can thus have grave consequences. One of the methods used to control pipeline corrosion is the use of fusion-bonded epoxy coatings. DCVG is used to monitor it. Impressed current cathodic protection is also used. Corrosion in the petrochemical industry The petrochemical industry typically encounters aggressive corrosive media. These include sulfides and high temperatures. 
Corrosion control and solutions are thus necessary for the world economy. Scale formation in injection water presents its own problems with regard to corrosion and thus for the corrosion engineer. Corrosion in ballast tanks Ballast tanks on ships contain the ingredients for corrosion. Water is one, air is usually present too, and the water can become stagnant. Structural integrity is important for safety and to avoid marine pollution. Coatings have become the solution of choice to reduce the amount of corrosion in ballast tanks. Impressed current cathodic protection has also been used. Likewise, sacrificial anode cathodic protection is also used. Since chlorides vastly accelerate corrosion, ballast tanks of marine vessels are particularly susceptible. Corrosion in the railway industry It has been stated that one of the biggest challenges in the United Kingdom railway industry is corrosion. The biggest problem is that corrosion can affect the structural integrity of passenger-carrying railway carriages, thus affecting their crashworthiness. Other railway structures and assets can also be affected. The Permanent Way Institution gives lectures on the subject periodically. In January 2018, corrosion of a metal structure caused the emergency closure of Liverpool Lime Street railway station. Galvanic corrosion Galvanic corrosion (also called bimetallic corrosion) is an electrochemical process in which one metal (the more active one) corrodes preferentially when it is in electrical contact with another dissimilar metal, in the presence of an electrolyte. A similar galvanic reaction is exploited in primary cells to generate a useful electrical voltage to power portable devices – a classic example being a cell with zinc and copper electrodes. Galvanic corrosion is also exploited when a sacrificial metal is used in cathodic protection. Galvanic corrosion happens when an active metal and a more noble metal are in contact in the presence of an electrolyte. Pitting corrosion Pitting corrosion, or pitting, is extremely localized corrosion that leads to the creation of small holes in the material – nearly always a metal. The failures resulting from this form of corrosion can be catastrophic. With general corrosion it is easier to predict the amount of material that will be lost over time, and this can be designed into the engineered structure. Pitting, like crevice corrosion, can cause a catastrophic failure with very little loss of material. Pitting corrosion occurs in passive materials. The classic reaction mechanism has been ascribed to Ulick Richardson Evans. Crevice corrosion Crevice corrosion is a type of localized corrosion with a very similar mechanism to pitting corrosion. Stress corrosion cracking Stress corrosion cracking (SCC) is the growth of a crack in a corrosive environment. It requires three conditions to take place: 1) a corrosive environment, 2) stress, and 3) a susceptible material. SCC can lead to unexpected, sudden and hence catastrophic failure of normally ductile metals under tensile stress. This is usually exacerbated at elevated temperature. SCC is highly chemically specific in that certain alloys are likely to undergo SCC only when exposed to a small number of chemical environments. It is common for SCC to go undetected prior to failure. SCC usually progresses quite rapidly after initial crack initiation, and is seen more often in alloys as opposed to pure metals. The corrosion engineer thus must be aware of this phenomenon. 
Filiform Corrosion Filiform corrosion may be considered as a type of crevice corrosion and is sometimes seen on metals coated with an organic coating (paint). Filiform corrosion is unusual in that it does not weaken or destroy the integrity of the metal but only affects the surface appearance. Corrosion fatigue This form of corrosion is usually caused by a combination of corrosion and cyclic stress. Measuring and controlling this is difficult because of the many factors at play, including the nature or form of the stress cycle. The stress cycles cause localized work hardening, so avoiding stress concentrators such as holes, etc., is good corrosion engineering design. Selective leaching This form of corrosion occurs principally in metal alloys. The less noble metal of the alloy is selectively leached from the alloy. Removal of zinc from brass is a common example. Microbial corrosion Biocorrosion, biofouling and corrosion caused by living organisms are now known to have an electrochemical foundation. Other marine creatures such as mussels, worms and even sponges have been known to degrade engineering materials. Hydrogen damage Hydrogen damage is caused by hydrogen atoms (as opposed to hydrogen molecules in the gaseous state) interacting with metal. Erosion corrosion Erosion corrosion is a form of corrosion damage, usually on a metal surface, caused by turbulence between a liquid (or a liquid containing solids) and the metal surface. Aluminum can be particularly susceptible due to the fact that the aluminum oxide layer which affords corrosion protection to the underlying metal is eroded away. Hydrogen embrittlement This phenomenon describes damage to the metal (nearly always iron or steel) at low temperature by diffusible hydrogen. Hydrogen can embrittle a number of metals and steel is one of them. It tends to happen to harder and higher tensile steels. Hydrogen can also embrittle aluminum at high temperatures. Titanium metal and alloys are also susceptible. High temperature corrosion High-temperature corrosion typically occurs in environments that combine heat with chemicals such as hydrocarbon fuel sources, although other chemicals also enable this form of corrosion. Thus it can occur in boilers, automotive engines driven by diesel or gasoline, metal production furnaces and flare stacks from oil and gas production. High-temperature oxidation of metals would also be included. Internal corrosion Internal corrosion is occasioned by the combined effects and severity of four modes of material deterioration, namely: general corrosion, pitting corrosion, microbial corrosion, and fluid corrosivity. The same principles of external corrosion control can be applied to internal corrosion, but due to accessibility, the approaches can be different. Thus special instruments for internal corrosion control and inspection are used that are not used in external corrosion control. Video scoping of pipes and high-tech smart pigs are used for internal inspections. The smart pigs can be inserted into a pipe system at one point and "caught" far down the line. Corrosion inhibitors, material selection studies, and internal coatings are mainly used to control corrosion in piping, while anodes along with coatings are used to control corrosion in tanks. In-depth corrosion calculations are performed during material selection studies, and many different corrosion models and calculation methods (software packages) are prevalent in industry, e.g. ECE, Predict, De Waard, and Norsok M-506. 
Internal corrosion challenges apply to the following, amongst others: water pipes; gas pipes; oil pipes; and water tank reservoirs. Good design to prevent corrosion situations Corrosion engineering involves good design. Using a rounded edge rather than an acute edge reduces corrosion. Not coupling two dissimilar metals by welding or other joining methods, in order to avoid galvanic corrosion, is also best practice. Avoiding having a small anode (or anodic material) next to a large cathode (or cathodic material) is good practice. As an example, weld material should always be more noble than the surrounding material. Corrosion in ballast tanks on marine vessels can be an issue if good design is not undertaken. Other examples include simple design choices such as material thickness. In a known corrosion situation the material can just be made thicker so it will take much longer to corrode. Material selection to prevent corrosion situations Correct selection of the material by the design engineer affects the design life of a structure or pipeline, which is very relevant in the oil and gas industry. Sometimes stainless steel is not the correct choice and carbon steel would be better. There is a misconception that stainless steel has excellent corrosion resistance and will not corrode. This is not always the case; stainless steel should not be used to handle deoxygenated solutions, for example, as it relies on oxygen to maintain passivation, and it is also susceptible to crevice corrosion. Galvanizing or hot-dip galvanizing is used to coat steel with a layer of metallic zinc. Lead or antimony are often added to the molten zinc bath, and other metals have also been studied. Controlling the environment to prevent corrosion situations One example of controlling the environment to prevent or reduce corrosion is the practice of storing aircraft in deserts. These storage places are usually called aircraft boneyards. The climate is usually arid, so this and other factors make it an ideal storage environment. Use of corrosion inhibitors to prevent corrosion An inhibitor is usually a material added in a small quantity to a particular environment that reduces the rate of corrosion. They may be classified in a number of ways but are usually: 1) oxidizing; 2) scavenging; 3) vapor-phase inhibitors (sometimes called volatile corrosion inhibitors); 4) adsorption inhibitors; 5) hydrogen-evolution retarders. Another way to classify them is chemically. As there is more concern for the environment and people are more keen to use renewable resources, there is ongoing research to modify these materials so they may be used as corrosion inhibitors. Use of coatings to prevent corrosion A coating or paint is usually a fluid-applied covering applied to a surface in contact with a corrosive situation such as the atmosphere. The surface is usually called the substrate. In corrosion prevention applications the purpose of applying the coating is mainly functional rather than decorative. Paints and lacquers are coatings that have the dual uses of protecting the substrate and being decorative; paint on large industrial pipes, as well as preventing corrosion, is also used for identification, e.g. red for fire-fighting control. Functional coatings may be applied to change the surface properties of the substrate, such as adhesion, wettability, corrosion resistance, or wear resistance. In the automotive industry, coatings are used to control corrosion but also for aesthetic reasons. 
Coatings are also used extensively to control corrosion in marine environments. Corrosion will eventually break through a coating, so coatings have a design life before maintenance is required. See also Anodic protection Coating Corrosion Corrosion societies Corrosion inhibitor Corrosion in ballast tanks DCVG (direct current voltage gradient) Electrochemistry Environmental stress cracking Fracture Mechanics Integrity engineering Metallurgical failure analysis National Institute of Standards and Technology Stainless steel Stress corrosion cracking Structural failure Sulfide stress cracking References Further reading Brett CMA, Brett AMO, Electrochemistry: Principles, methods, and applications, Oxford University Press, (1993) Papers presented at the Fourth International Symposium on 'Corrosion of Reinforcement in Concrete Construction', held at Robinson College, Cambridge, UK, 1–4 July 1996. Corrosion - 2nd Edition (elsevier.com) Volumes 1 and 2; Editor: L L Shreir A.W. Peabody, Peabody's Control of Pipeline Corrosion, 2nd Ed., 2001, NACE International. Ashworth V., Corrosion Vol. 2, 3rd Ed., 1994, Baeckmann, Schwenck & Prinz, Handbook of Cathodic Corrosion Protection, 3rd Edition 1997. Roberge, Pierre R, Handbook of Corrosion Engineering 1999 Gummow, RA, Corrosion Control of Municipal Infrastructure Using Cathodic Protection. NACE Conference Oct 1999, NACE Materials Performance Feb 2000 Engineering disciplines Corrosion Metallurgy Corrosion prevention
Corrosion engineering
[ "Chemistry", "Materials_science", "Engineering" ]
4,225
[ "Corrosion prevention", "Metallurgy", "Materials science", "Corrosion", "Electrochemistry", "nan", "Materials degradation" ]
20,913,655
https://en.wikipedia.org/wiki/Flood%20management
Flood management describes methods used to reduce or prevent the detrimental effects of flood waters. Flooding can be caused by a mix of both natural processes, such as extreme weather upstream, and human changes to waterbodies and runoff. Flood management methods can be either of the structural type (i.e. flood control) or of the non-structural type. Structural methods hold back floodwaters physically, while non-structural methods do not. Building hard infrastructure to prevent flooding, such as flood walls, is effective at managing flooding. However, it is best practice within landscape engineering to rely more on soft infrastructure and natural systems, such as marshes and flood plains, for handling the increase in water. Flood management can include flood risk management, which focuses on measures to reduce risk, vulnerability and exposure to flood disasters and providing risk analysis through, for example, flood risk assessment. Flood mitigation is a related but separate concept describing a broader set of strategies taken to reduce flood risk and potential impact while improving resilience against flood events. As climate change has led to increased flood risk and intensity, flood management is an important part of climate change adaptation and climate resilience. For example, to prevent or manage coastal flooding, coastal management practices have to handle natural processes like tides but also sea level rise due to climate change. The prevention and mitigation of flooding can be studied on three levels: on individual properties, small communities, and whole towns or cities. Terminology Flood management is a broad term that includes measures to control or mitigate flood waters, such as actions to prevent floods from occurring or to minimize their impacts when they do occur. Flood management methods can be structural or non-structural: Structural flood management (i.e. flood control) is the reduction of the effects of a flood using physical solutions, such as reservoirs, levees, dredging and diversions. Non-structural flood management includes land-use planning, advanced warning systems and flood insurance. Further examples are: "zoning ordinances and codes, flood forecasting, flood proofing, evacuation and channel clearing, flood fight activities, and upstream land treatment or management to control flood damages without physically restraining flood waters". There are several related terms that are closely connected or encompassed by flood management. Flood management can include flood risk management, which focuses on measures to reduce risk, vulnerability and exposure to flood disasters and providing risk analysis through, for example, flood risk assessment. In the context of natural hazards and disasters, risk management involves "plans, actions, strategies or policies to reduce the likelihood and/or magnitude of adverse potential consequences, based on assessed or perceived risks". Flood control, flood protection, flood defence and flood alleviation are all terms that mean "the detention and/or diversion of water during flood events for the purpose of reducing discharge or downstream inundation". Flood control is part of environmental engineering. It involves the management of water movement, such as redirecting flood run-off through the use of floodwalls and flood gates to prevent floodwaters from reaching a particular area. 
Flood mitigation is a related but separate concept describing a broader set of strategies taken to reduce flood risk and potential impact while improving resilience against flood events. These methods include prevention, prediction (which enables flood warnings and evacuation), proofing (e.g. zoning regulations), physical control (nature-based solutions and physical structures like dams and flood walls) and insurance (e.g. flood insurance policies). Flood relief methods are used to reduce the effects of flood waters or high water levels during a flooding event. They include evacuation plans and rescue operations. Flood relief is part of the response and recovery phase in a flood management plan. Causes of flooding Precipitation, absorption, and runoff Flood levels: blunting the peak Water levels during a flood tend to rise, then fall, very abruptly. The peak flood level occurs as a very steep, short spike; a quick spurt of water. Anything that slows the surface runoff (marshes, meanders, vegetation, porous materials, turbulent flow, the river spreading over a floodplain) will slow some of the flow more than other parts, spreading the flow over time and blunting the spike. Even slightly blunting the spike significantly decreases the peak flood level. Generally, the higher the peak flood level, the more flood damage is done. Modern flood control seeks to "slow the flow", and deliberately flood some low-lying areas, ideally vegetated, to act as sponges, letting them drain again as the floodwaters go down. Purposes It is where floods interact with housing, industry and farming that flood management is indicated, and in such cases environmentally helpful solutions may be available. Natural flooding has many beneficial environmental effects. This kind of flooding is usually a seasonal occurrence where floods help replenish soil fertility, restore wetlands and promote biodiversity. Reducing the impacts of floods Flooding has many impacts. It damages property and endangers the lives of humans and other species. Rapid water runoff causes soil erosion and concomitant sediment deposition elsewhere (such as further downstream or down a coast). The spawning grounds for fish and other wildlife habitats can become polluted or completely destroyed. Some prolonged high floods can delay traffic in areas which lack elevated roadways. Floods can interfere with drainage and the economic use of land, such as interfering with farming. Structural damage can occur in bridge abutments, bank lines, sewer lines, and other structures within floodways. Waterway navigation and hydroelectric power are often impaired. Financial losses due to floods are typically millions of dollars each year, with the worst floods in recent U.S. history having cost billions of dollars. Protection of individual properties Property owners may fit their homes to stop water entering by blocking doors and air vents, waterproofing important areas and sandbagging the edges of the building. Private precautionary measures are increasingly important in flood risk management. Flood mitigation at the property level may also involve preventative measures focused on the building site, including scour protection for shoreline developments, improving rainwater infiltration through the use of permeable paving materials and grading away from structures, and inclusion of berms, wetlands or swales in the landscape. 
Protection of communities When more homes, shops and infrastructure are threatened by the effects of flooding, then the benefits of protection are worth the additional cost. Temporary flood defenses can be constructed in certain locations which are prone to floods and provide protection from rising flood waters. Rivers running through large urban developments are often controlled and channeled. Water rising above a canal's full capacity may cause flooding to spread to other waterways and areas of the community, which causes damage. Defenses (both long-term and short-term) can be constructed to minimize damage, which involves raising the edge of the water with levees, embankments or walls. The high population and value of infrastructure at risk often justifies the high cost of mitigation in larger urban areas. Protection of wider areas such as towns or cities The most effective way of reducing the risk to people and property is through the production of flood risk maps. Most countries have produced maps which show areas prone to flooding based on flood data. In the UK, the Environment Agency has produced maps which show areas at risk. The map to the right shows a flood map for the City of York, including the floodplain for a 1 in 100-year flood (dark blue), the predicted floodplain for a 1 in 1000 year flood (light blue) and low-lying areas in need of flood defence (purple). The most sustainable way of reducing risk is to prevent further development in flood-prone areas and old waterways. It is important for at-risk communities to develop a comprehensive Floodplain Management plan. In the US, communities that participate in the National Flood Insurance Program must agree to regulate development in flood-prone areas. Strategic retreat One way of reducing the damage caused by flooding is to remove buildings from flood-prone areas, leaving them as parks or returning them to wilderness. Floodplain buyout programs have been operated in places like New Jersey (both before and after Hurricane Sandy), Charlotte, North Carolina, and Missouri. In the United States, FEMA produces flood insurance rate maps that identify areas of future risk, enabling local governments to apply zoning regulations to prevent or minimize property damage. Resilience Buildings and other urban infrastructure can be designed so that even if a flood does happen, the city can recover quickly and costs are minimized. For example, homes can be put on stilts, electrical and HVAC equipment can be put on the roof instead of in the basement, and subway entrances and tunnels can have built-in movable water barriers. New York City began a substantial effort to plan and build for flood resilience after Hurricane Sandy. Flood resilience technologies support the fast recovery of individuals and communities affected, but their use remains limited. Climate change adaptation Structural methods Some methods of flood control have been practiced since ancient times. These methods include planting vegetation to retain extra water, terracing hillsides to slow flow downhill, and the construction of floodways (man-made channels to divert floodwater). Other techniques include the construction of levees, lakes, dams, reservoirs, retention ponds to hold extra water during times of flooding. Dams Many dams and their associated reservoirs are designed completely or partially to aid in flood protection and control. 
Many large dams have flood-control reservations in which the level of a reservoir must be kept below a certain elevation before the onset of the rainy/summer melt season to allow a certain amount of space in which floodwaters can fill. Other beneficial uses of dam created reservoirs include hydroelectric power generation, water conservation, and recreation. Reservoir and dam construction and design is based upon standards, typically set out by the government. In the United States, dam and reservoir design is regulated by the US Army Corps of Engineers (USACE). Design of a dam and reservoir follows guidelines set by the USACE and covers topics such as design flow rates in consideration to meteorological, topographic, streamflow, and soil data for the watershed above the structure. The term dry dam refers to a dam that serves purely for flood control without any conservation storage (e.g. Mount Morris Dam, Seven Oaks Dam). Diversion canals Floodplains and groundwater replenishment Excess water can be used for groundwater replenishment by diversion onto land that can absorb the water. This technique can reduce the impact of later droughts by using the ground as a natural reservoir. It is being used in California, where orchards and vineyards can be flooded without damaging crops, or in other places wilderness areas have been re-engineered to act as floodplains. River defenses In many countries, rivers are prone to floods and are often carefully managed. Defenses such as levees, bunds, reservoirs, and weirs are used to prevent rivers from bursting their banks. A weir, also known as a lowhead dam, is most often used to create millponds, but on the Humber River in Toronto, a weir was built near Raymore Drive to prevent a recurrence of the flood damage caused by Hurricane Hazel in October 1954. The Leeds flood alleviation scheme uses movable weirs which are lowered during periods of high water to reduce the chances of flooding upstream. Two such weirs, the first in the UK, were installed on the River Aire in October 2017 at Crown Point, Leeds city centre and Knostrop. The Knostrop weir was operated during the 2019 England floods. They are designed to reduce potential flood levels by up to one metre. Coastal defenses Coastal flooding is addressed with coastal defenses, such as sea walls, beach nourishment, and barrier islands. Tide gates are used in conjunction with dykes and culverts. They can be placed at the mouth of streams or small rivers, where an estuary begins or where tributary streams, or drainage ditches connect to sloughs. Tide gates close during incoming tides to prevent tidal waters from moving upland, and open during outgoing tides to allow waters to drain out via the culvert and into the estuary side of the dike. The opening and closing of the gates is driven by a difference in water level on either side of the gate. Flood barrier Self-closing flood barrier The self-closing flood barrier (SCFB) is a flood defense system designed to protect people and property from inland waterway floods caused by heavy rainfall, gales, or rapid melting snow. The SCFB can be built to protect residential properties and whole communities, as well as industrial or other strategic areas. The barrier system is constantly ready to deploy in a flood situation, it can be installed in any length and uses the rising flood water to deploy. Temporary perimeter barriers When permanent defenses fail, emergency measures such as sandbags, inflatable impermeable sacks, or other temporary barriers are used. 
In 1988, a method of using water to control flooding was discovered. This was accomplished by containing two parallel tubes within a third outer tube. When filled, this structure formed a non-rolling wall of water that can control 80 percent of its height in external water depth, with dry ground behind it. Eight-foot-tall, water-filled barriers were used to surround Fort Calhoun Nuclear Generating Station during the 2011 Missouri River Flooding. Instead of trucking in sandbag material for a flood, stacking it, then trucking it out to a hazmat disposal site, flood control can be accomplished by using the on-site water. However, these are not foolproof. A water-filled rubber flood berm surrounding portions of the plant was punctured by a skid-steer loader and collapsed, flooding a portion of the facility. AquaFence consists of interlocking panels which are waterproof and puncture-resistant, can be bolted down to resist winds, and use the weight of floodwater to hold them in place. Materials include marine-grade baltic laminate, stainless steel, aluminum and reinforced PVC canvas. The panels are reusable and can be stored flat between uses. The technology was designed as an alternative to building seawalls or placing sandbags in the path of floodwaters. Other solutions, such as HydroSack, are polypropylene exteriors with wood pulp within, though they are one-time use. Non-structural methods Flood risk assessment There are several methods of non-structural flood management that form part of flood risk management strategies. These can involve policies that reduce the number of urban structures built around floodplains or flood-prone areas through land zoning regulations. This helps to reduce the amount of mitigation needed to protect humans and buildings from flooding events. Similarly, flood warning systems are important for reducing risks. Following the occurrence of flooding events, other measures such as rebuilding plans and insurance can be integrated into flood risk management plans. Flood risk management strategy diversification is needed to ensure that management strategies cover several different scenarios and ensure best practices. Flood risk management aims to reduce the human and socio-economic losses caused by flooding and is part of the larger field of risk management. Flood risk management analyzes the relationships between physical systems and socio-economic environments through flood risk assessment and tries to create understanding and action about the risks posed by flooding. The relationships cover a wide range of topics, from drivers and natural processes, to models and socio-economic consequences. These relationships inform a wide range of flood management methods, including but not limited to flood mapping and physical mitigation measures. Flood risk management looks at how to reduce flood risk and how to appropriately manage risks that are associated with flooding. Flood risk management includes mitigating and preparing for flooding disasters, analyzing risk, and providing a risk analysis system to mitigate the negative impacts caused by flooding. Flooding and flood risk are especially important with more extreme weather and sea level rise caused by climate change, as more areas will be affected by flood risk. Flood mapping Flood mapping is a tool used by governments and policy makers to delineate the borders of potential flooding events, allowing educated decisions to prevent extreme flooding events.
Flood maps are useful to create documentation that allows policy makers to make informed decisions about flood hazards. Flood mapping also provides conceptual models to both the public and private sectors with information about flooding hazards. Flood mapping has been criticized in many areas around the world, due to the absence of public accessibility, technical writing and data, and lack of easy-to-understand information. However, revived attention towards flood mapping has renewed the interest in enhancing current flood mapping for use as a flood risk management method. Flood modelling Flood modelling is a tool used to model flood hazard and the effects on humans and the physical environment. Flood modelling takes into consideration how flood hazards, external and internal processes and factors, and the main drivers of floods interact with each other. Flood modelling combines factors such as terrain, hydrology, and urban topography to reproduce the evolution of a flood in order to identify the different levels of flooding risks associated with each element exposed. The modelling can be carried out using hydraulic models, conceptual models, or geomorphic methods. Nowadays, there is a growing attention also in the production of maps obtained with remote sensing. Flood modelling is helpful for determining building development practices and hazard mitigation methods that reduce the risks associated with flooding. Stakeholder engagement Stakeholder engagement is a useful tool for flood risk management that allows enhanced public engagement for agreements to be reached on policy discussions. Different management considerations can be taken into account including emergency management and disaster risk reduction goals, interactions of land-use planning with the integration of flood risks and required policies. In flood management, stakeholder engagement is seen as an important way to achieve greater cohesion and consensus. Integrating stakeholder engagement into flood management often provides a more complex analysis of the situation; this generally adds more demand in determining collective solutions and increases the time it takes to determine solutions. Costs The costs of flood protection rise as more people and property are to be protected. The US FEMA, for example, estimates that for every $1.00 spent on mitigation, $4.00 is saved. Examples by country North America Canada An elaborate system of flood way defenses can be found in the Canadian province of Manitoba. The Red River flows northward from the United States, passing through the city of Winnipeg (where it meets the Assiniboine River) and into Lake Winnipeg. As is the case with all north-flowing rivers in the temperate zone of the Northern Hemisphere, snow melt in southern sections may cause river levels to rise before northern sections have had a chance to completely thaw. This can lead to devastating flooding, as occurred in Winnipeg during the spring of 1950. To protect the city from future floods, the Manitoba government undertook the construction of a massive system of diversions, dikes, and flood ways (including the Red River Floodway and the Portage Diversion). The system kept Winnipeg safe during the 1997 flood which devastated many communities upriver from Winnipeg, including Grand Forks, North Dakota and Ste. Agathe, Manitoba. United States In the United States, the U.S. Army Corps of Engineers is the lead flood control agency. 
After Hurricane Sandy, New York City's Metropolitan Transportation Authority (MTA) initiated multiple flood barrier projects to protect the transit assets in Manhattan. In one case, the MTA's New York City Transit Authority (NYCT) sealed subway entrances in lower Manhattan using a deployable fabric cover system called Flex-Gate, a system that protects the subway entrances against rising floodwater. Extreme storm flood protection levels have been revised based on new Federal Emergency Management Agency guidelines for 100-year and 500-year design flood elevations. The New Orleans Metropolitan Area, 35 percent of which sits below sea level, is protected by hundreds of miles of levees and flood gates. This system failed catastrophically, with numerous breaks, during Hurricane Katrina (2005) in the city proper and in eastern sections of the Metro Area, resulting in the inundation of approximately 50 percent of the metropolitan area, ranging from a few inches to twenty feet in coastal communities. The Morganza Spillway provides a method of diverting water from the Mississippi River when a river flood threatens New Orleans, Baton Rouge and other major cities on the lower Mississippi. It is the largest of a system of spillways and floodways along the Mississippi. Completed in 1954, the spillway has been opened twice, in 1973 and in 2011. In an act of successful flood prevention, the federal government offered to buy out flood-prone properties in the United States in order to prevent repeated disasters after the 1993 flood across the Midwest. Several communities accepted and the government, in partnership with the state, bought 25,000 properties which they converted into wetlands. These wetlands act as a sponge in storms and in 1995, when the floods returned, the government did not have to expend resources in those areas. Asia In Kyoto, Japan, the Hata clan successfully controlled floods on the Katsura River in around 500 A.D. and also constructed a sluice on the Kazuno River. In China, flood diversion areas are rural areas that are deliberately flooded in emergencies in order to protect cities. The consequences of deforestation and changing land use on the risk and severity of flooding are subjects of discussion. In assessing the impacts of Himalayan deforestation on the Ganges-Brahmaputra Lowlands, it was found that forests would not have prevented or significantly reduced flooding in the case of an extreme weather event. However, more general or overview studies agree on the negative impacts that deforestation has on flood safety - and the positive effects of wise land use and reforestation. Many have proposed that loss of vegetation (deforestation) will lead to an increased risk of flooding. With natural forest cover the flood duration should decrease. Reducing the rate of deforestation should reduce the incidence and severity of floods. Africa In Egypt, both the Aswan Low Dam (1902) and the Aswan High Dam (1976) have controlled various amounts of flooding along the Nile River. Europe France Following the misery and destruction caused by the 1910 Great Flood of Paris, the French government built a series of reservoirs known as the Great Lakes, which help remove pressure from the Seine during floods, especially the regular winter flooding. United Kingdom London is protected from flooding by the Thames Barrier, a huge mechanical barrier across the River Thames, which is raised when the water level reaches a certain point.
This project has been operational since 1982 and was designed to protect against a surge of water such as the North Sea flood of 1953. In 2023 it was found that over 4,000 flood defence schemes in England were 'almost useless', with many of them in areas hit by Storm Babet. Russia The Saint Petersburg Dam was completed in 2008 to protect Saint Petersburg from storm surges. It also has a main traffic function, as it completes a ring road around Saint Petersburg. Eleven dams stand above water level. The Netherlands The Netherlands has one of the best flood control systems in the world, notably through its construction of dykes. The country faces high flooding risk due to the country's low-lying landscapes. The largest and most elaborate flood defenses are referred to as the Delta Works, with the Oosterscheldekering as their crowning achievement. These works in the southwestern part of the country were built in response to the North Sea flood of 1953. The Dutch had already built one of the world's largest dams in the north of the country: the Afsluitdijk, closed in 1932. New ways to deal with water are constantly being developed and tested, such as the underground storage of water, storing water in reservoirs in large parking garages or on playgrounds. Rotterdam started a project to construct a floating housing development to deal with rising sea levels. Several approaches, from high-tech sensors detecting imminent levee failure to movable semi-circular structures closing an entire river, are being developed or used around the world. Regular maintenance of hydraulic structures, however, is another crucial part of flood control. Oceania Flooding is the greatest natural hazard in New Zealand (Aotearoa), and its control is primarily managed and funded by local councils. Throughout the country there is a network of more than 5284 km of levees, while gravel extraction to lower river water levels is also a popular flood control technique. The management of flooding in the country is shifting towards nature-based solutions, such as the widening of the Hutt River channel in Wellington. See also References External links Flood articles – BBC News Control Water management
Flood management
[ "Chemistry", "Engineering", "Environmental_science" ]
4,981
[ "Flood control", "Hydrology", "Flood", "Environmental engineering" ]
20,916,902
https://en.wikipedia.org/wiki/Relativistic%20heat%20conduction
Relativistic heat conduction refers to the modelling of heat conduction (and similar diffusion processes) in a way compatible with special relativity. In special (and general) relativity, the usual heat equation for non-relativistic heat conduction must be modified, as it leads to faster-than-light signal propagation. Relativistic heat conduction, therefore, encompasses a set of models for heat propagation in continuous media (solids, fluids, gases) that are consistent with relativistic causality, namely the principle that an effect must be within the light-cone associated to its cause. Any reasonable relativistic model for heat conduction must also be stable, in the sense that differences in temperature propagate both slower than light and are damped over time (this stability property is intimately intertwined with relativistic causality). Parabolic model (non-relativistic) Heat conduction in a Newtonian context is modelled by the Fourier equation, namely a parabolic partial differential equation of the kind ∂θ/∂t = α ∇²θ, where θ is temperature, t is time, α = k/(ρ c) is thermal diffusivity, k is thermal conductivity, ρ is density, and c is specific heat capacity. The Laplace operator, ∇², is defined in Cartesian coordinates as ∇² = ∂²/∂x² + ∂²/∂y² + ∂²/∂z². This Fourier equation can be derived by substituting Fourier's linear approximation of the heat flux vector, q, as a function of temperature gradient, q = −k ∇θ, into the first law of thermodynamics (the local energy balance) ρ c ∂θ/∂t + ∇·q = 0, where the del operator, ∇, is defined in 3D as ∇ = (∂/∂x, ∂/∂y, ∂/∂z). It can be shown that this definition of the heat flux vector also satisfies the second law of thermodynamics, ρ ∂s/∂t + ∇·(q/θ) = σ ≥ 0, where s is specific entropy and σ is entropy production. This mathematical model is inconsistent with special relativity: the Green function associated to the heat equation (also known as heat kernel) has support that extends outside the light-cone, leading to faster-than-light propagation of information. For example, consider a pulse of heat at the origin; then according to the Fourier equation, it is felt (i.e. temperature changes) at any distant point, instantaneously. The speed of propagation of heat is faster than the speed of light in vacuum, which is inadmissible within the framework of relativity. Hyperbolic model (relativistic) The parabolic model for heat conduction discussed above shows that the Fourier equation (and the more general Fick's law of diffusion) is incompatible with the theory of relativity for at least one reason: it admits infinite speed of propagation of the continuum field (in this case: heat, or temperature gradients). To overcome this contradiction, workers such as Carlo Cattaneo, Vernotte, Chester, and others proposed that the Fourier equation should be upgraded from the parabolic to a hyperbolic form, in which the temperature field is governed by (1/C²) ∂²θ/∂t² + (1/α) ∂θ/∂t = ∇²θ. In this equation, C is called the speed of second sound (that is related to excitations and quasiparticles, like phonons). The equation is known as the "hyperbolic heat conduction" (HHC) equation. Mathematically, the above equation is called the "telegraph equation", as it is formally equivalent to the telegrapher's equations, which can be derived from Maxwell's equations of electrodynamics. For the HHC equation to remain compatible with the first law of thermodynamics, it is necessary to modify the definition of the heat flux vector, q, to q + τ ∂q/∂t = −k ∇θ, where τ is a relaxation time, such that τ = α/C². This equation for the heat flux is often referred to as the "Maxwell-Cattaneo equation".
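A short derivation, sketched here for clarity and not part of the original article text, shows how the Maxwell-Cattaneo flux law combined with the energy balance above yields the hyperbolic (telegraph) equation; the notation follows the definitions given above.

```latex
% Requires amsmath. Combining Maxwell-Cattaneo with the energy balance
% rho*c*d(theta)/dt + div(q) = 0 gives the telegraph (HHC) equation.
\begin{align*}
  \mathbf{q} + \tau\,\frac{\partial \mathbf{q}}{\partial t} &= -k\,\nabla\theta
    && \text{(Maxwell--Cattaneo)}\\
  \nabla\cdot\mathbf{q} + \tau\,\frac{\partial}{\partial t}\!\left(\nabla\cdot\mathbf{q}\right)
    &= -k\,\nabla^2\theta
    && \text{(take the divergence)}\\
  -\rho c\,\frac{\partial \theta}{\partial t}
    - \tau\rho c\,\frac{\partial^2 \theta}{\partial t^2}
    &= -k\,\nabla^2\theta
    && \text{(substitute } \nabla\cdot\mathbf{q} = -\rho c\,\partial_t\theta \text{)}\\
  \frac{1}{C^2}\,\frac{\partial^2 \theta}{\partial t^2}
    + \frac{1}{\alpha}\,\frac{\partial \theta}{\partial t}
    &= \nabla^2\theta,
    \qquad \alpha = \frac{k}{\rho c},\quad \frac{\tau}{\alpha} = \frac{1}{C^2}.
\end{align*}
```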
The most important implication of the hyperbolic equation is that by switching from a parabolic (dissipative) to a hyperbolic (includes a conservative term) partial differential equation, there is the possibility of phenomena such as thermal resonance and thermal shock waves. Notes Heat conduction Thermodynamics Heat conduction Concepts in physics Hyperbolic partial differential equations Diffusion Transport phenomena
Relativistic heat conduction
[ "Physics", "Chemistry", "Mathematics", "Engineering" ]
820
[ "Transport phenomena", "Physical phenomena", "Diffusion", "Chemical engineering", "Special relativity", "Thermodynamics", "nan", "Theory of relativity", "Heat conduction", "Dynamical systems" ]
272,063
https://en.wikipedia.org/wiki/Component%20%28thermodynamics%29
In thermodynamics, a component is one of a collection of chemically independent constituents of a system. The number of components represents the minimum number of independent chemical species necessary to define the composition of all phases of the system. Calculating the number of components in a system is necessary when applying Gibbs' phase rule in determination of the number of degrees of freedom of a system. The number of components is equal to the number of distinct chemical species (constituents), minus the number of chemical reactions between them, minus the number of any constraints (like charge neutrality or balance of molar quantities). Calculation Suppose that a chemical system contains a number of elements and a number of chemical species (elements or compounds). The latter are combinations of the former, and each species can be represented as a sum of elements, the coefficients being the integers that denote the number of atoms of each element in a molecule of that species. Each species is therefore determined by a vector (a row of the resulting formula matrix), but the rows are not necessarily linearly independent. The number of linearly independent vectors equals the rank of the matrix, and the remaining vectors can be obtained by adding up multiples of those vectors. The chemical species represented by the independent vectors are components of the system. If, for example, the species are C (in the form of graphite), CO2 and CO, the rows of the formula matrix are (1, 0), (1, 2) and (1, 1), with columns corresponding to carbon and oxygen, and the rank is 2. Since CO can be expressed as CO = (1/2)C + (1/2)CO2, it is not independent, and C and CO2 can be chosen as the components of the system. There are two ways that the vectors can be dependent. One is that some pairs of elements always appear in the same ratio in each species; each such proportionality reduces the rank of the formula matrix by one. An example is a series of polymers that are composed of different numbers of identical units. In addition, some combinations of elements may be forbidden by chemical kinetics, which adds further constraints. Equivalently, the number of components equals the number of distinct chemical species minus the number of independent reactions that can take place between them, minus the number of additional constraints. Examples CaCO3 - CaO - CO2 system This is an example of a system with several phases, which at ordinary temperatures are two solids and a gas. There are three chemical species (CaCO3, CaO and CO2) and one reaction: CaCO3 ⇌ CaO + CO2. The number of components is then 3 - 1 = 2. Water - Hydrogen - Oxygen system The reactions included in the calculation are only those that actually occur under the given conditions, and not those that might occur under different conditions such as higher temperature or the presence of a catalyst. For example, the dissociation of water into its elements does not occur at ordinary temperature, so a system of water, hydrogen and oxygen at 25 °C has 3 independent components. Aqueous solution of 4 kinds of salts Consider an aqueous solution containing sodium chloride (NaCl), potassium chloride (KCl), sodium bromide (NaBr), and potassium bromide (KBr), in equilibrium with their respective solid phases. While 6 elements are present (H, O, Na, K, Cl, Br), their quantities are not independent due to the following constraints: The stoichiometry of water: n(H) = 2n(O). This constraint implies that knowing the quantity of one determines the other. Charge balance in the solution: n(Na) + n(K) = n(Cl) + n(Br). This constraint implies that knowing the quantity of 3 of the 4 ionic species (Na, K, Cl, Br) determines the fourth. Consequently, the number of independently variable constituents, and therefore the number of components, is 4. References Chemical thermodynamics
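As a small illustrative sketch of the rank calculation described above (not part of the original article; the species list is simply the graphite-CO2-CO example from the text), the number of components can be obtained as the rank of the formula matrix:

```python
# Minimal sketch: number of thermodynamic components = rank of the formula matrix.
# Rows are species, columns are elements (here C and O), following the
# graphite / CO2 / CO example discussed above.
import numpy as np

species = ["C (graphite)", "CO2", "CO"]
formula_matrix = np.array([
    [1, 0],  # C   : 1 carbon, 0 oxygen
    [1, 2],  # CO2 : 1 carbon, 2 oxygen
    [1, 1],  # CO  : 1 carbon, 1 oxygen
])

n_components = np.linalg.matrix_rank(formula_matrix)
print(f"{len(species)} species, {n_components} components")  # -> 3 species, 2 components
```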
Component (thermodynamics)
[ "Chemistry" ]
756
[ "Chemical thermodynamics" ]
272,483
https://en.wikipedia.org/wiki/Effective%20radiated%20power
Effective radiated power (ERP), synonymous with equivalent radiated power, is an IEEE standardized definition of directional radio frequency (RF) power, such as that emitted by a radio transmitter. It is the total power in watts that would have to be radiated by a half-wave dipole antenna to give the same radiation intensity (signal strength or power flux density in watts per square meter) as the actual source antenna at a distant receiver located in the direction of the antenna's strongest beam (main lobe). ERP measures the combination of the power emitted by the transmitter and the ability of the antenna to direct that power in a given direction. It is equal to the input power to the antenna multiplied by the gain of the antenna. It is used in electronics and telecommunications, particularly in broadcasting to quantify the apparent power of a broadcasting station experienced by listeners in its reception area. An alternate parameter that measures the same thing is effective isotropic radiated power (EIRP). Effective isotropic radiated power is the hypothetical power that would have to be radiated by an isotropic antenna to give the same ("equivalent") signal strength as the actual source antenna in the direction of the antenna's strongest beam. The difference between EIRP and ERP is that ERP compares the actual antenna to a half-wave dipole antenna, while EIRP compares it to a theoretical isotropic antenna. Since a half-wave dipole antenna has a gain of 1.64 (or 2.15 dB) compared to an isotropic radiator, if ERP and EIRP are expressed in watts their relation is EIRP = 1.64 × ERP. If they are expressed in decibels, EIRP (dB) = ERP (dB) + 2.15 dB. Definitions Effective radiated power and effective isotropic radiated power both measure the power density a radio transmitter and antenna (or other source of electromagnetic waves) radiate in a specific direction: in the direction of maximum signal strength (the "main lobe") of its radiation pattern. This apparent power is dependent on two factors: The total power output and the radiation pattern of the antenna – how much of that power is radiated in the direction of maximal intensity. The latter factor is quantified by the antenna gain, which is the ratio of the signal strength radiated by an antenna in its direction of maximum radiation to that radiated by a standard antenna. For example, a 1,000 watt transmitter feeding an antenna with a gain of 4× (equiv. 6 dBi) will have the same signal strength in the direction of its main lobe, and thus the same ERP and EIRP, as a 4,000 watt transmitter feeding an antenna with a gain of 1× (equiv. 0 dBi). So ERP and EIRP are measures of radiated power that can compare different combinations of transmitters and antennas on an equal basis. In spite of the names, ERP and EIRP do not measure transmitter power, or total power radiated by the antenna, they are just a measure of signal strength along the main lobe. They give no information about power radiated in other directions, or total power. ERP and EIRP are always greater than the actual total power radiated by the antenna.
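The following short sketch (not from the article; function names and the sample value are illustrative) applies the relations above to convert between ERP and EIRP in both watts and decibels relative to one watt.

```python
# Minimal sketch of the ERP <-> EIRP relations described above.
# The half-wave dipole gain over an isotropic radiator is 1.64x (about 2.15 dB).
import math

DIPOLE_GAIN_LINEAR = 1.64
DIPOLE_GAIN_DB = 10 * math.log10(DIPOLE_GAIN_LINEAR)   # ~2.15 dB

def erp_to_eirp_watts(erp_w: float) -> float:
    return erp_w * DIPOLE_GAIN_LINEAR

def eirp_to_erp_watts(eirp_w: float) -> float:
    return eirp_w / DIPOLE_GAIN_LINEAR

def watts_to_dbw(p_w: float) -> float:
    return 10 * math.log10(p_w)

# Example: a station quoted at 100 kW ERP.
erp_w = 100_000.0
eirp_w = erp_to_eirp_watts(erp_w)
print(f"ERP  = {watts_to_dbw(erp_w):.2f} dBW")    # 50.00 dBW
print(f"EIRP = {watts_to_dbw(eirp_w):.2f} dBW")   # ~52.15 dBW
```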
The difference between ERP and EIRP is that antenna gain has traditionally been measured in two different units, comparing the antenna to two different standard antennas; an isotropic antenna and a half-wave dipole antenna: Isotropic gain, Gi, is the ratio of the power density (signal strength in watts per square meter) received at a point far from the antenna (in the far field) in the direction of its maximum radiation (main lobe), to the power received at the same point from a hypothetical lossless isotropic antenna, which radiates equal power in all directions. Gain is often expressed in logarithmic units of decibels (dB). The decibel gain relative to an isotropic antenna (dBi) is given by G(dBi) = 10·log10(Gi). Dipole gain, Gd, is the ratio of the power density received from the antenna in the direction of its maximum radiation to the power density received from a lossless half-wave dipole antenna in the direction of its maximum radiation. The decibel gain relative to a dipole (dBd) is given by G(dBd) = 10·log10(Gd). In contrast to an isotropic antenna, the dipole has a "donut-shaped" radiation pattern, its radiated power is maximum in directions perpendicular to the antenna, declining to zero on the antenna axis. Since the radiation of the dipole is concentrated in horizontal directions, the gain of a half-wave dipole is greater than that of an isotropic antenna. The isotropic gain of a half-wave dipole is 1.64, or in decibels 10·log10(1.64) = 2.15 dBi, so Gd = Gi/1.64. In decibels, G(dBd) = G(dBi) − 2.15 dB. The two measures EIRP and ERP are based on the two different standard antennas above: EIRP is defined as the RMS power input in watts required to a lossless isotropic antenna to give the same maximum power density far from the antenna as the actual transmitter. It is equal to the power input to the transmitter's antenna multiplied by the isotropic antenna gain: EIRP = Pin·Gi. The ERP and EIRP are also often expressed in decibels (dB). The input power in decibels is usually calculated with comparison to a reference level of one watt (W): Pin(dBW) = 10·log10(Pin/1 W). Since multiplication of two factors is equivalent to addition of their decibel values, EIRP(dBW) = Pin(dBW) + G(dBi). ERP is defined as the RMS power input in watts required to a lossless half-wave dipole antenna to give the same maximum power density far from the antenna as the actual transmitter. It is equal to the power input to the transmitter's antenna multiplied by the antenna gain relative to a half-wave dipole: ERP = Pin·Gd. In decibels, ERP(dBW) = Pin(dBW) + G(dBd). Since the two definitions of gain only differ by a constant factor, so do ERP and EIRP: EIRP = 1.64·ERP. In decibels, EIRP(dBW) = ERP(dBW) + 2.15 dB. Relation to transmitter output power The transmitter is usually connected to the antenna through a transmission line and impedance matching network. Since these components may have significant losses, L, the power applied to the antenna is usually less than the output power of the transmitter: Pin(dBW) = PTX(dBW) − L(dB). The relation of ERP and EIRP to transmitter output power is ERP(dBW) = PTX(dBW) − L(dB) + G(dBd) and EIRP(dBW) = PTX(dBW) − L(dB) + G(dBi). Losses in the antenna itself are included in the gain. Relation to signal strength If the signal path is in free space (line-of-sight propagation with no multipath) the signal strength (power flux density in watts per square meter) of the radio signal on the main lobe axis at any particular distance from the antenna can be calculated from the EIRP or ERP.
Since an isotropic antenna radiates equal power flux density over a sphere centered on the antenna, and the area of a sphere with radius r is 4πr², the power flux density is S = EIRP / (4πr²). Since EIRP = 1.64·ERP, S = 1.64·ERP / (4πr²). After dividing out the factor of 4π we get: S ≈ 0.0796·EIRP / r² ≈ 0.131·ERP / r², with S in watts per square meter, EIRP and ERP in watts, and r in meters. However, if the radio waves travel by ground wave as is typical for medium or longwave broadcasting, skywave, or indirect paths play a part in transmission, the waves will suffer additional attenuation which depends on the terrain between the antennas, so these formulas are not valid. Dipole vs. isotropic radiators Because ERP is calculated as antenna gain (in a given direction) as compared with the maximum directivity of a half-wave dipole antenna, it creates a mathematically virtual effective dipole antenna oriented in the direction of the receiver. In other words, a notional receiver in a given direction from the transmitter would receive the same power if the source were replaced with an ideal dipole oriented with maximum directivity and matched polarization towards the receiver and with an antenna input power equal to the ERP. The receiver would not be able to determine a difference. Maximum directivity of an ideal half-wave dipole is a constant, i.e., 1.64 (2.15 dB). Therefore, ERP is always 2.15 dB less than EIRP. The ideal dipole antenna could be further replaced by an isotropic radiator (a purely mathematical device which cannot exist in the real world), and the receiver cannot know the difference so long as the input power is increased by 2.15 dB. The distinction between dBd and dBi is often left unstated and the reader is sometimes forced to infer which was used. For example, a Yagi–Uda antenna is constructed from several dipoles arranged at precise intervals to create greater energy focusing (directivity) than a simple dipole. Since it is constructed from dipoles, often its antenna gain is expressed in dBd, but listed only as dB. This ambiguity is undesirable with respect to engineering specifications. A Yagi–Uda antenna's maximum directivity, in the example that follows, is 8.77 dBd (10.92 dBi). Its gain necessarily must be less than this by the factor η, which must be negative in units of dB. Neither ERP nor EIRP can be calculated without knowledge of the power accepted by the antenna, i.e., it is not correct to use units of dBd or dBi with ERP and EIRP. Let us assume a 100 watt (20 dBW) transmitter with losses of 6 dB prior to the antenna. ERP < 22.77 dBW and EIRP < 24.92 dBW, both less than ideal by η in dB. Assume now that the receiver is in the first side-lobe of the transmitting antenna; each value is then further reduced by 7.2 dB, which is the decrease in directivity from the main lobe to the side-lobe of a Yagi–Uda. Therefore, anywhere along the side-lobe direction from this transmitter, a blind receiver could not tell the difference if a Yagi–Uda was replaced with either an ideal dipole (oriented towards the receiver) or an isotropic radiator with antenna input power increased by 1.57 dB. Polarization Polarization has not been taken into account so far, but it must be properly clarified. When considering the dipole radiator previously we assumed that it was perfectly aligned with the receiver. Now assume, however, that the receiving antenna is circularly polarized, and there will be a minimum 3 dB polarization loss regardless of antenna orientation. If the receiver is also a dipole, it is possible to align it orthogonally to the transmitter such that theoretically zero energy is received. However, this polarization loss is not accounted for in the calculation of ERP or EIRP. Rather, the receiving system designer must account for this loss as appropriate.
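As a brief sketch of the free-space relation given above (illustrative only; the 50 kW ERP figure and 10 km distance are arbitrary example values), the power flux density on the main-lobe axis can be computed directly from ERP or EIRP and the distance.

```python
# Minimal sketch: free-space power flux density on the main-lobe axis
# from ERP/EIRP, following S = EIRP / (4*pi*r^2) as described above.
import math

def flux_density_from_eirp(eirp_w: float, distance_m: float) -> float:
    """Power flux density in W/m^2 at distance_m metres (free space, main lobe)."""
    return eirp_w / (4 * math.pi * distance_m ** 2)

def flux_density_from_erp(erp_w: float, distance_m: float) -> float:
    """Same calculation, starting from ERP (EIRP = 1.64 * ERP)."""
    return flux_density_from_eirp(1.64 * erp_w, distance_m)

# Example: a 50 kW ERP station observed 10 km away.
s = flux_density_from_erp(50_000.0, 10_000.0)
print(f"{s:.2e} W/m^2")  # ~6.5e-05 W/m^2
```

Note that this only holds for line-of-sight free-space paths; ground-wave and skywave paths suffer additional terrain-dependent attenuation, as the text explains.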
For example, a cellular telephone tower has a fixed linear polarization, but the mobile handset must function well at any arbitrary orientation. Therefore, a handset design might provide dual polarization receive on the handset so that captured energy is maximized regardless of orientation, or the designer might use a circularly polarized antenna and account for the extra 3 dB of loss with amplification. FM example For example, an FM radio station which advertises that it has 100,000 watts of power actually has 100,000 watts ERP, and not an actual 100,000-watt transmitter. The transmitter power output (TPO) of such a station typically may be 10,000–20,000 watts, with a gain factor of 5–10× (5–10×, or 7–10 dB). In most antenna designs, gain is realized primarily by concentrating power toward the horizontal plane and suppressing it at upward and downward angles, through the use of phased arrays of antenna elements. The distribution of power versus elevation angle is known as the vertical pattern. When an antenna is also directional horizontally, gain and ERP will vary with azimuth (compass direction). Rather than the average power over all directions, it is the apparent power in the direction of the peak of the antenna's main lobe that is quoted as a station's ERP (this statement is just another way of stating the definition of ERP). This is particularly applicable to the huge ERPs reported for shortwave broadcasting stations, which use very narrow beam widths to get their signals across continents and oceans. United States regulatory usage ERP for FM radio in the United States is always relative to a theoretical reference half-wave dipole antenna. (That is, when calculating ERP, the most direct approach is to work with antenna gain in dB). To deal with antenna polarization, the Federal Communications Commission (FCC) lists ERP in both the horizontal and vertical measurements for FM and TV. Horizontal is the standard for both, but if the vertical ERP is larger it will be used instead. The maximum ERP for US FM broadcasting is usually 100,000 watts (FM Zone II) or 50,000 watts (in the generally more densely populated Zones I and I-A), though exact restrictions vary depending on the class of license and the antenna height above average terrain (HAAT). Some stations have been grandfathered in or, very infrequently, been given a waiver, and can exceed normal restrictions. Microwave band issues For most microwave systems, a completely non-directional isotropic antenna (one which radiates equally and perfectly well in every direction – a physical impossibility) is used as a reference antenna, and then one speaks of EIRP (effective isotropic radiated power) rather than ERP. This includes satellite transponders, radar, and other systems which use microwave dishes and reflectors rather than dipole-style antennas. Lower-frequency issues In the case of medium wave (AM) stations in the United States, power limits are set to the actual transmitter power output, and ERP is not used in normal calculations. Omnidirectional antennas used by a number of stations radiate the signal equally in all horizontal directions. Directional arrays are used to protect co- or adjacent channel stations, usually at night, but some run directionally continuously. While antenna efficiency and ground conductivity are taken into account when designing such an array, the FCC database shows the station's transmitter power output, not ERP. 
Related terms According to the Institution of Electrical Engineers (UK), ERP is often used as a general reference term for radiated power, but strictly speaking should only be used when the antenna is a half-wave dipole, and is used when referring to FM transmission. EMRP Effective monopole radiated power (EMRP) may be used in Europe, particularly in relation to medium wave broadcasting antennas. This is the same as ERP, except that a short vertical antenna (i.e. a short monopole) is used as the reference antenna instead of a half-wave dipole. CMF Cymomotive force (CMF) is an alternative term used for expressing radiation intensity in volts, particularly at the lower frequencies. It is used in Australian legislation regulating AM broadcasting services, which describes it as: "for a transmitter, [it] means the product, expressed in volts, of: (a) the electric field strength at a given point in space, due to the operation of the transmitter; and (b) the distance of that point from the transmitter's antenna". It relates to AM broadcasting only, and expresses the field strength in "microvolts per metre at a distance of 1 kilometre from the transmitting antenna". HAAT The height above average terrain for VHF and higher frequencies is extremely important when considering ERP, as the signal coverage (broadcast range) produced by a given ERP dramatically increases with antenna height. Because of this, it is possible for a station of only a few hundred watts ERP to cover more area than a station of a few thousand watts ERP, if its signal travels above obstructions on the ground. See also Nominal power (radio broadcasting) List of North American broadcast station classes References Antennas (radio) Radio transmission power Broadcast engineering Logarithmic scales of measurement
Effective radiated power
[ "Physics", "Mathematics", "Engineering" ]
3,199
[ "Broadcast engineering", "Physical quantities", "Radio transmission power", "Quantity", "Power (physics)", "Electronic engineering", "Logarithmic scales of measurement" ]
272,496
https://en.wikipedia.org/wiki/Height%20above%20average%20terrain
Height above average terrain (HAAT), or (less popularly) effective height above average terrain (EHAAT), is the vertical position of an antenna site above the surrounding landscape. HAAT is used extensively in FM radio and television, as it is more important than effective radiated power (ERP) in determining the range of broadcasts (VHF and UHF in particular, as they are line of sight transmissions). For international coordination, it is officially measured in meters, even by the Federal Communications Commission in the United States, as Canada and Mexico have extensive border zones where stations can be received on either side of the international boundaries. Stations that want to increase above a certain HAAT must reduce their power accordingly, based on the maximum distance their station class is allowed to cover (see List of North American broadcast station classes for more information on this). The FCC procedure to calculate HAAT is: from the proposed or actual antenna site, either 12 or 16 radials were drawn, and points at 2, 4, 6, 8, and radius along each radial were used. The entire radial graph could be rotated to achieve the best effect for the station. The altitude of the antenna site, minus the average altitude of all the specified points, is the HAAT. This can create some unusual cases, particularly in mountainous regions—it is possible to have a negative number for HAAT (the transmitter would not be located underground, but rather in a valley, with hills on both sides taller than the transmitter itself, for example). The FCC has divided the Contiguous United States into three zones for the determination of spacing between FM and TV stations using the same frequencies. FM and TV stations are assigned maximum ERP and HAAT values, depending on their assigned zones, to prevent co-channel interference. The FCC regulations for ERP and HAAT are listed under Title 47, Part 73 of the Code of Federal Regulations (CFR). FM Zones I and I-A Maximum HAAT: Maximum ERP: 50 kilowatts (47dBW) Minimum co-channel separation: Zones II and III Maximum HAAT: Maximum ERP: 100 kilowatts (50dBW) Minimum co-channel separation: . TV In all zones, maximum ERP for analog TV transmitters is as follows: VHF 2-6: 100 kilowatts (50dBW) (analog); 45 kilowatts (46.5dBW) (digital) VHF 7-13: 316 kilowatts (55dBW) (analog); 160 kilowatts (52dBW) (digital) UHF: 5,000 kilowatts (67dBW) (analog); 1,000 kilowatts (60dBW) (digital) Maximum HAAT Zone I: Zones II and III: Minimum co-channel separation Zone layouts Zone I (the most densely populated zone) consists of the entire land masses of the following states: Connecticut, Delaware, Illinois, Indiana, Maryland, Massachusetts, New Jersey, Ohio, Pennsylvania, Rhode Island, and West Virginia; in addition to the northern and eastern portions of Virginia; the areas of Michigan and southeastern Wisconsin south of 43° 30' north latitude; the coastal strip of Maine; the areas of New Hampshire and Vermont south of 45° north latitude; and the areas of western New York south of 43° 30' north latitude and eastern New York south of 45° north latitude. In addition, Zone I-A (FM only) consists of all of California south of 40° north latitude, Puerto Rico and the U.S. Virgin Islands (If the dividing line between Zones I and II runs through a city, that city is considered to be in Zone I.). Zones I and I-A have the most "grandfathered" overpowered stations, which are allowed the same extended coverage areas that they had before the zones were established. 
One of the most powerful of these stations is WBCT in Grand Rapids, Michigan, which operates at 320,000 watts and 238 meters (781 ft) HAAT. Zone III (the zone with the flattest terrain) consists of all of Florida and the areas of Alabama, Georgia, Louisiana, Mississippi, and Texas within approximately of the Gulf of Mexico. Zone II is all the rest of the Continental United States, Alaska and Hawaii. See also Above mean sea level (AMSL) Above ground level (AGL) Canadian Radio-television and Telecommunications Commission (CRTC) List of broadcast station classes Topographic prominence – a similar measurement for mountains External links 47 CFR Part 73 Index (2005) FCC: Mass Media Calculated Contours FCC: HAAT Calculator "Superpower" Grandfathered FM stations Antennas Broadcasting Height Vertical extent
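As a rough illustrative sketch of the averaging procedure described in the HAAT section above (the radial elevation samples below are invented numbers; actual FCC calculations use the prescribed radials, sample spacings, and terrain databases), HAAT is simply the antenna site elevation minus the mean elevation of the sampled terrain points:

```python
# Minimal sketch of the HAAT idea: antenna (radiation-center) elevation minus the
# average elevation of terrain points sampled along several radials.
# The sample values below are invented for illustration, not real terrain data.

def haat_m(antenna_elevation_m: float, radial_samples_m: list[list[float]]) -> float:
    """radial_samples_m: one list of terrain elevations (metres) per radial."""
    all_points = [elev for radial in radial_samples_m for elev in radial]
    average_terrain = sum(all_points) / len(all_points)
    return antenna_elevation_m - average_terrain

radials = [
    [210.0, 195.0, 180.0, 172.0, 165.0],   # radial 1: terrain sloping away
    [230.0, 240.0, 236.0, 228.0, 220.0],   # radial 2: a ridge
    [150.0, 140.0, 135.0, 132.0, 130.0],   # radial 3: a valley
    [200.0, 205.0, 210.0, 215.0, 220.0],   # radial 4: rising ground
]

print(f"HAAT = {haat_m(antenna_elevation_m=380.0, radial_samples_m=radials):.1f} m")
```

A transmitter in a valley surrounded by higher terrain would produce a negative result from the same calculation, which is the "negative HAAT" case mentioned above.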
Height above average terrain
[ "Physics", "Mathematics", "Engineering" ]
967
[ "Antennas", "Vertical extent", "Telecommunications engineering", "Physical quantities", "Distance", "Quantity", "Size", "Height", "Wikipedia categories named after physical quantities" ]
273,524
https://en.wikipedia.org/wiki/Linear%20particle%20accelerator
A linear particle accelerator (often shortened to linac) is a type of particle accelerator that accelerates charged subatomic particles or ions to a high speed by subjecting them to a series of oscillating electric potentials along a linear beamline. The principles for such machines were proposed by Gustav Ising in 1924, while the first machine that worked was constructed by Rolf Widerøe in 1928 at the RWTH Aachen University. Linacs have many applications: they generate X-rays and high energy electrons for medicinal purposes in radiation therapy, serve as particle injectors for higher-energy accelerators, and are used directly to achieve the highest kinetic energy for light particles (electrons and positrons) for particle physics. The design of a linac depends on the type of particle that is being accelerated: electrons, protons or ions. Linacs range in size from a cathode-ray tube (which is a type of linac) to the linac at the SLAC National Accelerator Laboratory in Menlo Park, California. History In 1924, Gustav Ising published the first description of a linear particle accelerator using a series of accelerating gaps. Particles would proceed down a series of tubes. At a regular frequency, an accelerating voltage would be applied across each gap. As the particles gained speed while the frequency remained constant, the gaps would be spaced farther and farther apart, in order to ensure the particle would see a voltage applied as it reached each gap. Ising never successfully implemented this design. Rolf Wideroe discovered Ising's paper in 1927, and as part of his PhD thesis he built an 88-inch long, two gap version of the device. Where Ising had proposed a spark gap as the voltage source, Wideroe used a 25kV vacuum tube oscillator. He successfully demonstrated that he had accelerated sodium and potassium ions to an energy of 50,000 electron volts (50 keV), twice the energy they would have received if accelerated only once by the tube. By successfully accelerating a particle multiple times using the same voltage source, Wideroe demonstrated the utility of radio frequency (RF) acceleration. This type of linac was limited by the voltage sources that were available at the time, and it was not until after World War II that Luis Alvarez was able to use newly developed high frequency oscillators to design the first resonant cavity drift tube linac. An Alvarez linac differs from the Wideroe type in that the RF power is applied to the entire resonant chamber through which the particle travels, and the central tubes are only used to shield the particles during the decelerating portion of the oscillator's phase. Using this approach to acceleration meant that Alvarez's first linac was able to achieve proton energies of 31.5 MeV in 1947, the highest that had ever been reached at the time. The initial Alvarez type linacs had no strong mechanism for keeping the beam focused and were limited in length and energy as a result. The development of the strong focusing principle in the early 1950s led to the installation of focusing quadrupole magnets inside the drift tubes, allowing for longer and thus more powerful linacs. Two of the earliest examples of Alvarez linacs with strong focusing magnets were built at CERN and Brookhaven National Laboratory. In 1947, at about the same time that Alvarez was developing his linac concept for protons, William Hansen constructed the first travelling-wave electron accelerator at Stanford University. 
Electrons are sufficiently lighter than protons that they achieve speeds close to the speed of light early in the acceleration process. As a result, "accelerating" electrons increase in energy but can be treated as having a constant velocity from an accelerator design standpoint. This allowed Hansen to use an accelerating structure consisting of a horizontal waveguide loaded by a series of discs. The 1947 accelerator had an energy of 6 MeV. Over time, electron acceleration at the SLAC National Accelerator Laboratory would extend to a length of about 3.2 km (2 miles) and an output energy of 50 GeV. As linear accelerators were developed with higher beam currents, using magnetic fields to focus proton and heavy ion beams presented difficulties for the initial stages of the accelerator. Because the magnetic force is dependent on the particle velocity, it was desirable to create a type of accelerator which could simultaneously accelerate and focus low-to-mid energy hadrons. In 1970, Soviet physicists I. M. Kapchinsky and Vladimir Teplyakov proposed the radio-frequency quadrupole (RFQ) type of accelerating structure. RFQs use vanes or rods with precisely designed shapes in a resonant cavity to produce complex electric fields. These fields provide simultaneous acceleration and focusing to injected particle beams. Beginning in the 1960s, scientists at Stanford and elsewhere began to explore the use of superconducting radio frequency cavities for particle acceleration. Superconducting cavities made of niobium alloys allow for much more efficient acceleration, as a substantially higher fraction of the input power could be applied to the beam rather than lost to heat. Some of the earliest superconducting linacs included the Superconducting Linear Accelerator (for electrons) at Stanford and the Argonne Tandem Linear Accelerator System (for protons and heavy ions) at Argonne National Laboratory. Basic principles of operation Radiofrequency acceleration When a charged particle is placed in an electromagnetic field it experiences a force given by the Lorentz force law, F = q(E + v × B), where q is the charge on the particle, E is the electric field, v is the particle velocity, and B is the magnetic field. The cross product in the magnetic field term means that static magnetic fields cannot be used for particle acceleration, as the magnetic force acts perpendicularly to the direction of particle motion. As electrostatic breakdown limits the maximum constant voltage which can be applied across a gap to produce an electric field, most accelerators use some form of RF acceleration. In RF acceleration, the particle traverses a series of accelerating regions, driven by a source of voltage in such a way that the particle sees an accelerating field as it crosses each region. In this type of acceleration, particles must necessarily travel in "bunches" corresponding to the portion of the oscillator's cycle where the electric field is pointing in the intended direction of acceleration. If a single oscillating voltage source is used to drive a series of gaps, those gaps must be placed increasingly far apart as the speed of the particle increases. This is to ensure that the particle "sees" the same phase of the oscillator's cycle as it reaches each gap. As particles asymptotically approach the speed of light, the gap separation becomes constant: additional applied force increases the energy of the particles but does not significantly alter their speed.
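To make the gap-spacing argument above concrete, here is a minimal sketch under stated assumptions (a 200 MHz oscillator, a proton, 100 kV energy gain per gap, and non-relativistic kinematics, which is adequate for the first few cells). Each drift section must be traversed in half an RF period, so its length is v/(2f), and the sections get longer as the particle speeds up.

```python
# Minimal sketch: spacing of accelerating gaps in a Wideroe-style linac.
# The particle must cross each drift section in half an RF period, so the
# section length is v / (2 * f). The numbers below are illustrative assumptions.
import math

RF_FREQUENCY_HZ = 200e6        # assumed oscillator frequency
ENERGY_GAIN_PER_GAP_EV = 1e5   # assumed 100 kV across each gap
PROTON_MASS_KG = 1.67262e-27
ELEMENTARY_CHARGE_C = 1.602e-19

kinetic_energy_j = 0.0
for gap in range(1, 9):
    kinetic_energy_j += ENERGY_GAIN_PER_GAP_EV * ELEMENTARY_CHARGE_C
    # Non-relativistic speed after this gap (fine for protons below ~1 MeV).
    speed = math.sqrt(2 * kinetic_energy_j / PROTON_MASS_KG)
    drift_length_m = speed / (2 * RF_FREQUENCY_HZ)
    print(f"gap {gap}: {ENERGY_GAIN_PER_GAP_EV * gap / 1e3:6.0f} keV, "
          f"drift length {drift_length_m * 100:5.1f} cm")
```

The printed drift lengths grow roughly as the square root of the number of gaps, which is why the cylindrical electrodes in the figure-style descriptions below get progressively longer with distance from the source.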
Focusing In order to ensure particles do not escape the accelerator, it is necessary to provide some form of focusing to redirect particles moving away from the central trajectory back towards the intended path. With the discovery of strong focusing, quadrupole magnets are used to actively redirect particles moving away from the reference path. As quadrupole magnets are focusing in one transverse direction and defocusing in the perpendicular direction, it is necessary to use groups of magnets to provide an overall focusing effect in both directions. Phase stability Focusing along the direction of travel, also known as phase stability, is an inherent property of RF acceleration. If the particles in a bunch all reach the accelerating region during the rising phase of the oscillating field, then particles which arrive early will see slightly less voltage than the "reference" particle at the center of the bunch. Those particles will therefore receive slightly less acceleration and eventually fall behind the reference particle. Correspondingly, particles which arrive after the reference particle will receive slightly more acceleration, and will catch up to the reference as a result. This automatic correction occurs at each accelerating gap, so the bunch is refocused along the direction of travel each time it is accelerated. Construction and operation A linear particle accelerator consists of the following parts: A straight hollow pipe vacuum chamber which contains the other components. It is evacuated with a vacuum pump so that the accelerated particles will not collide with air molecules. The length will vary with the application. If the device is used for the production of X-rays for inspection or therapy, then the pipe may be only 0.5 to 1.5 meters long. If the device is to be an injector for a synchrotron, it may be about ten meters long. If the device is used as the primary accelerator for nuclear particle investigations, it may be several thousand meters long. The particle source (S) at one end of the chamber which produces the charged particles which the machine accelerates. The design of the source depends on the particle that is being accelerated. Electrons are generated by a cold cathode, a hot cathode, a photocathode, or radio frequency ion sources. Protons are generated in an ion source, which can have many different designs. If heavier particles are to be accelerated, (e.g., uranium ions), a specialized ion source is needed. The source has its own high voltage supply to inject the particles into the beamline. Extending along the pipe from the source is a series of open-ended cylindrical electrodes (C1, C2, C3, C4), whose length increases progressively with the distance from the source. The particles from the source pass through these electrodes. The length of each electrode is determined by the frequency and power of the driving power source and the particle to be accelerated, so that the particle passes through each electrode in exactly one-half cycle of the accelerating voltage. The mass of the particle has a large effect on the length of the cylindrical electrodes; for example an electron is considerably lighter than a proton and so will generally require a much smaller section of cylindrical electrodes as it accelerates very quickly. A target with which the particles collide, located at the end of the accelerating electrodes. If electrons are accelerated to produce X-rays, then a water-cooled tungsten target is used. 
Various target materials are used when protons or other nuclei are accelerated, depending upon the specific investigation. Behind the target are various detectors to detect the particles resulting from the collision of the incoming particles with the atoms of the target. Many linacs serve as the initial accelerator stage for larger particle accelerators such as synchrotrons and storage rings, and in this case after leaving the electrodes the accelerated particles enter the next stage of the accelerator. An electronic oscillator and amplifier (G) which generates a radio frequency AC voltage of high potential (usually thousands of volts) which is applied to the cylindrical electrodes. This is the accelerating voltage which produces the electric field which accelerates the particles. Opposite phase voltage is applied to successive electrodes. A high power accelerator will have a separate amplifier to power each electrode, all synchronized to the same frequency. As shown in the animation, the oscillating voltage applied to alternate cylindrical electrodes has opposite polarity (180° out of phase), so adjacent electrodes have opposite voltages. This creates an oscillating electric field (E) in the gap between each pair of electrodes, which exerts force on the particles when they pass through, imparting energy to them by accelerating them. The particle source injects a group of particles into the first electrode once each cycle of the voltage, when the charge on the electrode is opposite to the charge on the particles. Each time the particle bunch passes through an electrode, the oscillating voltage changes polarity, so when the particles reach the gap between electrodes the electric field is in the correct direction to accelerate them. Therefore, the particles accelerate to a faster speed each time they pass between electrodes; there is little electric field inside the electrodes so the particles travel at a constant speed within each electrode. The particles are injected at the right time so that the oscillating voltage differential between electrodes is maximum as the particles cross each gap. If the peak voltage applied between the electrodes is V volts, and the charge on each particle is q elementary charges, the particle gains an equal increment of energy of qV electron volts when passing through each gap. Thus the output energy of the particles is NqV electron volts, where N is the number of accelerating electrodes in the machine. At speeds near the speed of light, the incremental velocity increase will be small, with the energy appearing as an increase in the mass of the particles. In portions of the accelerator where this occurs, the tubular electrode lengths will be almost constant. Additional magnetic or electrostatic lens elements may be included to ensure that the beam remains in the center of the pipe and its electrodes. Very long accelerators may maintain a precise alignment of their components through the use of servo systems guided by a laser beam. Concepts in development Various new concepts are in development as of 2021. The primary goal is to make linear accelerators cheaper, with better focused beams, higher energy or higher beam current. Induction linear accelerator Induction linear accelerators use the electric field induced by a time-varying magnetic field for acceleration, like the betatron. 
The particle beam passes through a series of ring-shaped ferrite cores standing one behind the other, which are magnetized by high-current pulses, and in turn each generate an electrical field strength pulse along the axis of the beam direction. Induction linear accelerators are considered for short high current pulses from electrons but also from heavy ions. The concept goes back to the work of Nicholas Christofilos. Its realization is highly dependent on progress in the development of more suitable ferrite materials. With electrons, pulse currents of up to 5 kiloamps at energies up to 5 MeV and pulse durations in the range of 20 to 300 nanoseconds were achieved. Energy recovery linac In previous electron linear accelerators, the accelerated particles are used only once and then fed into an absorber (beam dump), in which their residual energy is converted into heat. In an energy recovery linac (ERL), the electrons are accelerated in resonators and then used, for example, in undulators. The electrons used are fed back through the accelerator, out of phase by 180 degrees. They therefore pass through the resonators in the decelerating phase and thus return their remaining energy to the field. The concept is comparable to the hybrid drive of motor vehicles, where the kinetic energy released during braking is made available for the next acceleration by charging a battery. The Brookhaven National Laboratory and the Helmholtz-Zentrum Berlin (with the project "bERLinPro") have reported on corresponding development work. The Berlin experimental accelerator uses superconducting niobium cavity resonators. In 2014, three free-electron lasers based on ERLs were in operation worldwide: at the Jefferson Lab (US), at the Budker Institute of Nuclear Physics (Russia) and at JAEA (Japan). At the University of Mainz, an ERL called MESA is expected to begin operation in 2024. Compact Linear Collider The concept of the Compact Linear Collider (CLIC) (original name CERN Linear Collider, with the same abbreviation) for electrons and positrons provides a traveling wave accelerator for energies of the order of 1 tera-electron volt (TeV). Instead of the otherwise necessary numerous klystron amplifiers to generate the acceleration power, a second parallel electron linear accelerator of lower energy is to be used, which works with superconducting cavities in which standing waves are formed. High-frequency power is extracted from it at regular intervals and transmitted to the main accelerator. In this way, the very high acceleration field strength of 80 MV/m should be achieved. Wakefield accelerator (plasma accelerator) In cavity resonators, the dielectric strength limits the maximum acceleration that can be achieved within a certain distance. This limit can be circumvented by using waves excited in a plasma to generate the accelerating field, as in wakefield (Kielfeld) accelerators: a laser or particle beam excites an oscillation in a plasma, which is associated with very strong electric field strengths. This means that significantly (by factors of 100 to 1000) more compact linear accelerators can possibly be built. Experiments involving high power lasers in metal vapour plasmas suggest that a beam line length reduction from some tens of metres to a few cm is quite possible. 
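The claimed reduction in length by factors of hundreds follows directly from the ratio of achievable accelerating gradients. The sketch below compares the length needed to reach a given energy gain for a conventional RF gradient and a plasma-wakefield gradient; the target energy and the plasma gradient are representative assumptions for illustration, while the conventional gradient is of the order quoted above for CLIC.

# Length needed to reach a target energy gain at a given average accelerating gradient.
def required_length_m(energy_gain_mev, gradient_mv_per_m):
    return energy_gain_mev / gradient_mv_per_m

TARGET_GAIN_MEV = 1000.0        # illustrative: 1 GeV of energy gain
conventional_gradient = 80.0    # MV/m, of the order quoted for CLIC structures
plasma_gradient = 50_000.0      # MV/m (50 GV/m), an assumed wakefield gradient

print(f"Conventional RF: {required_length_m(TARGET_GAIN_MEV, conventional_gradient):.1f} m")
print(f"Plasma wakefield: {required_length_m(TARGET_GAIN_MEV, plasma_gradient) * 100:.1f} cm")
# About 12.5 m versus about 2 cm for the assumed numbers, a reduction of several
# hundred times, consistent with the "tens of metres to a few cm" estimate above.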
Compact medical accelerators The LIGHT program (Linac for Image-Guided Hadron Therapy) hopes to create a design capable of accelerating protons to 200 MeV or so for medical use over a distance of a few tens of metres, by optimising and nesting existing accelerator techniques. The current design (2020) uses the highest practical bunch frequency (currently ~3 GHz) for a Radio-frequency quadrupole (RFQ) stage from injection at 50 kV DC to ~5 MeV bunches, a Side Coupled Drift Tube Linac (SCDTL) to accelerate from 5 MeV to ~40 MeV, and a final Cell Coupled Linac (CCL) stage, taking the output to 200–230 MeV. Each stage is optimised to allow close coupling and synchronous operation during the beam energy build-up. The project aim is to make proton therapy a more accessible mainstream medicine as an alternative to existing radiotherapy. Modern concepts The higher the frequency of the acceleration voltage selected, the more individual acceleration thrusts per path length a particle of a given speed experiences, and the shorter the accelerator can therefore be overall. That is why accelerator technology developed in the pursuit of higher particle energies, especially towards higher frequencies. The linear accelerator concepts (often called accelerator structures in technical terms) that have been used since around 1950 work with frequencies in the range from around to a few gigahertz (GHz) and use the electric field component of electromagnetic waves. Standing waves and traveling waves When it comes to energies of more than a few MeV, accelerators for ions are different from those for electrons. The reason for this is the large mass difference between the particles. Electrons are already close to the speed of light, the absolute speed limit, at a few MeV; with further acceleration, as described by relativistic mechanics, almost only their energy and momentum increase. On the other hand, with ions of this energy range, the speed also increases significantly due to further acceleration. The acceleration concepts used today for ions are always based on electromagnetic standing waves that are formed in suitable resonators. Depending on the type of particle, energy range and other parameters, very different types of resonators are used; the following sections only cover some of them. Electrons can also be accelerated with standing waves above a few MeV. An advantageous alternative here, however, is a progressive wave, a traveling wave. The phase velocity of the traveling wave must be roughly equal to the particle speed. Therefore, this technique is only suitable when the particles are almost at the speed of light, so that their speed only increases very little. The development of high-frequency oscillators and power amplifiers from the 1940s, especially the klystron, was essential for these two acceleration techniques. The first larger linear accelerator with standing waves - for protons - was built in 1945/46 at the Lawrence Berkeley National Laboratory under the direction of Luis W. Alvarez. The frequency used was . The first electron accelerator with traveling waves of around was developed a little later at Stanford University by W.W. Hansen and colleagues. In the two diagrams, the curve and arrows indicate the force acting on the particles. Only at the points with the correct direction of the electric field vector, i.e. the correct direction of force, can particles absorb energy from the wave. (An increase in speed cannot be seen in the scale of these images.) 
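To see why the cell geometry of a proton linac must change along its length, the sketch below estimates a gap-to-gap spacing of half an RF wavelength times the particle's velocity fraction (roughly v/(2f), following the half-cycle timing described earlier for the drift-tube arrangement) at the 3 GHz bunch frequency and the stage energies mentioned for the LIGHT design. The spacing formula and the choice of energies are used here purely as an illustration, not as the actual cell dimensions of that machine.

import math

C = 299_792_458.0           # speed of light, m/s
PROTON_REST_MEV = 938.272   # proton rest energy

def beta(kinetic_energy_mev):
    gamma = 1.0 + kinetic_energy_mev / PROTON_REST_MEV
    return math.sqrt(1.0 - 1.0 / gamma**2)

freq_hz = 3.0e9                   # RF (bunch) frequency
wavelength_m = C / freq_hz        # free-space RF wavelength, about 10 cm at 3 GHz

for t_mev in (5.0, 40.0, 230.0):  # stage energies quoted for the LIGHT design
    spacing_cm = beta(t_mev) * wavelength_m / 2.0 * 100.0
    print(f"{t_mev:6.1f} MeV proton: beta = {beta(t_mev):.3f}, "
          f"gap spacing ~ {spacing_cm:.2f} cm")
# The synchronous spacing grows from about half a centimetre to several
# centimetres, which is why different structure types are chained together.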
Advantages The linear accelerator could produce higher particle energies than the previous electrostatic particle accelerators (the Cockcroft–Walton accelerator and Van de Graaff generator) that were in use when it was invented. In these machines, the particles were only accelerated once by the applied voltage, so the particle energy in electron volts was equal to the accelerating voltage on the machine, which was limited to a few million volts by insulation breakdown. In the linac, the particles are accelerated multiple times by the applied voltage, so the particle energy is not limited by the accelerating voltage. High power linacs are also being developed for production of electrons at relativistic speeds, required since fast electrons traveling in an arc will lose energy through synchrotron radiation; this limits the maximum power that can be imparted to electrons in a synchrotron of given size. Linacs are also capable of prodigious output, producing a nearly continuous stream of particles, whereas a synchrotron will only periodically raise the particles to sufficient energy to merit a "shot" at the target. (The burst can be held or stored in the ring at energy to give the experimental electronics time to work, but the average output current is still limited.) The high density of the output makes the linac particularly attractive for use in loading storage ring facilities with particles in preparation for particle to particle collisions. The high mass output also makes the device practical for the production of antimatter particles, which are generally difficult to obtain, being only a small fraction of a target's collision products. These may then be stored and further used to study matter-antimatter annihilation. Medical linacs Linac-based radiation therapy for cancer treatment began with the first patient treated in 1953 in London, UK, at the Hammersmith Hospital, with an 8 MV machine built by Metropolitan-Vickers and installed in 1952, as the first dedicated medical linac. A short while later in 1954, a 6 MV linac was installed in Stanford, USA, which began treatments in 1956. Medical linear accelerators accelerate electrons using a tuned-cavity waveguide, in which the RF power creates a standing wave. Some linacs have short, vertically mounted waveguides, while higher energy machines tend to have a horizontal, longer waveguide and a bending magnet to turn the beam vertically towards the patient. Medical linacs use monoenergetic electron beams between 4 and 25 MeV, giving an X-ray output with a spectrum of energies up to and including the electron energy when the electrons are directed at a high-density (such as tungsten) target. The electrons or X-rays can be used to treat both benign and malignant disease. The LINAC produces a reliable, flexible and accurate radiation beam. The versatility of LINAC is a potential advantage over cobalt therapy as a treatment tool. In addition, the device can simply be powered off when not in use; there is no source requiring heavy shielding – although the treatment room itself requires considerable shielding of the walls, doors, ceiling etc. to prevent escape of scattered radiation. Prolonged use of high powered (>18 MeV) machines can induce a significant amount of radiation within the metal parts of the head of the machine after power to the machine has been removed (i.e. they become an active source and the necessary precautions must be observed). 
In 2019 a Little Linac model kit, containing 82 building blocks, was developed for children undergoing radiotherapy treatment for cancer. The hope is that building the model will alleviate some of the stress experienced by the child before undergoing treatment by helping them to understand what the treatment entails. The kit was developed by Professor David Brettle, Institute of Physics and Engineering in Medicine (IPEM) in collaboration with manufacturers Best-Lock Ltd. The model can be seen at the Science Museum, London. Application for medical isotope development The expected shortages of Mo-99, and the technetium-99m medical isotope obtained from it, have also shed light onto linear accelerator technology to produce Mo-99 from non-enriched Uranium through neutron bombardment. This would enable the medical isotope industry to manufacture this crucial isotope by a sub-critical process. The aging facilities, for example the Chalk River Laboratories in Ontario, Canada, which still now produce most Mo-99 from highly enriched uranium could be replaced by this new process. In this way, the sub-critical loading of soluble uranium salts in heavy water with subsequent photo neutron bombardment and extraction of the target product, Mo-99, will be achieved. Disadvantages The device length limits the locations where one may be placed. A great number of driver devices and their associated power supplies are required, increasing the construction and maintenance expense of this portion. If the walls of the accelerating cavities are made of normally conducting material and the accelerating fields are large, the wall resistivity converts electric energy into heat quickly. On the other hand, superconductors also need constant cooling to keep them below their critical temperature, and the accelerating fields are limited by quenches. Therefore, high energy accelerators such as SLAC, still the longest in the world (in its various generations), are run in short pulses, limiting the average current output and forcing the experimental detectors to handle data coming in short bursts. See also Accelerator physics Beamline Compact Linear Collider Dielectric wall accelerator Duoplasmatron International Linear Collider Particle accelerator Particle beam SLAC National Accelerator Laboratory References External links Linear Particle Accelerator (LINAC) Animation by Ionactive 2MV Tandetron linear particle accelerator in Ljubljana, Slovenia Accelerator physics Types of magnets X-rays Cancer treatments
Linear particle accelerator
[ "Physics" ]
5,195
[ "Applied and interdisciplinary physics", "X-rays", "Spectrum (physical sciences)", "Electromagnetic spectrum", "Experimental physics", "Accelerator physics" ]
273,679
https://en.wikipedia.org/wiki/Astronomical%20spectroscopy
Astronomical spectroscopy is the study of astronomy using the techniques of spectroscopy to measure the spectrum of electromagnetic radiation, including visible light, ultraviolet, X-ray, infrared and radio waves that radiate from stars and other celestial objects. A stellar spectrum can reveal many properties of stars, such as their chemical composition, temperature, density, mass, distance and luminosity. Spectroscopy can show the velocity of motion towards or away from the observer by measuring the Doppler shift. Spectroscopy is also used to study the physical properties of many other types of celestial objects such as planets, nebulae, galaxies, and active galactic nuclei. Background Astronomical spectroscopy is used to measure three major bands of radiation in the electromagnetic spectrum: visible light, radio waves, and X-rays. While all spectroscopy looks at specific bands of the spectrum, different methods are required to acquire the signal depending on the frequency. Ozone (O3) and molecular oxygen (O2) absorb light with wavelengths under 300 nm, meaning that X-ray and ultraviolet spectroscopy require the use of a satellite telescope or rocket mounted detectors. Radio signals have much longer wavelengths than optical signals, and require the use of antennas or radio dishes. Infrared light is absorbed by atmospheric water and carbon dioxide, so while the equipment is similar to that used in optical spectroscopy, satellites are required to record much of the infrared spectrum. Optical spectroscopy Physicists have been looking at the solar spectrum since Isaac Newton first used a simple prism to observe the refractive properties of light. In the early 1800s Joseph von Fraunhofer used his skills as a glassmaker to create very pure prisms, which allowed him to observe 574 dark lines in a seemingly continuous spectrum. Soon after this, he combined telescope and prism to observe the spectrum of Venus, the Moon, Mars, and various stars such as Betelgeuse; his company continued to manufacture and sell high-quality refracting telescopes based on his original designs until its closure in 1884. The resolution of a prism is limited by its size; a larger prism will provide a more detailed spectrum, but the increase in mass makes it unsuitable for highly detailed work. This issue was resolved in the early 1900s with the development of high-quality reflection gratings by J.S. Plaskett at the Dominion Observatory in Ottawa, Canada. Light striking a mirror will reflect at the same angle, however a small portion of the light will be refracted at a different angle; this is dependent upon the indices of refraction of the materials and the wavelength of the light. By creating a "blazed" grating which utilizes a large number of parallel mirrors, the small portion of light can be focused and visualized. These new spectroscopes were more detailed than a prism, required less light, and could be focused on a specific region of the spectrum by tilting the grating. The limitation to a blazed grating is the width of the mirrors, which can only be ground a finite amount before focus is lost; the maximum is around 1000 lines/mm. In order to overcome this limitation holographic gratings were developed. Volume phase holographic gratings use a thin film of dichromated gelatin on a glass surface, which is subsequently exposed to a wave pattern created by an interferometer. 
This wave pattern sets up a reflection pattern similar to the blazed gratings but utilizing Bragg diffraction, a process where the angle of reflection is dependent on the arrangement of the atoms in the gelatin. The holographic gratings can have up to 6000 lines/mm and can be up to twice as efficient in collecting light as blazed gratings. Because they are sealed between two sheets of glass, the holographic gratings are very versatile, potentially lasting decades before needing replacement. Light dispersed by the grating or prism in a spectrograph can be recorded by a detector. Historically, photographic plates were widely used to record spectra until electronic detectors were developed, and today optical spectrographs most often employ charge-coupled devices (CCDs). The wavelength scale of a spectrum can be calibrated by observing the spectrum of emission lines of known wavelength from a gas-discharge lamp. The flux scale of a spectrum can be calibrated as a function of wavelength by comparison with an observation of a standard star with corrections for atmospheric absorption of light; this is known as spectrophotometry. Radio spectroscopy Radio astronomy was founded with the work of Karl Jansky in the early 1930s, while working for Bell Labs. He built a radio antenna to look at potential sources of interference for transatlantic radio transmissions. One of the sources of noise discovered came not from Earth, but from the center of the Milky Way, in the constellation Sagittarius. In 1942, JS Hey captured the Sun's radio frequency using military radar receivers. Radio spectroscopy started with the discovery of the 21-centimeter H I line in 1951. Radio interferometry Radio interferometry was pioneered in 1946, when Joseph Lade Pawsey, Ruby Payne-Scott and Lindsay McCready used a single antenna atop a sea cliff to observe 200 MHz solar radiation. Two incident beams, one directly from the sun and the other reflected from the sea surface, generated the necessary interference. The first multi-receiver interferometer was built in the same year by Martin Ryle and Vonberg. In 1960, Ryle and Antony Hewish published the technique of aperture synthesis to analyze interferometer data. The aperture synthesis process, which involves autocorrelating and discrete Fourier transforming the incoming signal, recovers both the spatial and frequency variation in flux. The result is a 3D image whose third axis is frequency. For this work, Ryle and Hewish were jointly awarded the 1974 Nobel Prize in Physics. X-ray spectroscopy Stars and their properties Chemical properties Newton used a prism to split white light into a spectrum of color, and Fraunhofer's high-quality prisms allowed scientists to see dark lines of an unknown origin. In the 1850s, Gustav Kirchhoff and Robert Bunsen described the phenomena behind these dark lines. Hot solid objects produce light with a continuous spectrum, hot gases emit light at specific wavelengths, and hot solid objects surrounded by cooler gases show a near-continuous spectrum with dark lines corresponding to the emission lines of the gases. By comparing the absorption lines of the Sun with emission spectra of known gases, the chemical composition of stars can be determined. The major Fraunhofer lines, and the elements with which they are associated, appear in the following table. Designations from the early Balmer Series are shown in parentheses. Not all of the elements in the Sun were immediately identified. 
Two examples are listed below: In 1868 Norman Lockyer and Pierre Janssen independently observed a line next to the sodium doublet (D1 and D2) which Lockyer determined to be a new element. He named it helium, but it was not until 1895 that the element was found on Earth. In 1869 the astronomers Charles Augustus Young and William Harkness independently observed a novel green emission line in the Sun's corona during an eclipse. This "new" element was incorrectly named coronium, as it was only found in the corona. It was not until the 1930s that Walter Grotrian and Bengt Edlén discovered that the spectral line at 530.3 nm was due to highly ionized iron (Fe13+). Other unusual lines in the coronal spectrum are also caused by highly charged ions, such as nickel and calcium, the high ionization being due to the extreme temperature of the solar corona. To date more than 20,000 absorption lines have been listed for the Sun between 293.5 and 877.0 nm, yet only approximately 75% of these lines have been linked to elemental absorption. By analyzing the equivalent width of each spectral line in an emission spectrum, both the elements present in a star and their relative abundances can be determined. Using this information, stars can be categorized into stellar populations; Population I stars are the youngest stars and have the highest metal content (the Sun is a Pop I star), while Population III stars are the oldest stars with a very low metal content. Temperature and size In 1860 Gustav Kirchhoff proposed the idea of a black body, a material that emits electromagnetic radiation at all wavelengths. In 1894 Wilhelm Wien derived an expression relating the temperature (T) of a black body to its peak emission wavelength (λmax): λmax = b/T, where b is a constant of proportionality called Wien's displacement constant, equal to approximately 2.898 × 10⁻³ m·K. This equation is called Wien's law. By measuring the peak wavelength of a star, the surface temperature can be determined. For example, if the peak wavelength of a star is 502 nm the corresponding temperature will be 5772 kelvins. The luminosity of a star is a measure of the electromagnetic energy output in a given amount of time. Luminosity (L) can be related to the temperature (T) of a star by L = 4πR²σT⁴, where R is the radius of the star and σ is the Stefan–Boltzmann constant, with a value of approximately 5.670 × 10⁻⁸ W m⁻² K⁻⁴. Thus, when both luminosity and temperature are known (via direct measurement and calculation) the radius of a star can be determined. Galaxies The spectra of galaxies look similar to stellar spectra, as they consist of the combined light of billions of stars. Doppler shift studies of galaxy clusters by Fritz Zwicky in 1937 found that the galaxies in a cluster were moving much faster than seemed to be possible from the mass of the cluster inferred from the visible light. Zwicky hypothesized that there must be a great deal of non-luminous matter in the galaxy clusters, which became known as dark matter. Since his discovery, astronomers have determined that a large portion of galaxies (and most of the universe) is made up of dark matter. In 2003, however, four galaxies (NGC 821, NGC 3379, NGC 4494, and NGC 4697) were found to have little to no dark matter influencing the motion of the stars contained within them; the reason behind the lack of dark matter is unknown. In the 1950s, strong radio sources were found to be associated with very dim, very red objects. When the first spectrum of one of these objects was taken there were absorption lines at wavelengths where none were expected. 
It was soon realised that what was observed was a normal galactic spectrum, but highly red shifted. These were named quasi-stellar radio sources, or quasars, by Hong-Yee Chiu in 1964. Quasars are now thought to be galaxies formed in the early years of our universe, with their extreme energy output powered by super-massive black holes. The properties of a galaxy can also be determined by analyzing the stars found within them. NGC 4550, a galaxy in the Virgo Cluster, has a large portion of its stars rotating in the opposite direction as the other portion. It is believed that the galaxy is the combination of two smaller galaxies that were rotating in opposite directions to each other. Bright stars in galaxies can also help determine the distance to a galaxy, which may be a more accurate method than parallax or standard candles. Interstellar medium The interstellar medium is matter that occupies the space between star systems in a galaxy. 99% of this matter is gaseous – hydrogen, helium, and smaller quantities of other ionized elements such as oxygen. The other 1% is dust particles, thought to be mainly graphite, silicates, and ices. Clouds of the dust and gas are referred to as nebulae. There are three main types of nebula: absorption, reflection, and emission nebulae. Absorption (or dark) nebulae are made of dust and gas in such quantities that they obscure the starlight behind them, making photometry difficult. Reflection nebulae, as their name suggest, reflect the light of nearby stars. Their spectra are the same as the stars surrounding them, though the light is bluer; shorter wavelengths scatter better than longer wavelengths. Emission nebulae emit light at specific wavelengths depending on their chemical composition. Gaseous emission nebulae In the early years of astronomical spectroscopy, scientists were puzzled by the spectrum of gaseous nebulae. In 1864 William Huggins noticed that many nebulae showed only emission lines rather than a full spectrum like stars. From the work of Kirchhoff, he concluded that nebulae must contain "enormous masses of luminous gas or vapour." However, there were several emission lines that could not be linked to any terrestrial element, brightest among them lines at 495.9 nm and 500.7 nm. These lines were attributed to a new element, nebulium, until Ira Bowen determined in 1927 that the emission lines were from highly ionised oxygen (O+2). These emission lines could not be replicated in a laboratory because they are forbidden lines; the low density of a nebula (one atom per cubic centimetre) allows for metastable ions to decay via forbidden line emission rather than collisions with other atoms. Not all emission nebulae are found around or near stars where solar heating causes ionisation. The majority of gaseous emission nebulae are formed of neutral hydrogen. In the ground state neutral hydrogen has two possible spin states: the electron has either the same spin or the opposite spin of the proton. When the atom transitions between these two states, it releases an emission or absorption line of 21 cm. This line is within the radio range and allows for very precise measurements: Velocity of the cloud can be measured via Doppler shift The intensity of the 21 cm line gives the density and number of atoms in the cloud The temperature of the cloud can be calculated Using this information, the shape of the Milky Way has been determined to be a spiral galaxy, though the exact number and position of the spiral arms is the subject of ongoing research. 
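The first of the 21 cm diagnostics listed above, measuring a cloud's velocity from the Doppler shift of the line, can be illustrated with a short calculation. The rest frequency of the neutral-hydrogen line (about 1420.406 MHz) is a standard value; the observed frequency used below is an invented example, and the non-relativistic Doppler formula is assumed.

C_KM_S = 299_792.458            # speed of light in km/s
REST_FREQ_MHZ = 1420.40575      # rest frequency of the neutral-hydrogen 21 cm line

def radial_velocity_km_s(observed_freq_mhz):
    # Non-relativistic Doppler formula; positive values mean the cloud is receding.
    return C_KM_S * (REST_FREQ_MHZ - observed_freq_mhz) / REST_FREQ_MHZ

# Hypothetical observation: the line is detected 0.5 MHz below its rest frequency.
print(f"{radial_velocity_km_s(1419.90575):.1f} km/s")   # about 105.5 km/s, receding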
Complex molecules Dust and molecules in the interstellar medium not only obscure photometry, but also cause absorption lines in spectroscopy. Their spectral features are generated by transitions of component electrons between different energy levels, or by rotational or vibrational spectra. Detection usually occurs in radio, microwave, or infrared portions of the spectrum. The chemical reactions that form these molecules can happen in cold, diffuse clouds or in dense regions illuminated with ultraviolet light. Most known compounds in space are organic, ranging from small molecules, e.g. acetylene C2H2 and acetone (CH3)2CO; to entire classes of large molecules, e.g. fullerenes and polycyclic aromatic hydrocarbons; to solids, such as graphite or other sooty material. Motion in the universe Stars and interstellar gas are bound by gravity to form galaxies, and groups of galaxies can be bound by gravity in galaxy clusters. With the exception of stars in the Milky Way and the galaxies in the Local Group, almost all galaxies are moving away from Earth due to the expansion of the universe. Doppler effect and redshift The motion of stellar objects can be determined by looking at their spectra. Because of the Doppler effect, objects moving towards someone are blueshifted, and objects moving away are redshifted. The wavelength of redshifted light is longer, appearing redder than the source. Conversely, the wavelength of blueshifted light is shorter, appearing bluer than the source light: λ = λ0(1 + v/c), where λ0 is the emitted wavelength, v is the velocity of the object, c is the speed of light, and λ is the observed wavelength. Note that v<0 corresponds to λ<λ0, a blueshifted wavelength. A redshifted absorption or emission line will appear more towards the red end of the spectrum than a stationary line. In 1913 Vesto Slipher determined the Andromeda Galaxy was blueshifted, meaning it was moving towards the Milky Way. He recorded the spectra of 20 other galaxies, all but four of which were redshifted, and was able to calculate their velocities relative to the Earth. Edwin Hubble would later use this information, as well as his own observations, to define Hubble's law: The further a galaxy is from the Earth, the faster it is moving away. Hubble's law can be generalised to v = H0d, where v is the velocity (or Hubble Flow), H0 is the Hubble Constant, and d is the distance from Earth. Redshift (z) can be expressed by the following equations: z = (λobserved − λemitted)/λemitted = (femitted − fobserved)/fobserved. In these equations, frequency is denoted by f and wavelength by λ. The larger the value of z, the more redshifted the light and the farther away the object is from the Earth. As of January 2013, the largest galaxy redshift of z~12 was found using the Hubble Ultra-Deep Field, corresponding to an age of over 13 billion years (the universe is approximately 13.82 billion years old). The Doppler effect and Hubble's law can be combined to form the equation z ≈ v/c = H0d/c, where c is the speed of light. Peculiar motion Objects that are gravitationally bound will rotate around a common center of mass. For stellar bodies, this motion is known as peculiar velocity and can alter the Hubble Flow. Thus, an extra term for the peculiar motion needs to be added to Hubble's law: v = H0d + vpec. This motion can cause confusion when looking at a solar or galactic spectrum, because the expected redshift based on the simple Hubble law will be obscured by the peculiar motion. For example, the shape and size of the Virgo Cluster has been a matter of great scientific scrutiny due to the very large peculiar velocities of the galaxies in the cluster. 
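As a worked example of the relations above, the sketch computes z from an observed and an emitted wavelength and then, for small z, the recession velocity and the Hubble-law distance. The Hα rest wavelength and the round Hubble constant value are standard figures; the observed wavelength is an invented example, and the low-z approximation ignores any peculiar velocity.

C_KM_S = 299_792.458     # speed of light, km/s
H0 = 70.0                # Hubble constant, km/s per Mpc (a commonly used round value)

def redshift(observed_nm, emitted_nm):
    return (observed_nm - emitted_nm) / emitted_nm

# Hypothetical galaxy: the H-alpha line (rest wavelength 656.28 nm) is observed at 661.0 nm.
z = redshift(661.0, 656.28)
velocity = z * C_KM_S          # valid only for z << 1
distance_mpc = velocity / H0   # Hubble's law: v = H0 * d

print(f"z = {z:.5f}, v = {velocity:.0f} km/s, d = {distance_mpc:.0f} Mpc")
# About z = 0.0072, v = 2160 km/s and d = 31 Mpc for these assumed numbers.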
Binary stars Just as planets can be gravitationally bound to stars, pairs of stars can orbit each other. Some binary stars are visual binaries, meaning they can be observed orbiting each other through a telescope. Some binary stars, however, are too close together to be resolved. These two stars, when viewed through a spectrometer, will show a composite spectrum: the spectrum of each star will be added together. This composite spectrum becomes easier to detect when the stars are of similar luminosity and of different spectral class. Spectroscopic binaries can be also detected due to their radial velocity; as they orbit around each other one star may be moving towards the Earth whilst the other moves away, causing a Doppler shift in the composite spectrum. The orbital plane of the system determines the magnitude of the observed shift: if the observer is looking perpendicular to the orbital plane there will be no observed radial velocity. For example, a person looking at a carousel from the side will see the animals moving toward and away from them, whereas if they look from directly above they will only be moving in the horizontal plane. Planets, asteroids, and comets Planets, asteroids, and comets all reflect light from their parent stars and emit their own light. For cooler objects, including Solar System planets and asteroids, most of the emission is at infrared wavelengths we cannot see, but that are routinely measured with spectrometers. For objects surrounded by gas, such as comets and planets with atmospheres, further emission and absorption happens at specific wavelengths in the gas, imprinting the spectrum of the gas on that of the solid object. In the case of worlds with thick atmospheres or complete cloud or haze cover (such as the four giant planets, Venus, and Saturn's satellite Titan), the spectrum is mostly or completely due to the atmosphere alone. Planets The reflected light of a planet contains absorption bands due to minerals in the rocks present for rocky bodies, or due to the elements and molecules present in the atmosphere. To date over 3,500 exoplanets have been discovered. These include so-called Hot Jupiters, as well as Earth-like planets. Using spectroscopy, compounds such as alkali metals, water vapor, carbon monoxide, carbon dioxide, and methane have all been discovered. Asteroids Asteroids can be classified into three major types according to their spectra. The original categories were created by Clark R. Chapman, David Morrison, and Ben Zellner in 1975, and further expanded by David J. Tholen in 1984. In what is now known as the Tholen classification, the C-types are made of carbonaceous material, S-types consist mainly of silicates, and X-types are 'metallic'. There are other classifications for unusual asteroids. C- and S-type asteroids are the most common asteroids. In 2002 the Tholen classification was further "evolved" into the SMASS classification, expanding the number of categories from 14 to 26 to account for more precise spectroscopic analysis of the asteroids. Comets The spectra of comets consist of a reflected solar spectrum from the dusty clouds surrounding the comet, as well as emission lines from gaseous atoms and molecules excited to fluorescence by sunlight and/or chemical reactions. For example, the chemical composition of Comet ISON was determined by spectroscopy due to the prominent emission lines of cyanogen (CN), as well as two- and three-carbon atoms (C2 and C3). 
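The claim that cooler Solar System bodies emit mostly in the infrared follows from Wien's law introduced earlier. The sketch below evaluates the peak emission wavelength for a few assumed blackbody temperatures; the temperatures are illustrative values chosen for the example, not measurements quoted in the text.

WIEN_B = 2.898e-3   # Wien's displacement constant, m·K

def peak_wavelength_um(temperature_k):
    # Wien's law: lambda_max = b / T, converted to micrometres.
    return WIEN_B / temperature_k * 1e6

for name, temp in (("Sun-like star", 5772.0), ("Venus cloud tops", 230.0),
                   ("main-belt asteroid", 200.0)):
    print(f"{name:20s} T = {temp:6.0f} K -> peak at {peak_wavelength_um(temp):6.2f} um")
# The star peaks near 0.5 um, in visible light, while the cool bodies peak
# near 13-15 um, deep in the thermal infrared.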
Nearby comets can even be seen in X-ray as solar wind ions flying to the coma are neutralized. The cometary X-ray spectra therefore reflect the state of the solar wind rather than that of the comet. See also References Spectroscopy Observational astronomy
Astronomical spectroscopy
[ "Physics", "Chemistry", "Astronomy" ]
4,316
[ "Molecular physics", "Spectrum (physical sciences)", "Instrumental analysis", "Observational astronomy", "Astrophysics", "Astronomical spectroscopy", "Spectroscopy", "Astronomical sub-disciplines" ]
273,831
https://en.wikipedia.org/wiki/Weber%E2%80%93Fechner%20law
The Weber–Fechner laws are two related scientific laws in the field of psychophysics, known as Weber's law and Fechner's law. Both relate to human perception, more specifically the relation between the actual change in a physical stimulus and the perceived change. This includes stimuli to all senses: vision, hearing, taste, touch, and smell. Ernst Heinrich Weber states that "the minimum increase of stimulus which will produce a perceptible increase of sensation is proportional to the pre-existent stimulus," while Gustav Fechner's law is an inference from Weber's law (with additional assumptions) which states that the intensity of our sensation increases as the logarithm of an increase in energy rather than as rapidly as the increase. History and formulation of the laws Both Weber's law and Fechner's law were formulated by Gustav Theodor Fechner (1801–1887). They were first published in 1860 in the work Elemente der Psychophysik (Elements of Psychophysics). This publication was the first work ever in this field, and where Fechner coined the term psychophysics to describe the interdisciplinary study of how humans perceive physical magnitudes. He made the claim that "...psycho-physics is an exact doctrine of the relation of function or dependence between body and soul." Weber's law Ernst Heinrich Weber (1795–1878) was one of the first persons to approach the study of the human response to a physical stimulus in a quantitative fashion. Fechner was a student of Weber and named his first law in honor of his mentor, since it was Weber who had conducted the experiments needed to formulate the law. Fechner formulated several versions of the law, all communicating the same idea. One formulation states: What this means is that the perceived change in stimuli is inversely proportional to the initial stimuli. Weber's law also incorporates the just-noticeable difference (JND). This is the smallest change in stimuli that can be perceived. As stated above, the JND is proportional to the initial stimuli intensity . Mathematically, it can be described as where is the reference stimulus and is a constant. It may be written as , with being the sensation, being a constant, and being the physical intensity of the stimulus. Weber's law always fails at low intensities, near and below the absolute detection threshold, and often also at high intensities, but may be approximately true across a wide middle range of intensities. Weber contrast Although Weber's law includes a statement of the proportionality of a perceived change to initial stimuli, Weber only refers to this as a rule of thumb regarding human perception. It was Fechner who formulated this statement as a mathematical expression referred to as Weber contrast. Weber contrast is not part of Weber's law. Fechner's law Fechner noticed in his own studies that different individuals have different sensitivity to certain stimuli. For example, the ability to perceive differences in light intensity could be related to how good that individual's vision is. He also noted that how the human sensitivity to stimuli changes depends on which sense is affected. He used this to formulate another version of Weber's law that he named die Maßformel, the "measurement formula". Fechner's law states that the subjective sensation is proportional to the logarithm of the stimulus intensity. 
According to this law, human perceptions of sight and sound work as follows: Perceived loudness/brightness is proportional to the logarithm of the actual intensity measured with an accurate nonhuman instrument. The relationship between stimulus and perception is logarithmic. This logarithmic relationship means that if a stimulus varies as a geometric progression (i.e., multiplied by a fixed factor), the corresponding perception is altered in an arithmetic progression (i.e., in additive constant amounts). For example, if a stimulus is tripled in strength (i.e., 3 × 1), the corresponding perception may be two times as strong as its original value (i.e., 1 + 1). If the stimulus is again tripled in strength (i.e., 3 × 3 × 1), the corresponding perception will be three times as strong as its original value (i.e., 1 + 1 + 1). Hence, for multiplications in stimulus strength, the strength of perception only adds. The mathematical derivations of the torques on a simple beam balance produce a description that is strictly compatible with Weber's law. Since Weber's law fails at low intensity, so does Fechner's law. An early reference to "Fechner's ... law" was in 1875 by Ludimar Hermann in Elements of Human Physiology. Deriving Fechner's law Fechner's law is a mathematical derivation of Weber contrast. Integrating the mathematical expression for Weber contrast, dp = k dS/S, gives: p = k ln S + C, where C is a constant of integration and ln is the natural logarithm. To solve for C, assume that the perceived stimulus p becomes zero at some threshold stimulus S0. Using this as a constraint, set p = 0 and S = S0. This gives: C = −k ln S0. Substituting this in the integrated expression, the law can be written as: p = k ln(S/S0). The constant k is sense-specific and must be determined depending on the sense and type of stimulus. Types of perception Weber and Fechner conducted research on differences in light intensity and the perceived difference in weight. Other sense modalities provide only mixed support for either Weber's law or Fechner's law. Weight perception Weber found that the just noticeable difference (JND) between two weights was approximately proportional to the weights. Thus, if the weight of 105 g can (only just) be distinguished from that of 100 g, the JND (or differential threshold) is 5 g. If the mass is doubled, the differential threshold also doubles to 10 g, so that 210 g can be distinguished from 200 g. In this example, a weight (any weight) seems to have to increase by 5% for someone to be able to reliably detect the increase, and this minimum required fractional increase (of 5/100 of the original weight) is referred to as the "Weber fraction" for detecting changes in weight. Other discrimination tasks, such as detecting changes in brightness, or in tone height (pure tone frequency), or in the length of a line shown on a screen, may have different Weber fractions, but they all obey Weber's law in that observed values need to change by at least some small but constant proportion of the current value to ensure human observers will reliably be able to detect that change. Fechner did not conduct any experiments on how perceived heaviness increased with the mass of the stimulus. Instead, he assumed that all JNDs are subjectively equal, and argued mathematically that this would produce a logarithmic relation between the stimulus intensity and the sensation. These assumptions have both been questioned. Following the work of S. S. 
Stevens, many researchers came to believe in the 1960s that the Stevens's power law was a more general psychophysical principle than Fechner's logarithmic law. Sound Weber's law does not quite hold for loudness. It is a fair approximation for higher intensities, but not for lower amplitudes. Limitation of Weber's law in the auditory system Weber's law does not hold at perception of higher intensities. Intensity discrimination improves at higher intensities. The first demonstration of the phenomena was presented by Riesz in 1928, in Physical Review. This deviation of the Weber's law is known as the "near miss" of the Weber's law. This term was coined by McGill and Goldberg in their paper of 1968 in Perception & Psychophysics. Their study consisted of intensity discrimination in pure tones. Further studies have shown that the near miss is observed in noise stimuli as well. Jesteadt et al. (1977) demonstrated that the near miss holds across all the frequencies, and that the intensity discrimination is not a function of frequency, and that the change in discrimination with level can be represented by a single function across all frequencies: . Vision The eye senses brightness approximately logarithmically over a moderate range and stellar magnitude is measured on a logarithmic scale. This magnitude scale was invented by the ancient Greek astronomer Hipparchus in about 150 B.C. He ranked the stars he could see in terms of their brightness, with 1 representing the brightest down to 6 representing the faintest, though now the scale has been extended beyond these limits; an increase in 5 magnitudes corresponds to a decrease in brightness by a factor of 100. Modern researchers have attempted to incorporate such perceptual effects into mathematical models of vision. Limitations of Weber's law in visual regularity perception Perception of Glass patterns and mirror symmetries in the presence of noise follows Weber's law in the middle range of regularity-to-noise ratios (S), but in both outer ranges, sensitivity to variations is disproportionally lower. As Maloney, Mitchison, & Barlow (1987) showed for Glass patterns, and as van der Helm (2010) showed for mirror symmetries, perception of these visual regularities in the whole range of regularity-to-noise ratios follows the law p = g/(2+1/S) with parameter g to be estimated using experimental data. Limitation of Weber's law at low light levels For vision, Weber's law implies constancy of luminance contrast. Suppose a target object is set against a background luminance . In order to be just visible, the target must be brighter or fainter than the background by some small amount . The Weber contrast is defined as , and Weber's law says that should be constant for all . Human vision follows Weber's law closely at normal daylight levels (i.e. in the photopic range) but begins to break down at twilight levels (the mesopic range) and is completely inapplicable at low light levels (scotopic vision). This can be seen in data collected by Blackwell and plotted by Crumey, showing threshold increment versus background luminance for various targets sizes. At daylight levels, the curves are approximately straight with slope 1, i.e. = , implying is constant. At the very darkest background levels ( ≲ 10− 5 cd m−2, approximately 25 mag arcsec−2) the curves are flat - this is where the only visual perception is the observer's own neural noise ('dark light'). In the intermediate range, a portion can be approximated by the De Vries - Rose law, related to Ricco's law. 
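The logarithmic magnitude scale mentioned above can be written as a formula: a difference of m magnitudes corresponds to a brightness ratio of 100^(m/5), or equivalently Δm = 2.5·log10(brightness ratio). The short sketch below checks the factor-of-100 statement for a five-magnitude difference; the modern formula (Pogson's ratio) is a standard convention rather than something derived in the text, and the example values are illustrative.

import math

def magnitude_difference(flux_bright, flux_faint):
    # Pogson's relation: 5 magnitudes correspond to a flux ratio of exactly 100.
    return 2.5 * math.log10(flux_bright / flux_faint)

def flux_ratio(delta_magnitude):
    return 100.0 ** (delta_magnitude / 5.0)

print(magnitude_difference(100.0, 1.0))   # 5.0  -> a source 100x brighter differs by 5 magnitudes
print(flux_ratio(1.0))                    # ~2.512, the brightness step of one magnitude
print(flux_ratio(6.0 - 1.0))              # ~100, Hipparchus' class 1 versus class 6 stars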
Logarithmic coding schemes for neurons Lognormal distributions Activation of neurons by sensory stimuli in many parts of the brain is by a proportional law: neurons change their spike rate by about 10–30%, when a stimulus (e.g. a natural scene for vision) has been applied. However, as Scheler (2017) showed, the population distribution of the intrinsic excitability or gain of a neuron is a heavy tail distribution, more precisely a lognormal shape, which is equivalent to a logarithmic coding scheme. Neurons may therefore spike with 5–10 fold different mean rates. Obviously, this increases the dynamic range of a neuronal population, while stimulus-derived changes remain small and linear proportional. An analysis of the length of comments in internet discussion boards across several languages shows that comment lengths obey the lognormal distribution with great precision. The authors explain the distribution as a manifestation of the Weber–Fechner law. Other applications The Weber–Fechner law has been applied in other fields of research than just the human senses. Numerical cognition Psychological studies show that it becomes increasingly difficult to discriminate between two numbers as the difference between them decreases. This is called the distance effect. This is important in areas of magnitude estimation, such as dealing with large scales and estimating distances. It may also play a role in explaining why consumers neglect to shop around to save a small percentage on a large purchase, but will shop around to save a large percentage on a small purchase which represents a much smaller absolute dollar amount. Pharmacology It has been hypothesized that dose-response relationships can follow Weber's Law which suggests this law – which is often applied at the sensory level – originates from underlying chemoreceptor responses to cellular signaling dose relationships within the body. Dose response can be related to the Hill equation, which is closer to a power law. Public finance There is a new branch of the literature on public finance hypothesizing that the Weber–Fechner law can explain the increasing levels of public expenditures in mature democracies. Election after election, voters demand more public goods to be effectively impressed; therefore, politicians try to increase the magnitude of this "signal" of competence – the size and composition of public expenditures – in order to collect more votes. Emotion Preliminary research has found that pleasant emotions adhere to Weber’s Law, with accuracy in judging their intensity decreasing as pleasantness increases. However, this pattern wasn't observed for unpleasant emotions, suggesting a survival-related need for accurately discerning high-intensity negative emotions. See also Human nature Level (logarithmic quantity) Nervous system Ricco's law Stevens's power law Sone Neural coding References Further reading (135 pages) External links Perception Behavioral concepts Psychophysics Mathematical psychology
Weber–Fechner law
[ "Physics", "Mathematics", "Biology" ]
2,747
[ "Behavior", "Mathematical psychology", "Applied and interdisciplinary physics", "Behavioral concepts", "Applied mathematics", "Psychophysics", "Behaviorism" ]
273,854
https://en.wikipedia.org/wiki/Telomerase
Telomerase, also called terminal transferase, is a ribonucleoprotein that adds a species-dependent telomere repeat sequence to the 3' end of telomeres. A telomere is a region of repetitive sequences at each end of the chromosomes of most eukaryotes. Telomeres protect the end of the chromosome from DNA damage or from fusion with neighbouring chromosomes. The fruit fly Drosophila melanogaster lacks telomerase, but instead uses retrotransposons to maintain telomeres. Telomerase is a reverse transcriptase enzyme that carries its own RNA molecule (e.g., with the sequence 3′-CCCAAUCCC-5′ in Trypanosoma brucei) which is used as a template when it elongates telomeres. Telomerase is active in gametes and most cancer cells, but is normally absent in most somatic cells. History The existence of a compensatory mechanism for telomere shortening was first found by Soviet biologist Alexey Olovnikov in 1973, who also suggested the telomere hypothesis of aging and the telomere's connections to cancer and perhaps some neurodegenerative diseases. Telomerase in the ciliate Tetrahymena was discovered by Carol W. Greider and Elizabeth Blackburn in 1984. Together with Jack W. Szostak, Greider and Blackburn were awarded the 2009 Nobel Prize in Physiology or Medicine for their discovery. Later the cryo-EM structure of telomerase was first reported in T. thermophila, to be followed a few years later by the cryo-EM structure of telomerase in humans. The role of telomeres and telomerase in cell aging and cancer was established by scientists at biotechnology company Geron with the cloning of the RNA and catalytic components of human telomerase and the development of a polymerase chain reaction (PCR) based assay for telomerase activity called the TRAP assay, which surveys telomerase activity in multiple types of cancer. The negative stain electron microscopy (EM) structures of human and Tetrahymena telomerases were characterized in 2013. Two years later, the first cryo-electron microscopy (cryo-EM) structure of telomerase holoenzyme (Tetrahymena) was determined. In 2018, the structure of human telomerase was determined through cryo-EM by UC Berkeley scientists. Human telomerase structure The molecular composition of the human telomerase complex was determined by Scott Cohen and his team at the Children's Medical Research Institute (Sydney Australia) and consists of two molecules each of human telomerase reverse transcriptase (TERT), Telomerase RNA Component (TR or TERC), and dyskerin (DKC1). The genes of telomerase subunits, which include TERT, TERC, DKC1 and TEP1, are located on different chromosomes. The human TERT gene (hTERT) is translated into a protein of 1132 amino acids. TERT polypeptide folds with (and carries) TERC, a non-coding RNA (451 nucleotides long). TERT has a 'mitten' structure that allows it to wrap around the chromosome to add single-stranded telomere repeats. TERT is a reverse transcriptase, which is a class of enzymes that creates single-stranded DNA using single-stranded RNA as a template. The protein consists of four conserved domains (RNA-Binding Domain (TRBD), fingers, palm and thumb), organized into a "right hand" ring configuration that shares common features with retroviral reverse transcriptases, viral RNA replicases and bacteriophage B-family DNA polymerases. TERT proteins from many eukaryotes have been sequenced. 
Mechanism The shelterin protein TPP1 is both necessary and sufficient to recruit the telomerase enzyme to telomeres, and is the only shelterin protein in direct contact with telomerase. By using TERC, TERT can add a six-nucleotide repeating sequence, 5'-TTAGGG (in vertebrates; the sequence differs in other organisms) to the 3' strand of chromosomes. These TTAGGG repeats (with their various protein binding partners) are called telomeres. The template region of TERC is 3'-CAAUCCCAAUC-5'. Telomerase can bind the first few nucleotides of the template to the last telomere sequence on the chromosome, add a new telomere repeat (5'-GGTTAG-3') sequence, let go, realign the new 3'-end of telomere to the template, and repeat the process. Telomerase reverses telomere shortening. Clinical implications Aging Telomerase restores short bits of DNA known as telomeres, which are otherwise shortened after repeated division of a cell via mitosis. In normal circumstances, where telomerase is absent, if a cell divides recursively, at some point the progeny reach their Hayflick limit, which is believed to be between 50 and 70 cell divisions. At the limit the cells become senescent and cell division stops. Telomerase allows each offspring to replace the lost bit of DNA, allowing the cell line to divide without ever reaching the limit. This same unbounded growth is a feature of cancerous growth. Embryonic stem cells express telomerase, which allows them to divide repeatedly and form the individual. In adults, telomerase is highly expressed only in cells that need to divide regularly, especially in male sperm cells, but also in epidermal cells, in activated T cell and B cell lymphocytes, as well as in certain adult stem cells, but in the great majority of cases somatic cells do not express telomerase. A comparative biology study of mammalian telomeres indicated that telomere length of some mammalian species correlates inversely, rather than directly, with lifespan, and concluded that the contribution of telomere length to lifespan is unresolved. Telomere shortening does not occur with age in some postmitotic tissues, such as in the rat brain. In humans, skeletal muscle telomere lengths remain stable from ages 23 –74. In baboon skeletal muscle, which consists of fully differentiated postmitotic cells, less than 3% of myonuclei contain damaged telomeres and this percentage does not increase with age. Thus, telomere shortening does not appear to be a major factor in the aging of the differentiated cells of brain or skeletal muscle. In human liver, cholangiocytes and hepatocytes show no age-related telomere shortening. Another study found little evidence that, in humans, telomere length is a significant biomarker of normal aging with respect to important cognitive and physical abilities. Some experiments have raised questions on whether telomerase can be used as an anti-aging therapy, namely, the fact that mice with elevated levels of telomerase have higher cancer incidence and hence do not live longer. On the other hand, one study showed that activating telomerase in cancer-resistant mice by overexpressing its catalytic subunit extended lifespan. A study found that long-lived subjects inherited a hyperactive version of telomerase. Premature aging Premature aging syndromes including Werner syndrome, Progeria, Ataxia telangiectasia, Ataxia-telangiectasia like disorder, Bloom syndrome, Fanconi anemia and Nijmegen breakage syndrome are associated with short telomeres. 
However, the genes that have mutated in these diseases all have roles in the repair of DNA damage and the increased DNA damage may, itself, be a factor in the premature aging (see DNA damage theory of aging). An additional role in maintaining telomere length is an active area of investigation. Cancer In vitro, when cells approach the Hayflick limit, the time to senescence can be extended by inactivating the tumor suppressor proteins p53 and Retinoblastoma protein (pRb). Cells that have been so-altered eventually undergo an event termed a "crisis" when the majority of the cells in the culture die. Sometimes, a cell does not stop dividing once it reaches a crisis. In a typical situation, the telomeres are shortened and chromosomal integrity declines with every subsequent cell division. Exposed chromosome ends are interpreted as double-stranded breaks (DSB) in DNA; such damage is usually repaired by reattaching the broken ends together. When the cell does this due to telomere-shortening, the ends of different chromosomes can be attached to each other. This solves the problem of lacking telomeres, but during cell division anaphase, the fused chromosomes are randomly ripped apart, causing many mutations and chromosomal abnormalities. As this process continues, the cell's genome becomes unstable. Eventually, either fatal damage is done to the cell's chromosomes (killing it via apoptosis), or an additional mutation that activates telomerase occurs. With telomerase activation some types of cells and their offspring become immortal (bypass the Hayflick limit), thus avoiding cell death as long as the conditions for their duplication are met. Many cancer cells are considered 'immortal' because telomerase activity allows them to live much longer than any other somatic cell, which, combined with uncontrollable cell proliferation is why they can form tumors. A good example of immortal cancer cells is HeLa cells, which have been used in laboratories as a model cell line since 1951. While this method of modelling human cancer in cell culture is effective and has been used for many years by scientists, it is also very imprecise. The exact changes that allow for the formation of the tumorigenic clones in the above-described experiment are not clear. Scientists addressed this question by the serial introduction of multiple mutations present in a variety of human cancers. This has led to the identification of mutation combinations that form tumorigenic cells in a variety of cell types. While the combination varies by cell type, the following alterations are required in all cases: TERT activation, loss of p53 pathway function, loss of pRb pathway function, activation of the Ras or myc proto-oncogenes, and aberration of the Protein phosphatase 2 (PP2A). That is to say, the cell has an activated telomerase, eliminating the process of death by chromosome instability or loss, absence of apoptosis-induction pathways, and continued mitosis activation. This model of cancer in cell culture accurately describes the role of telomerase in actual human tumors. Telomerase activation has been observed in ~90% of all human tumors, suggesting that the immortality conferred by telomerase plays a key role in cancer development. Of the tumors without TERT activation, most employ a separate pathway to maintain telomere length termed Alternative Lengthening of Telomeres (ALT). 
The presence of this alternative pathway was first described in an SV40 virus-transformed human cell line, and based on the dynamics of the changes in telomere length, was proposed to result through recombination. However, the exact mechanism remains unclear. Elizabeth Blackburn et al. identified the upregulation of 70 genes known or suspected in cancer growth and spread through the body, and the activation of glycolysis, which enables cancer cells to rapidly use sugar to facilitate their programmed growth rate (roughly the growth rate of a fetus). Approaches to controlling telomerase and telomeres for cancer therapy include gene therapy, immunotherapy, small-molecule and signal pathway inhibitors. Drugs The ability to maintain functional telomeres may be one mechanism that allows cancer cells to grow in vitro for decades. Telomerase activity is necessary to preserve many cancer types and is inactive in somatic cells, creating the possibility that telomerase inhibition could selectively repress cancer cell growth with minimal side effects. If a drug can inhibit telomerase in cancer cells, the telomeres of successive generations will progressively shorten, limiting tumor growth. Telomerase is a good biomarker for cancer detection because most human cancer cells express high levels of it. Telomerase activity can be identified by its catalytic protein domain (hTERT), which is the rate-limiting step in telomerase activity. It is associated with many cancer types. Various cancer cells and fibroblasts transformed with hTERT cDNA have high telomerase activity, while somatic cells do not. Cells testing positive for hTERT have positive nuclear signals. Epithelial stem cell tissue and its early daughter cells are the only noncancerous cells in which hTERT can be detected. Since hTERT expression is dependent only on the number of tumor cells within a sample, the amount of hTERT indicates the severity of cancer. The expression of hTERT can also be used to distinguish benign tumors from malignant tumors. Malignant tumors have higher hTERT expression than benign tumors. Real-time reverse transcription polymerase chain reaction (RT-PCR) quantifying hTERT expression in various tumor samples verified this varying expression. The lack of telomerase does not affect cell growth until the telomeres are short enough to cause cells to "die or undergo growth arrest". However, inhibiting telomerase alone is not enough to destroy large tumors. It must be combined with surgery, radiation, chemotherapy or immunotherapy. Cells may reduce their telomere length by only 50-252 base pairs per cell division, which can lead to a long lag phase. A telomerase activator TA-65 is commercially available and is claimed to delay aging and to provide relief from certain disease conditions. This formulation contains a molecule called cycloastragenol derived from a legume Astragalus membranaceus. Several other compounds have been found to increase telomerase activity: Centella asiatica extract 8.8-fold, oleanolic acid 5.9-fold, astragalus extract 4.3-fold, TA-65 2.2-fold, and maslinic acid 2-fold. Immunotherapy Immunotherapy successfully treats some kinds of cancer, such as melanoma. This treatment involves manipulating a human's immune system to destroy cancerous cells. Humans have two major antigen identifying lymphocytes: CD8+ cytotoxic T-lymphocytes (CTL) and CD4+ helper T-lymphocytes that can destroy cells.
Antigen receptors on CTL can bind to a 9-10 amino acid chain that is presented by the major histocompatibility complex (MHC) as in Figure 4. HTERT is a potential target antigen. Immunotargeting should result in relatively few side effects since hTERT expression is associated only with telomerase and is not essential in almost all somatic cells. GV1001 uses this pathway. Experimental drug and vaccine therapies targeting active telomerase have been tested in mouse models, and clinical trials have begun. One drug, imetelstat, is being clinically researched as a means of interfering with telomerase in cancer cells. Most of the harmful cancer-related effects of telomerase are dependent on an intact RNA template. Cancer stem cells that use an alternative method of telomere maintenance are still killed when telomerase's RNA template is blocked or damaged. Telomerase Vaccines Two telomerase vaccines have been developed: GRNVAC1 and GV1001. GRNVAC1 isolates dendritic cells and the RNA that codes for the telomerase protein and puts them back into the patient to make cytotoxic T cells that kill the telomerase-active cells. GV1001 is a peptide from the active site of hTERT and is recognized by the immune system that reacts by killing the telomerase-active cells. Targeted apoptosis Another independent approach is to use oligoadenylated anti-telomerase antisense oligonucleotides and ribozymes to target telomerase RNA, leading to the dissociation of the RNA and to apoptosis (Figure 5). The fast induction of apoptosis through antisense binding may be a good alternative to the slower telomere shortening. Small interfering RNA (siRNA) siRNAs are small RNA molecules that induce the sequence-specific degradation of other RNAs. siRNA treatment can function similar to traditional gene therapy by destroying the mRNA products of particular genes, and therefore preventing the expression of those genes. A 2012 study found that targeting TERC with an siRNA reduced telomerase activity by more than 50% and resulted in decreased viability of immortal cancer cells. Treatment with both the siRNA and radiation caused a greater reduction in tumor size in mice than treatment with radiation alone, suggesting that targeting telomerase could be a way to increase the efficacy of radiation in treating radiation-resistant tumors. Heart disease, diabetes and quality of life Blackburn also discovered that mothers caring for very sick children have shorter telomeres when they report that their emotional stress is at a maximum and that telomerase was active at the site of blockages in coronary artery tissue, possibly accelerating heart attacks. In 2009, it was shown that the amount of telomerase activity significantly increased following psychological stress. Across the sample of patients telomerase activity in peripheral blood mononuclear cells increased by 18% one hour after the end of the stress. A study in 2010 found that there was "significantly greater" telomerase activity in participants than controls after a three-month meditation retreat. Telomerase deficiency has been linked to diabetes mellitus and impaired insulin secretion in mice, due to loss of pancreatic insulin-producing cells. Rare human diseases Mutations in TERT have been implicated in predisposing patients to aplastic anemia, a disorder in which the bone marrow fails to produce blood cells, in 2005. Cri du chat syndrome (CdCS) is a complex disorder involving the loss of the distal portion of the short arm of chromosome 5. 
TERT is located in the deleted region, and loss of one copy of TERT has been suggested as a cause or contributing factor of this disease. Dyskeratosis congenita (DC) is a disease of the bone marrow that can be caused by some mutations in the telomerase subunits. In the DC cases, about 35% cases are X-linked-recessive on the DKC1 locus and 5% cases are autosomal dominant on the TERT and TERC loci. Patients with DC have severe bone marrow failure manifesting as abnormal skin pigmentation, leucoplakia (a white thickening of the oral mucosa) and nail dystrophy, as well as a variety of other symptoms. Individuals with either TERC or DKC1 mutations have shorter telomeres and defective telomerase activity in vitro versus other individuals of the same age. In one family autosomal dominant DC was linked to a heterozygous TERT mutation. These patients also exhibited an increased rate of telomere-shortening, and genetic anticipation (i.e., the DC phenotype worsened with each generation). TERT Splice Variants See also DNA repair Imetelstat TA-65 Telomere Epitalon References Further reading The Immortal Cell, by Michael D. West, Doubleday (2003) External links Gene Ontology: Human telomerase reverse transcriptase (TERT) gene on genecards.org The Telomerase Database - A Web-based tool for telomerase research Three-dimensional model of telomerase at MUN Elizabeth Blackburn's Seminars: Telomeres and Telomerase Ribonucleoproteins Aging-related enzymes Anti-aging substances DNA replication EC 2.7.7 Senescence Telomere-related proteins
Telomerase
[ "Chemistry", "Biology" ]
4,179
[ "Genetics techniques", "Aging-related enzymes", "Anti-aging substances", "Senescence", "DNA replication", "Molecular genetics", "Cellular processes", "Metabolism" ]
274,675
https://en.wikipedia.org/wiki/Maillard%20reaction
The Maillard reaction is a chemical reaction between amino acids and reducing sugars to create melanoidins, the compounds that give browned food its distinctive flavor. Seared steaks, fried dumplings, cookies and other kinds of biscuits, breads, toasted marshmallows, falafel and many other foods undergo this reaction. It is named after French chemist Louis Camille Maillard, who first described it in 1912 while attempting to reproduce biological protein synthesis. The reaction is a form of non-enzymatic browning which typically proceeds rapidly from around 140 to 165 °C. Many recipes call for an oven temperature high enough to ensure that a Maillard reaction occurs. At higher temperatures, caramelization (the browning of sugars, a distinct process) and subsequently pyrolysis (final breakdown leading to burning and the development of acrid flavors) become more pronounced. The reactive carbonyl group of the sugar reacts with the nucleophilic amino group of the amino acid and forms a complex mixture of poorly characterized molecules responsible for a range of aromas and flavors. This process is accelerated in an alkaline environment (e.g., lye applied to darken pretzels; see lye roll), as the amino groups are deprotonated, and hence have an increased nucleophilicity. This reaction is the basis for many of the flavoring industry's recipes. At high temperatures, a probable carcinogen called acrylamide can form. This can be discouraged by heating at a lower temperature, adding asparaginase, or injecting carbon dioxide. In the cooking process, Maillard reactions can produce hundreds of different flavor compounds depending on the chemical constituents in the food, the temperature, the cooking time, and the presence of air. These compounds, in turn, often break down to form yet more flavor compounds. Flavor scientists have used the Maillard reaction over the years to make artificial flavors, the majority of patents being related to the production of meat-like flavors. History In 1912, Louis Camille Maillard published a paper describing the reaction between amino acids and sugars at elevated temperatures. In 1953, chemist John E. Hodge with the U.S. Department of Agriculture established a mechanism for the Maillard reaction. Foods and products The Maillard reaction is responsible for many colors and flavors in foods, such as the browning of various meats when seared or grilled, the browning and umami taste in fried onions and coffee roasting. It contributes to the darkened crust of baked goods, the golden-brown color of French fries and other crisps, browning of malted barley as found in malt whiskey and beer, and the color and taste of dried and condensed milk, dulce de leche, toffee, black garlic, chocolate, toasted marshmallows, and roasted peanuts. 6-Acetyl-2,3,4,5-tetrahydropyridine is responsible for the biscuit or cracker-like flavor present in baked goods such as bread, popcorn, and tortilla products. The structurally related compound 2-acetyl-1-pyrroline has a similar smell and also occurs naturally without heating. The compound gives varieties of cooked rice and the herb pandan (Pandanus amaryllifolius) their typical smells. Both compounds have odor thresholds below 0.06 nanograms per liter. The browning reactions that occur when meat is roasted or seared are complex and occur mostly by Maillard browning with contributions from other chemical reactions, including the breakdown of the tetrapyrrole rings of the muscle protein myoglobin.
Maillard reactions also occur in dried fruit and when champagne ages in the bottle. Caramelization is an entirely different process from Maillard browning, though the results of the two processes are sometimes similar to the naked eye (and taste buds). Caramelization may sometimes cause browning in the same foods in which the Maillard reaction occurs, but the two processes are distinct. They are both promoted by heating, but the Maillard reaction involves amino acids, whereas caramelization is the pyrolysis of certain sugars. In making silage, excess heat causes the Maillard reaction to occur, which reduces the amount of energy and protein available to the animals that feed on it. Archaeology In archaeology, the Maillard process occurs when bodies are preserved in peat bogs. The acidic peat environment causes a tanning or browning of skin tones and can turn hair to a red or ginger tone. The chemical mechanism is the same as in the browning of food, but it develops slowly over time due to the acidic action on the bog body. It is typically seen on Iron Age bodies and was described by Painter in 1991 as the interaction of anaerobic, acidic, and cold (typically 4 °C) sphagnum acid on the polysaccharides. The Maillard reaction also contributes to the preservation of paleofeces. Chemical mechanism The carbonyl group of the sugar reacts with the amino group of the amino acid, producing N-substituted glycosylamine and water. The unstable glycosylamine undergoes Amadori rearrangement, forming ketosamines. Several ways are known for the ketosamines to react further: produce two water molecules and reductones; form diacetyl, pyruvaldehyde, and other short-chain hydrolytic fission products; or produce brown nitrogenous polymers and melanoidins. The open-chain Amadori products undergo further dehydration and deamination to produce dicarbonyls. This is a crucial intermediate. Dicarbonyls react with amines to produce Strecker aldehydes through Strecker degradation. Acrylamide, a possible human carcinogen, can be generated as a byproduct of the Maillard reaction between reducing sugars and amino acids, especially asparagine, both of which are present in most food products. See also Akabori amino-acid reaction Advanced glycation end-product Baking Caramelization References Further reading Van Soest, Peter J. (1982). Nutritional Ecology of the Ruminant (2nd ed.). Ithaca, NY: Cornell University Press. External links Food chemistry Name reactions
Maillard reaction
[ "Chemistry", "Biology" ]
1,294
[ "Name reactions", "Biochemistry", "Food chemistry", "nan" ]
274,726
https://en.wikipedia.org/wiki/Maeslantkering
The Maeslantkering ("Maeslant barrier" in Dutch) is a storm surge barrier on the Nieuwe Waterweg, in South Holland, Netherlands. It was constructed from 1991 to 1997. As part of the Delta Works the barrier responds to water level predictions calculated by a centralized computer system called BOS. It automatically closes when Rotterdam (especially the Port of Rotterdam) is threatened by floods. Maeslantkering has two 210-metre long barrier gates, with two 237-metre long steel trusses holding each. When closed, the barrier will protect the entire width (360 metres) of the Nieuwe Waterweg, the main waterway of Port of Rotterdam. It is one of the largest moving structures on Earth, rivalling the Green Bank Telescope in the United States and the Bagger 288 excavator in Germany. The Maeslant Barrier The initial plan The construction of the Maeslantkering was a part of the Europoortkering project which, in turn, was the final stage of the Delta Works. The main objective of this Europoortkering-project was to improve the safety against flooding of the Rotterdam harbour, of which the Europoort is an important part, and the surrounding towns and agricultural areas. To achieve this, the initial plan was to reinforce existing dikes as far as 50 kilometres inland. During the 1980s, it became clear that this project would take at least 30 years and would cost a huge amount of money. It would also mean that historic town centres, in some cases over four centuries old, would have to be broken down and rebuilt behind renewed, larger dikes. Therefore, the initial plan was put aside and the Ministry of Waterways and Public Works organised a competition in which construction companies could make plans for the construction of a reliable yet relatively cheap storm surge barrier. The storm surge barrier This storm surge barrier had to be located in the waterway (Nieuwe Maas – the Scheur – Nieuwe Waterweg) that connects Rotterdam with the North Sea. This played an important role in the planning stage of the construction, as this waterway is the main route to the port of Rotterdam, at that time the world's largest port. Therefore, a barrier like the Dutch Oosterscheldekering and the Thames Barrier could not be constructed, as such a barrier would block the shipping route. The winning plan called for two large floating gates on both dikes of the waterway. A major advantage of this plan was that construction of the storm surge barrier could take place under dry conditions, in dry docks. Other advantages were that no vital parts of the barrier had to be placed under water, and maintenance of the barrier would be easy because of the dry docks. Finally, there would be almost no inconvenience for passing ships. The winning plan was put forward by the BMK consortium (Bouwcombinatie Maeslantkering). This consortium included the contractors HBG (now BAM), Volker Stevin and Hollandia Kloos. The storm surge barrier project was one of the first large Design and Construct projects for which the contractor also prepares the design. Construction of the barrier The construction of the barrier started in 1991. First, the dry docks were constructed on both shores and a sill was constructed at the bottom of the Nieuwe Waterweg. Then, the two 22-metre high and 210-metre long steel gates were built. After this, 237-metre long steel trusses were welded to the gates. The arms weigh 6,800 tonnes each. 
The main purpose of the arms is transmitting the immense forces, exerted on the gates while closed, to one single joint at the rear of each gate. During the closing or opening process, this ball-shaped joint gives the gate the opportunity to move freely under the influences of water, wind, and waves. It acts like a ball and socket joint, such as in the human shoulder or hip. The joints were made in the Czech Republic at Škoda Works. The ball-shaped joint is the largest in the world, with a diameter of 10 metres, and weighing 680 tonnes. The construction of the barrier cost 450 million euro. The total Europoortkering-project had cost 660 million euros. A working 1:250 scale version of the barrier was constructed in the Madurodam miniature village. Its construction took six months. It took six years to construct the real barrier. Maeslantkering in operation On 10 May 1997, after six years of construction, Queen Beatrix opened the Maeslantkering. The barrier is connected to a computer system which is linked to weather and sea level data. Under normal weather conditions, the two doors themselves are well protected in their dry docks and a 360-metre wide gap in the waterway gives ships enough space to pass without any inconvenience. But when a storm surge of 3 metres above normal sea level is anticipated in Rotterdam, the barrier will be closed automatically. Four hours before the actual closing procedure begins, incoming and outgoing ships are warned. Two hours before closing, the traffic at the Nieuwe Waterweg comes to a standstill. Thirty minutes before closing, the dry docks that contain the gates are flooded. After this, the gates start to float and two so-called "locomobiles" move the gates towards each other. When the gap between the gates is about 1.5 metres wide, water is let inside the hollows of the gates, so that they submerge to the bottom of the waterway. The bottom has been elaborately dug and then laid with layers of broken stone, so that the gates are able to form a relatively watertight fit when submerged. In cases where the gates have to be shut for a prolonged period, which would cause the waters of the Rhine to rise behind them, the gate hollows are partly emptied and floated, so that excess river water runs out to sea, before they are submerged again. The decision-making algorithm that sequences storm surge-triggered events in the Maeslantkering is run entirely by computer. The Maeslantkering is expected to be closed once every ten years due to storm surge. With the rise in sea levels, the storm surge barrier will need to close more frequently in 50 years time, namely once every five years. In its first 10 years of operation, the barrier was never closed due to a storm. There was one incident when a storm surge of 3 metres was predicted and the protective sequence was initiated. However, during the course of the storm, predictions were revised to a 2.99 m surge and the computer cancelled closure. Eventually, the surge passed harmlessly and the waterway remained open. During the evening of 8 November 2007, the barrier was closed due to a storm surge for the first time. The barrier is closed for testing once a year, usually at the end of September or the beginning of October, just before the beginning of the storm season in mid-October. Activities are held during the closing for the visiting public. The information center publishes information about the closing time and activities on its website. 
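The closing sequence described above (ships warned four hours out, traffic stopped two hours out, dock flooding thirty minutes out, closure triggered by a predicted surge of 3 metres at Rotterdam) can be sketched as follows. This is not the actual barrier software — the article notes that is a large C++ system — and every name below is invented for illustration; only the threshold and the step timings come from the text.

```python
# Minimal sketch of the closure sequence described above. This is NOT the
# real BOS decision software; thresholds and step timings are taken from the
# text, everything else is invented for illustration.

CLOSE_THRESHOLD_M = 3.0   # predicted surge at Rotterdam that triggers closure

def closure_schedule(predicted_surge_m: float, closure_time_h: float):
    """Return the (time, action) steps leading up to a closure, or None."""
    if predicted_surge_m < CLOSE_THRESHOLD_M:
        return None   # e.g. a revised 2.99 m forecast cancels the procedure
    return [
        (closure_time_h - 4.0, "warn incoming and outgoing ships"),
        (closure_time_h - 2.0, "halt traffic on the Nieuwe Waterweg"),
        (closure_time_h - 0.5, "flood the dry docks so the gates float"),
        (closure_time_h,       "swing the gates together and submerge them"),
    ]

if __name__ == "__main__":
    for step in closure_schedule(3.1, closure_time_h=23.0) or []:
        print(step)
```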
The Hartel Canal, located just south of the Nieuwe Waterweg and the Scheur, is protected by the much smaller Hartelkering storm surge barrier. This barrier is constructed some 5 kilometres further inland. The software that drives it is written in C++ and consists of 200,000 lines of code for the operational system and 250,000 lines of code for the simulation systems. The barrier is designed to withstand a storm that has an occurrence of only once in 10,000 years (based on the climate at the time, but this may have to be adjusted for climate change). 8 November 2007 storm In order to test the barrier in actual stormy conditions, the water level threshold at which the computer system would start the closing procedure was lowered from 3.0 m over NAP to 2.6 m, for the duration of the 2007 storm season. On 8 November 2007, a storm from the northwest hit the Dutch coast. A storm surge, high enough to start the barrier's closing procedure, occurred. The barrier was closed due to a storm surge for the first time since its construction. As the Oosterscheldekering and Hartelkering storm surge barriers were also closed, the entire Dutch coast was protected against flooding for the first time since 1976. At 22:00 local time (CET), Dutch TV brought the news that maritime traffic on the Nieuwe Waterweg was shut off. The closing procedure of the Maeslantkering started at 23:10. The barrier was completely closed at 01:00 and was reopened on 9 November around 17:00. 21 December 2023 storm On December 21, 2023 the Maeslantkering was closed automatically at the 3.0 m threshold for the first time in history. During the day a forecast had been made that the water level in Rotterdam would reach the critical height of 3.0 m above NAP at around 23:30. There were discussions to close the Maeslantkering manually before that time, and the Port of Rotterdam started reducing traffic at 16:00. Around 18:00 the traffic across the Nieuwe Waterweg came to a complete standstill. Before the decision was actually made to manually close the Maeslantkering, the computer systems intervened and had begun the automatic closing procedure. Closing started at 20:15, a process which takes about 2 hours. The next day at 4:45 the gates were opened again. Het Keringhuis, Publiekscentrum Water At the site of the Maeslantkering, there is a visitor center where more background information on water management and the technical details of the barrier itself can be found. In popular culture The barrier was featured on the American television program Extreme Engineering (season 1, episode 8: Holland's Barriers to the Sea). The barrier served as a major plot point for the children's book Mission Hurricane of the 39 Clues series. The barrier was also featured on the History Channel television program Modern Marvels during Levees Education (2006). The barrier was featured in the 2021 Neal Stephenson novel Termination Shock. See also Navigation Pass S-1 of Saint Petersburg Dam References External links keringhuis.nl - Comprehensive home page of the Maeslant Barrier, with a lot of interesting flash movies and visiting hours of the storm surge barrier. deltaWorks.org - DeltaWorks Online reports on Maeslantbarrier. Includes video, virtual tour and flash animations Storm Surge Warning Service - homepage of the Dutch storm surge warning service Maeslantkering YouTube Buildings and structures in Rotterdam Buildings and structures in South Holland Delta Works Flood barriers Rhine–Meuse–Scheldt delta
Maeslantkering
[ "Physics" ]
2,205
[ "Physical systems", "Hydraulics", "Delta Works" ]
274,810
https://en.wikipedia.org/wiki/Scientific%20instrument
A scientific instrument is a device or tool used for scientific purposes, including the study of both natural phenomena and theoretical research. History Historically, the definition of a scientific instrument has varied, based on usage, laws, and historical time period. Before the mid-nineteenth century such tools were referred to as "natural philosophical" or "philosophical" apparatus and instruments, and older tools from antiquity to the Middle Ages (such as the astrolabe and pendulum clock) defy a more modern definition of "a tool developed to investigate nature qualitatively or quantitatively." Scientific instruments were made by instrument makers living near a center of learning or research, such as a university or research laboratory. Instrument makers designed, constructed, and refined instruments for purposes, but if demand was sufficient, an instrument would go into production as a commercial product. In a description of the use of the eudiometer by Jan Ingenhousz to show photosynthesis, a biographer observed, "The history of the use and evolution of this instrument helps to show that science is not just a theoretical endeavor but equally an activity grounded on an instrumental basis, which is a cocktail of instruments and techniques wrapped in a social setting within a community of practitioners. The eudiometer has been shown to be one of the elements in this mix that kept a whole community of researchers together, even while they were at odds about the significance and the proper use of the thing." By World War II, the demand for improved analyses of wartime products such as medicines, fuels, and weaponized agents pushed instrumentation to new heights. Today, changes to instruments used in scientific endeavors — particularly analytical instruments — are occurring rapidly, with interconnections to computers and data management systems becoming increasingly necessary. Scope Scientific instruments vary greatly in size, shape, purpose, complication and complexity. They include relatively simple laboratory equipment like scales, rulers, chronometers, thermometers, etc. Other simple tools developed in the late 20th century or early 21st century are the Foldscope (an optical microscope), the SCALE(KAS Periodic Table), the MasSpec Pen (a pen that detects cancer), the glucose meter, etc. However, some scientific instruments can be quite large in size and significant in complexity, like particle colliders or radio-telescope antennas. Conversely, microscale and nanoscale technologies are advancing to the point where instrument sizes are shifting towards the tiny, including nanoscale surgical instruments, biological nanobots, and bioelectronics. The digital era Instruments are increasingly based upon integration with computers to improve and simplify control; enhance and extend instrumental functions, conditions, and parameter adjustments; and streamline data sampling, collection, resolution, analysis (both during and post-process), and storage and retrieval. Advanced instruments can be connected as a local area network (LAN) directly or via middleware and can be further integrated as part of an information management application such as a laboratory information management system (LIMS). Instrument connectivity can be furthered even more using internet of things (IoT) technologies, allowing for example laboratories separated by great distances to connect their instruments to a network that can be monitored from a workstation or mobile device elsewhere. 
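The instrument-to-LIMS connectivity sketched in the paragraph above might, in the simplest case, look something like the following; the endpoint URL and record fields are entirely hypothetical, since any real laboratory information management system defines its own interface.

```python
# Generic sketch of instrument-to-LIMS connectivity as described above.
# The endpoint URL and the record fields are hypothetical placeholders.

import json
import urllib.request
from datetime import datetime, timezone

def post_reading(instrument_id: str, quantity: str, value: float, unit: str,
                 lims_url: str = "http://lims.example.org/api/readings"):
    """Send one instrument reading to a (hypothetical) LIMS endpoint."""
    record = {
        "instrument": instrument_id,
        "quantity": quantity,
        "value": value,
        "unit": unit,
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }
    req = urllib.request.Request(
        lims_url,
        data=json.dumps(record).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:   # needs a reachable endpoint
        return resp.status

# Example (requires a live LIMS endpoint):
# post_reading("balance-01", "mass", 12.3041, "g")
```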
Examples of scientific instruments List of scientific instruments manufacturers List of scientific instruments designers Jones, William Kipp, Petrus Jacobus Le Bon, Gustave Roelofs, Arjen Schöner, Johannes Von Reichenbach, Georg Friedrich History of scientific instruments Museums Collection of Historical Scientific Instruments (CHSI) Boerhaave Museum Chemical Heritage Foundation Deutsches Museum Royal Victoria Gallery for the Encouragement of Practical Science Whipple Museum of the History of Science Historiography Paul Bunge Prize Types of scientific instruments Optical instrument Electronic test equipment See also Instrumentation Instrumentalism, a philosophic theory List of collectibles -tron, a suffix to denote a complex scientific instrument, like in cyclotron, phytotron, synchrotron, ... References Science-related lists
Scientific instrument
[ "Technology", "Engineering" ]
811
[ "Scientific instruments", "Measuring instruments" ]
274,816
https://en.wikipedia.org/wiki/Distributed%20control%20system
A distributed control system (DCS) is a computerized control system for a process or plant usually with many control loops, in which autonomous controllers are distributed throughout the system, but there is no central operator supervisory control. This is in contrast to systems that use centralized controllers; either discrete controllers located at a central control room or within a central computer. The DCS concept increases reliability and reduces installation costs by localizing control functions near the process plant, with remote monitoring and supervision. Distributed control systems first emerged in large, high value, safety critical process industries, and were attractive because the DCS manufacturer would supply both the local control level and central supervisory equipment as an integrated package, thus reducing design integration risk. Today the functionality of Supervisory control and data acquisition (SCADA) and DCS systems are very similar, but DCS tends to be used on large continuous process plants where high reliability and security is important, and the control room is not geographically remote. Many machine control systems exhibit similar properties as plant and process control systems do. Structure The key attribute of a DCS is its reliability due to the distribution of the control processing around nodes in the system. This mitigates a single processor failure. If a processor fails, it will only affect one section of the plant process, as opposed to a failure of a central computer which would affect the whole process. This distribution of computing power local to the field Input/Output (I/O) connection racks also ensures fast controller processing times by removing possible network and central processing delays. The accompanying diagram is a general model which shows functional manufacturing levels using computerised control. Referring to the diagram; Level 0 contains the field devices such as flow and temperature sensors, and final control elements, such as control valves Level 1 contains the industrialised Input/Output (I/O) modules, and their associated distributed electronic processors. Level 2 contains the supervisory computers, which collect information from processor nodes on the system, and provide the operator control screens. Level 3 is the production control level, which does not directly control the process, but is concerned with monitoring production and monitoring targets Level 4 is the production scheduling level. Levels 1 and 2 are the functional levels of a traditional DCS, in which all equipment are part of an integrated system from a single manufacturer. Levels 3 and 4 are not strictly process control in the traditional sense, but where production control and scheduling takes place. Technical points The processor nodes and operator graphical displays are connected over proprietary or industry standard networks, and network reliability is increased by dual redundancy cabling over diverse routes. This distributed topology also reduces the amount of field cabling by siting the I/O modules and their associated processors close to the process plant. The processors receive information from input modules, process the information and decide control actions to be signalled by the output modules. The field inputs and outputs can be analog signals e.g. 4–20 mA DC current loop or two-state signals that switch either "on" or "off", such as relay contacts or a semiconductor switch. 
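As an illustration of the analog signalling just mentioned, a 4–20 mA loop current is conventionally scaled linearly onto an engineering range. The flow range in this sketch is invented; only the 4–20 mA convention comes from the text.

```python
# How a 4-20 mA analog input is typically scaled to an engineering value.
# The example range (0-250, e.g. m3/h) is invented for illustration.

def scale_4_20ma(current_ma: float, lo: float, hi: float) -> float:
    """Map a 4-20 mA signal onto [lo, hi]; 4 mA -> lo, 20 mA -> hi."""
    if not 4.0 <= current_ma <= 20.0:
        raise ValueError("signal out of range - possible broken loop or fault")
    return lo + (current_ma - 4.0) / 16.0 * (hi - lo)

print(scale_4_20ma(12.0, 0.0, 250.0))   # mid-scale signal -> 125.0
```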
DCSs are connected to sensors and actuators and use setpoint control to control the flow of material through the plant. A typical application is a PID controller fed by a flow meter and using a control valve as the final control element. The DCS sends the setpoint required by the process to the controller which instructs a valve to operate so that the process reaches and stays at the desired setpoint. (see 4–20 mA schematic for example). Large oil refineries and chemical plants have several thousand I/O points and employ very large DCS. Processes are not limited to fluidic flow through pipes, however, and can also include things like paper machines and their associated quality controls, variable speed drives and motor control centers, cement kilns, mining operations, ore processing facilities, and many others. DCSs in very high reliability applications can have dual redundant processors with "hot" switch over on fault, to enhance the reliability of the control system. Although 4–20 mA has been the main field signalling standard, modern DCS systems can also support fieldbus digital protocols, such as Foundation Fieldbus, profibus, HART, modbus, PC Link, etc. Modern DCSs also support neural networks and fuzzy logic applications. Recent research focuses on the synthesis of optimal distributed controllers, which optimizes a certain H-infinity or the H 2 control criterion. Typical applications Distributed control systems (DCS) are dedicated systems used in manufacturing processes that are continuous or batch-oriented. Processes where a DCS might be used include: Chemical plants Petrochemical (oil) and refineries Pulp and paper mills (see also: quality control system QCS) Boiler controls and power plant systems Nuclear power plants Environmental control systems Water management systems Water treatment plants Sewage treatment plants Food and food processing Agrochemical and fertilizer Metal and mines Automobile manufacturing Metallurgical process plants Pharmaceutical manufacturing Sugar refining plants Agriculture applications History Evolution of process control operations Process control of large industrial plants has evolved through many stages. Initially, control would be from panels local to the process plant. However this required a large amount of human oversight to attend to these dispersed panels, and there was no overall view of the process. The next logical development was the transmission of all plant measurements to a permanently-staffed central control room. Effectively this was the centralisation of all the localised panels, with the advantages of lower manning levels and easier overview of the process. Often the controllers were behind the control room panels, and all automatic and manual control outputs were transmitted back to plant. However, whilst providing a central control focus, this arrangement was inflexible as each control loop had its own controller hardware, and continual operator movement within the control room was required to view different parts of the process. With the coming of electronic processors and graphic displays it became possible to replace these discrete controllers with computer-based algorithms, hosted on a network of input/output racks with their own control processors. These could be distributed around plant, and communicate with the graphic display in the control room or rooms. The distributed control system was born. 
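Returning to the typical application described under the technical points above — a PID controller fed by a flow meter and driving a control valve — a textbook discrete PID loop can be sketched as below. The gains, sample time and valve limits are invented, and this is not any vendor's function-block implementation.

```python
# A textbook discrete PID loop of the kind described above (flow meter in,
# control valve out). All numeric values are invented for illustration.

class PID:
    def __init__(self, kp: float, ki: float, kd: float, dt: float):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint: float, measurement: float) -> float:
        """One control cycle: return a valve position in percent open."""
        error = setpoint - measurement
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        out = self.kp * error + self.ki * self.integral + self.kd * derivative
        return max(0.0, min(100.0, out))   # clamp to 0-100 % valve opening

# The supervisory level sends a setpoint; the controller computes a valve
# position from the latest flow measurement.
loop = PID(kp=2.0, ki=0.5, kd=0.1, dt=1.0)
valve_percent = loop.update(setpoint=80.0, measurement=72.5)
print(valve_percent)
```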
The introduction of DCSs allowed easy interconnection and re-configuration of plant controls such as cascaded loops and interlocks, and easy interfacing with other production computer systems. It enabled sophisticated alarm handling, introduced automatic event logging, removed the need for physical records such as chart recorders, allowed the control racks to be networked and thereby located locally to plant to reduce cabling runs, and provided high level overviews of plant status and production levels. Origins Early minicomputers were used in the control of industrial processes since the beginning of the 1960s. The IBM 1800, for example, was an early computer that had input/output hardware to gather process signals in a plant for conversion from field contact levels (for digital points) and analog signals to the digital domain. The first industrial control computer system was built 1959 at the Texaco Port Arthur, Texas, refinery with an RW-300 of the Ramo-Wooldridge Company. In 1975, both Yamatake-Honeywell and Japanese electrical engineering firm Yokogawa introduced their own independently produced DCS's - TDC 2000 and CENTUM systems, respectively. US-based Bristol also introduced their UCS 3000 universal controller in 1975. In 1978 Valmet introduced their own DCS system called Damatic (latest generation named Valmet DNA). In 1980, Bailey (now part of ABB) introduced the NETWORK 90 system, Fisher Controls (now part of Emerson Electric) introduced the PROVoX system, Fischer & Porter Company (now also part of ABB) introduced DCI-4000 (DCI stands for Distributed Control Instrumentation). The DCS largely came about due to the increased availability of microcomputers and the proliferation of microprocessors in the world of process control. Computers had already been applied to process automation for some time in the form of both direct digital control (DDC) and setpoint control. In the early 1970s Taylor Instrument Company, (now part of ABB) developed the 1010 system, Foxboro the FOX1 system, Fisher Controls the DC2 system and Bailey Controls the 1055 systems. All of these were DDC applications implemented within minicomputers (DEC PDP-11, Varian Data Machines, MODCOMP etc.) and connected to proprietary Input/Output hardware. Sophisticated (for the time) continuous as well as batch control was implemented in this way. A more conservative approach was setpoint control, where process computers supervised clusters of analog process controllers. A workstation provided visibility into the process using text and crude character graphics. Availability of a fully functional graphical user interface was a way away. Development Central to the DCS model was the inclusion of control function blocks. Function blocks evolved from early, more primitive DDC concepts of "Table Driven" software. One of the first embodiments of object-oriented software, function blocks were self-contained "blocks" of code that emulated analog hardware control components and performed tasks that were essential to process control, such as execution of PID algorithms. Function blocks continue to endure as the predominant method of control for DCS suppliers, and are supported by key technologies such as Foundation Fieldbus today. Midac Systems, of Sydney, Australia, developed an objected-oriented distributed direct digital control system in 1982. The central system ran 11 microprocessors sharing tasks and common memory and connected to a serial communication network of distributed controllers each running two Z80s. 
The system was installed at the University of Melbourne. Digital communication between distributed controllers, workstations and other computing elements (peer to peer access) was one of the primary advantages of the DCS. Attention was duly focused on the networks, which provided the all-important lines of communication that, for process applications, had to incorporate specific functions such as determinism and redundancy. As a result, many suppliers embraced the IEEE 802.4 networking standard. This decision set the stage for the wave of migrations necessary when information technology moved into process automation and IEEE 802.3 rather than IEEE 802.4 prevailed as the control LAN. The network-centric era of the 1980s In the 1980s, users began to look at DCSs as more than just basic process control. A very early example of a Direct Digital Control DCS was completed by the Australian business Midac in 1981–82 using R-Tec Australian designed hardware. The system installed at the University of Melbourne used a serial communications network, connecting campus buildings back to a control room "front end". Each remote unit ran two Z80 microprocessors, while the front end ran eleven Z80s in a parallel processing configuration with paged common memory to share tasks and that could run up to 20,000 concurrent control objects. It was believed that if openness could be achieved and greater amounts of data could be shared throughout the enterprise that even greater things could be achieved. The first attempts to increase the openness of DCSs resulted in the adoption of the predominant operating system of the day: UNIX. UNIX and its companion networking technology TCP-IP were developed by the US Department of Defense for openness, which was precisely the issue the process industries were looking to resolve. As a result, suppliers also began to adopt Ethernet-based networks with their own proprietary protocol layers. The full TCP/IP standard was not implemented, but the use of Ethernet made it possible to implement the first instances of object management and global data access technology. The 1980s also witnessed the first PLCs integrated into the DCS infrastructure. Plant-wide historians also emerged to capitalize on the extended reach of automation systems. The first DCS supplier to adopt UNIX and Ethernet networking technologies was Foxboro, who introduced the I/A Series system in 1987. The application-centric era of the 1990s The drive toward openness in the 1980s gained momentum through the 1990s with the increased adoption of commercial off-the-shelf (COTS) components and IT standards. Probably the biggest transition undertaken during this time was the move from the UNIX operating system to the Windows environment. While the realm of the real time operating system (RTOS) for control applications remains dominated by real time commercial variants of UNIX or proprietary operating systems, everything above real-time control has made the transition to Windows. The introduction of Microsoft at the desktop and server layers resulted in the development of technologies such as OLE for process control (OPC), which is now a de facto industry connectivity standard. Internet technology also began to make its mark in automation and the world, with most DCS HMI supporting Internet connectivity. 
The 1990s were also known for the "Fieldbus Wars", where rival organizations competed to define what would become the IEC fieldbus standard for digital communication with field instrumentation instead of 4–20 milliamp analog communications. The first fieldbus installations occurred in the 1990s. Towards the end of the decade, the technology began to develop significant momentum, with the market consolidated around Ethernet I/P, Foundation Fieldbus and Profibus PA for process automation applications. Some suppliers built new systems from the ground up to maximize functionality with fieldbus, such as Rockwell PlantPAx System, Honeywell with Experion & Plantscape SCADA systems, ABB with System 800xA, Emerson Process Management with the Emerson Process Management DeltaV control system, Siemens with the SPPA-T3000 or Simatic PCS 7, Forbes Marshall with the Microcon+ control system and with the Harmonas-DEO system. Fieldbus technics have been used to integrate machine, drives, quality and condition monitoring applications to one DCS with Valmet DNA system. The impact of COTS, however, was most pronounced at the hardware layer. For years, the primary business of DCS suppliers had been the supply of large amounts of hardware, particularly I/O and controllers. The initial proliferation of DCSs required the installation of prodigious amounts of this hardware, most of it manufactured from the bottom up by DCS suppliers. Standard computer components from manufacturers such as Intel and Motorola, however, made it cost prohibitive for DCS suppliers to continue making their own components, workstations, and networking hardware. As the suppliers made the transition to COTS components, they also discovered that the hardware market was shrinking fast. COTS not only resulted in lower manufacturing costs for the supplier, but also steadily decreasing prices for the end users, who were also becoming increasingly vocal over what they perceived to be unduly high hardware costs. Some suppliers that were previously stronger in the PLC business, such as Rockwell Automation and Siemens, were able to leverage their expertise in manufacturing control hardware to enter the DCS marketplace with cost effective offerings, while the stability/scalability/reliability and functionality of these emerging systems are still improving. The traditional DCS suppliers introduced new generation DCS System based on the latest Communication and IEC Standards, which resulting in a trend of combining the traditional concepts/functionalities for PLC and DCS into a one for all solution—named "Process Automation System" (PAS). The gaps among the various systems remain at the areas such as: the database integrity, pre-engineering functionality, system maturity, communication transparency and reliability. While it is expected the cost ratio is relatively the same (the more powerful the systems are, the more expensive they will be), the reality of the automation business is often operating strategically case by case. The current next evolution step is called Collaborative Process Automation Systems. To compound the issue, suppliers were also realizing that the hardware market was becoming saturated. The life cycle of hardware components such as I/O and wiring is also typically in the range of 15 to over 20 years, making for a challenging replacement market. 
Many of the older systems that were installed in the 1970s and 1980s are still in use today, and there is a considerable installed base of systems in the market that are approaching the end of their useful life. Developed industrial economies in North America, Europe, and Japan already had many thousands of DCSs installed, and with few if any new plants being built, the market for new hardware was shifting rapidly to smaller, albeit faster growing regions such as China, Latin America, and Eastern Europe. Because of the shrinking hardware business, suppliers began to make the challenging transition from a hardware-based business model to one based on software and value-added services. It is a transition that is still being made today. The applications portfolio offered by suppliers expanded considerably in the '90s to include areas such as production management, model-based control, real-time optimization, plant asset management (PAM), Real-time performance management (RPM) tools, alarm management, and many others. To obtain the true value from these applications, however, often requires a considerable service content, which the suppliers also provide. Modern systems (2010 onwards) The latest developments in DCS include the following new technologies: Wireless systems and protocols Remote transmission, logging and data historian Mobile interfaces and controls Embedded web-servers Increasingly, and ironically, DCS are becoming centralised at plant level, with the ability to log into the remote equipment. This enables operators to control both at the enterprise level (macro) and at the equipment level (micro), both within and outside the plant, because the importance of the physical location drops due to interconnectivity, primarily thanks to wireless and remote access. The more wireless protocols are developed and refined, the more they are included in DCS. DCS controllers are now often equipped with embedded servers and provide on-the-go web access. Whether DCS will lead the Industrial Internet of Things (IIoT) or borrow key elements from it remains to be seen. Many vendors provide the option of a mobile HMI, ready for both Android and iOS. With these interfaces, the threat of security breaches and possible damage to plant and process is now very real. See also Annunciator panel Building automation EPICS Industrial control system Plant process and emergency shutdown systems Safety instrumented system (SIS) TANGO References Control engineering Applications of distributed computing Industrial automation
Distributed control system
[ "Engineering" ]
3,683
[ "Control engineering", "Industrial automation", "Automation", "Industrial engineering" ]
275,015
https://en.wikipedia.org/wiki/Building%20%28mathematics%29
In mathematics, a building (also Tits building, named after Jacques Tits) is a combinatorial and geometric structure which simultaneously generalizes certain aspects of flag manifolds, finite projective planes, and Riemannian symmetric spaces. Buildings were initially introduced by Jacques Tits as a means to understand the structure of isotropic reductive linear algebraic groups over arbitrary fields. The more specialized theory of Bruhat–Tits buildings (named also after François Bruhat) plays a role in the study of p-adic Lie groups analogous to that of the theory of symmetric spaces in the theory of Lie groups. Overview The notion of a building was invented by Jacques Tits as a means of describing simple algebraic groups over an arbitrary field. Tits demonstrated how to every such group G one can associate a simplicial complex with an action of G, called the spherical building of G. The group G imposes very strong combinatorial regularity conditions on the complexes that can arise in this fashion. By treating these conditions as axioms for a class of simplicial complexes, Tits arrived at his first definition of a building. A part of the data defining a building is a Coxeter group W, which determines a highly symmetrical simplicial complex called the Coxeter complex. A building is glued together from multiple copies of this Coxeter complex, called its apartments, in a certain regular fashion. When W is a finite Coxeter group, the Coxeter complex is a topological sphere, and the corresponding buildings are said to be of spherical type. When W is an affine Weyl group, the Coxeter complex is a subdivision of the affine plane and one speaks of affine, or Euclidean, buildings. An affine building of type Ã1 is the same as an infinite tree without terminal vertices. Although the theory of semisimple algebraic groups provided the initial motivation for the notion of a building, not all buildings arise from a group. In particular, projective planes and generalized quadrangles form two classes of graphs studied in incidence geometry which satisfy the axioms of a building, but may not be connected with any group. This phenomenon turns out to be related to the low rank of the corresponding Coxeter system (namely, two). Tits proved a remarkable theorem: all spherical buildings of rank at least three are connected with a group; moreover, if a building of rank at least two is connected with a group then the group is essentially determined by the building. Iwahori–Matsumoto, Borel–Tits and Bruhat–Tits demonstrated that in analogy with Tits' construction of spherical buildings, affine buildings can also be constructed from certain groups, namely, reductive algebraic groups over a local non-Archimedean field. Furthermore, if the split rank of the group is at least three, it is essentially determined by its building. Tits later reworked the foundational aspects of the theory of buildings using the notion of a chamber system, encoding the building solely in terms of adjacency properties of simplices of maximal dimension; this leads to simplifications in both spherical and affine cases. He proved that, in analogy with the spherical case, every building of affine type and rank at least four arises from a group.
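For reference, the Coxeter group W mentioned above can be written down in standard notation (introduced here, not taken from the article): it is generated by involutions subject only to relations on products of pairs of generators, and its Coxeter complex is the standard geometric realization of this presentation.

```latex
% Standard presentation of a Coxeter group (notation not from the article):
W \;=\; \bigl\langle\, s_1,\dots,s_n \;\bigm|\; (s_i s_j)^{m_{ij}} = 1 \,\bigr\rangle,
\qquad m_{ii} = 1,\quad m_{ij} = m_{ji} \in \{2,3,\dots\}\cup\{\infty\}\ (i \neq j),
% where m_{ij} = \infty means that no relation is imposed on the product s_i s_j.
```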
Definition An n-dimensional building X is an abstract simplicial complex which is a union of subcomplexes A called apartments such that every k-simplex of X is within at least three n-simplices if k < n; any (n − 1)-simplex in an apartment A lies in exactly two adjacent n-simplices of X and the graph of adjacent n-simplices is connected; any two simplices in X lie in some common apartment A; if two simplices both lie in apartments A and A′, then there is a simplicial isomorphism of A onto A′ fixing the vertices of the two simplices. An n-simplex in A is called a chamber (originally chambre, i.e. room in French). The rank of the building is defined to be n + 1. Elementary properties Every apartment A in a building is a Coxeter complex. In fact, for every two n-simplices intersecting in an (n − 1)-simplex or panel, there is a unique period two simplicial automorphism of A, called a reflection, carrying one n-simplex onto the other and fixing their common points. These reflections generate a Coxeter group W, called the Weyl group of A, and the simplicial complex A corresponds to the standard geometric realization of W. Standard generators of the Coxeter group are given by the reflections in the walls of a fixed chamber in A. Since the apartment A is determined up to isomorphism by the building, the same is true of any two simplices in X lying in some common apartment A. When W is finite, the building is said to be spherical. When it is an affine Weyl group, the building is said to be affine or Euclidean. The chamber system is the adjacency graph formed by the chambers; each pair of adjacent chambers can in addition be labelled by one of the standard generators of the Coxeter group. Every building has a canonical length metric inherited from the geometric realisation obtained by identifying the vertices with an orthonormal basis of a Hilbert space. For affine buildings, this metric satisfies the comparison inequality of Alexandrov, known in this setting as the Bruhat–Tits non-positive curvature condition for geodesic triangles: the distance from a vertex to the midpoint of the opposite side is no greater than the distance in the corresponding Euclidean triangle with the same side-lengths. Connection with (B, N) pairs If a group G acts simplicially on a building X, transitively on pairs of chambers and apartments containing them, then the stabilisers of such a pair define a (B, N) pair or Tits system. In fact the pair of subgroups B and N satisfies the axioms of a (B, N) pair and the Weyl group can be identified with N / N ∩ B. Conversely the building can be recovered from the (B, N) pair, so that every (B, N) pair canonically defines a building. In fact, using the terminology of (B, N) pairs and calling any conjugate of B a Borel subgroup and any group containing a Borel subgroup a parabolic subgroup, the vertices of the building correspond to maximal parabolic subgroups; k + 1 vertices form a k-simplex whenever the intersection of the corresponding maximal parabolic subgroups is also parabolic; apartments are conjugates under G of the simplicial subcomplex with vertices given by conjugates under N of maximal parabolics containing B. The same building can often be described by different (B, N) pairs. Moreover, not every building comes from a (B, N) pair: this corresponds to the failure of classification results in low rank and dimension (see below). The Solomon-Tits theorem is a result which states the homotopy type of a building of a group of Lie type is the same as that of a bouquet of spheres.
Spherical and affine buildings for The simplicial structure of the affine and spherical buildings associated to , as well as their interconnections, are easy to explain directly using only concepts from elementary algebra and geometry (see ). In this case there are three different buildings, two spherical and one affine. Each is a union of apartments, themselves simplicial complexes. For the affine building, an apartment is a simplicial complex tessellating Euclidean space by -dimensional simplices; while for a spherical building it is the finite simplicial complex formed by all simplices with a given common vertex in the analogous tessellation in . Each building is a simplicial complex which has to satisfy the following axioms: is a union of apartments. Any two simplices in are contained in a common apartment. If a simplex is contained in two apartments, there is a simplicial isomorphism of one onto the other fixing all common points. Spherical building Let be a field and let be the simplicial complex with vertices the non-trivial vector subspaces of . Two subspaces and are connected if one of them is a subset of the other. The -simplices of are formed by sets of mutually connected subspaces. Maximal connectivity is obtained by taking proper non-trivial subspaces and the corresponding -simplex corresponds to a complete flag Lower dimensional simplices correspond to partial flags with fewer intermediary subspaces . To define the apartments in , it is convenient to define a frame in as a basis () determined up to scalar multiplication of each of its vectors ; in other words a frame is a set of one-dimensional subspaces such that any of them generate a -dimensional subspace. Now an ordered frame defines a complete flag via Since reorderings of the various also give a frame, it is straightforward to see that the subspaces, obtained as sums of the , form a simplicial complex of the type required for an apartment of a spherical building. The axioms for a building can easily be verified using the classical Schreier refinement argument used to prove the uniqueness of the Jordan–Hölder decomposition. Affine building Let be a field lying between and its -adic completion with respect to the usual non-Archimedean -adic norm on for some prime . Let be the subring of defined by When , is the localization of at and, when , , the -adic integers, i.e. the closure of in . The vertices of the building are the -lattices in , i.e. -submodules of the form where is a basis of over . Two lattices are said to be equivalent if one is a scalar multiple of the other by an element of the multiplicative group of (in fact only integer powers of need be used). Two lattices and are said to be adjacent if some lattice equivalent to lies between and its sublattice : this relation is symmetric. The -simplices of are equivalence classes of mutually adjacent lattices, The -simplices correspond, after relabelling, to chains where each successive quotient has order . Apartments are defined by fixing a basis of and taking all lattices with basis where lies in and is uniquely determined up to addition of the same integer to each entry. By definition each apartment has the required form and their union is the whole of . The second axiom follows by a variant of the Schreier refinement argument. The last axiom follows by a simple counting argument based on the orders of finite Abelian groups of the form A standard compactness argument shows that is in fact independent of the choice of . 
In particular taking , it follows that is countable. On the other hand, taking , the definition shows that admits a natural simplicial action on the building. The building comes equipped with a labelling of its vertices with values in . Indeed, fixing a reference lattice , the label of is given by for sufficiently large. The vertices of any -simplex in has distinct labels, running through the whole of . Any simplicial automorphism of defines a permutation of such that . In particular for in , . Thus preserves labels if lies in . Automorphisms Tits proved that any label-preserving automorphism of the affine building arises from an element of . Since automorphisms of the building permute the labels, there is a natural homomorphism . The action of gives rise to an -cycle . Other automorphisms of the building arise from outer automorphisms of associated with automorphisms of the Dynkin diagram. Taking the standard symmetric bilinear form with orthonormal basis , the map sending a lattice to its dual lattice gives an automorphism whose square is the identity, giving the permutation that sends each label to its negative modulo . The image of the above homomorphism is generated by and and is isomorphic to the dihedral group of order ; when , it gives the whole of . If is a finite Galois extension of and the building is constructed from instead of , the Galois group will also act by automorphisms on the building. Geometric relations Spherical buildings arise in two quite different ways in connection with the affine building for : The link of each vertex in the affine building corresponds to submodules of under the finite field . This is just the spherical building for . The building can be compactified by adding the spherical building for as boundary "at infinity" (see or ). Bruhat–Tits trees with complex multiplication When is an archimedean local field then on the building for the group an additional structure can be imposed of a building with complex multiplication. These were first introduced by Martin L. Brown (). These buildings arise when a quadratic extension of acts on the vector space . These building with complex multiplication can be extended to any global field. They describe the action of the Hecke operators on Heegner points on the classical modular curve as well as on the Drinfeld modular curve . These buildings with complex multiplication are completely classified for the case of in Classification Tits proved that all irreducible spherical buildings (i.e. with finite Weyl group) of rank greater than 2 are associated to simple algebraic or classical groups. A similar result holds for irreducible affine buildings of dimension greater than 2 (their buildings "at infinity" are spherical of rank greater than two). In lower rank or dimension, there is no such classification. Indeed, each incidence structure gives a spherical building of rank 2 (see ); and Ballmann and Brin proved that every 2-dimensional simplicial complex in which the links of vertices are isomorphic to the flag complex of a finite projective plane has the structure of a building, not necessarily classical. Many 2-dimensional affine buildings have been constructed using hyperbolic reflection groups or other more exotic constructions connected with orbifolds. Tits also proved that every time a building is described by a pair in a group, then in almost all cases the automorphisms of the building correspond to automorphisms of the group (see ). 
Applications The theory of buildings has important applications in several rather disparate fields. Besides the already mentioned connections with the structure of reductive algebraic groups over general and local fields, buildings are used to study their representations. The results of Tits on determination of a group by its building have deep connections with rigidity theorems of George Mostow and Grigory Margulis, and with Margulis arithmeticity. Special types of buildings are studied in discrete mathematics, and the idea of a geometric approach to characterizing simple groups proved very fruitful in the classification of finite simple groups. The theory of buildings of type more general than spherical or affine is still relatively undeveloped, but these generalized buildings have already found applications to construction of Kac–Moody groups in algebra, and to nonpositively curved manifolds and hyperbolic groups in topology and geometric group theory. See also Buekenhout geometry Coxeter group pair Affine Hecke algebra Bruhat decomposition Generalized polygon Mostow rigidity Coxeter complex Weyl distance function References External links Rousseau: Euclidean Buildings Group theory Algebraic combinatorics Geometric group theory Mathematical structures
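Returning to the lattice description of the affine building given above, the conditions on lattices can be stated compactly. The following LaTeX fragment is only a sketch in standard notation; the symbols K, R, V = K^n, p and Λ stand in for notation that was lost from the text above and should be read as assumptions.

% Equivalence and adjacency of R-lattices in V = K^n (sketch; notation assumed as above)
\[
\Lambda \sim \Lambda' \iff \Lambda' = c\,\Lambda \ \text{for some } c \in K^{\times}
\quad(\text{equivalently } \Lambda' = p^{k}\Lambda,\ k \in \mathbb{Z}),
\]
\[
[\Lambda_1] \text{ and } [\Lambda_2] \text{ adjacent} \iff
p\Lambda_1 \subseteq \Lambda_2' \subseteq \Lambda_1
\ \text{for some } \Lambda_2' \sim \Lambda_2 .
\]
% A chamber corresponds to a chain of lattice classes with successive quotients of order p:
\[
p\Lambda_1 \subsetneq \Lambda_n \subsetneq \cdots \subsetneq \Lambda_2 \subsetneq \Lambda_1 .
\]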
Building (mathematics)
[ "Physics", "Mathematics" ]
3,090
[ "Mathematical structures", "Geometric group theory", "Group actions", "Mathematical objects", "Combinatorics", "Group theory", "Fields of abstract algebra", "Algebraic combinatorics", "Symmetry" ]
275,216
https://en.wikipedia.org/wiki/Hemodynamics
Hemodynamics or haemodynamics are the dynamics of blood flow. The circulatory system is controlled by homeostatic mechanisms of autoregulation, just as hydraulic circuits are controlled by control systems. The hemodynamic response continuously monitors and adjusts to conditions in the body and its environment. Hemodynamics explains the physical laws that govern the flow of blood in the blood vessels. Blood flow ensures the transportation of nutrients, hormones, metabolic waste products, oxygen, and carbon dioxide throughout the body to maintain cell-level metabolism, the regulation of the pH, osmotic pressure and temperature of the whole body, and the protection from microbial and mechanical harm. Blood is a non-Newtonian fluid, and is most efficiently studied using rheology rather than hydrodynamics. Because blood vessels are not rigid tubes, classic hydrodynamics and fluid mechanics based on the use of classical viscometers are not capable of explaining haemodynamics. The study of the blood flow is called hemodynamics, and the study of the properties of the blood flow is called hemorheology. Blood Blood is a complex liquid. Blood is composed of plasma and formed elements. The plasma contains 91.5% water, 7% proteins and 1.5% other solutes. The formed elements are platelets, white blood cells, and red blood cells. The presence of these formed elements and their interaction with plasma molecules are the main reasons why blood differs so much from ideal Newtonian fluids. Viscosity of plasma Normal blood plasma behaves like a Newtonian fluid at physiological rates of shear. A typical value for the viscosity of normal human plasma at 37 °C is 1.4 mN·s/m2. The viscosity of normal plasma varies with temperature in the same way as does that of its solvent water; a 3 °C change in temperature in the physiological range (36.5 °C to 39.5 °C) reduces plasma viscosity by about 10%. Osmotic pressure of plasma The osmotic pressure of a solution is determined by the number of particles present and by the temperature. For example, a 1 molar solution of a substance contains about 6.02 × 10^23 molecules per liter of that substance and at 0 °C it has an osmotic pressure of about 22.4 atmospheres. The osmotic pressure of the plasma affects the mechanics of the circulation in several ways. An alteration of the osmotic pressure difference across the membrane of a blood cell causes a shift of water and a change of cell volume. The changes in shape and flexibility affect the mechanical properties of whole blood. A change in plasma osmotic pressure alters the hematocrit, that is, the volume concentration of red cells in the whole blood, by redistributing water between the intravascular and extravascular spaces. This in turn affects the mechanics of the whole blood. Red blood cells The red blood cell is highly flexible and biconcave in shape. Its membrane has a Young's modulus in the region of 10^6 Pa. Deformation in red blood cells is induced by shear stress. When a suspension is sheared, the red blood cells deform and spin because of the velocity gradient, with the rate of deformation and spin depending on the shear rate and the concentration. This can influence the mechanics of the circulation and may complicate the measurement of blood viscosity. In the steady-state flow of a viscous fluid past a rigid spherical particle immersed in the fluid, where inertia is assumed to be negligible, the downward gravitational force on the particle is balanced by the viscous drag force.
From this force balance the speed of fall can be shown to be given by Stokes' law

Us = (2/9) (ρp − ρf) g a^2 / μ

where a is the particle radius, ρp and ρf are respectively the particle and fluid densities, μ is the fluid viscosity, and g is the gravitational acceleration. From the above equation we can see that the sedimentation velocity of the particle depends on the square of the radius. If the particle is released from rest in the fluid, its sedimentation velocity Us increases until it attains the steady value called the terminal velocity (U), as shown above. Hemodilution Hemodilution is the dilution of the concentration of red blood cells and plasma constituents by partially substituting the blood with colloids or crystalloids. It is a strategy to avoid exposure of patients to the potential hazards of homologous blood transfusions. Hemodilution can be normovolemic, which implies the dilution of normal blood constituents by the use of expanders. During acute normovolemic hemodilution (ANH), blood subsequently lost during surgery contains proportionally fewer red blood cells per milliliter, thus minimizing intraoperative loss of whole blood. Therefore, blood lost by the patient during surgery is not actually lost by the patient, for this volume is purified and redirected into the patient. On the other hand, hypervolemic hemodilution (HVH) uses acute preoperative volume expansion without any blood removal. In choosing a fluid, however, it must be assured that when mixed, the remaining blood behaves in the microcirculation as in the original blood fluid, retaining all its properties of viscosity. In presenting what volume of ANH should be applied, one study suggests a mathematical model of ANH which calculates the maximum possible RCM savings using ANH, given the patient's weight, Hi and Hm. To maintain normovolemia, the withdrawal of autologous blood must be simultaneously replaced by a suitable hemodilute. Ideally, this is achieved by isovolemic exchange transfusion of a plasma substitute with a colloid osmotic pressure (OP). A colloid is a fluid containing particles that are large enough to exert an oncotic pressure across the micro-vascular membrane. When debating the use of colloid or crystalloid, it is imperative to think about all the components of the Starling equation:

Jv = Kf [(Pc − Pi) − σ(πc − πi)]

where Jv is the net fluid movement, Pc and Pi are the capillary and interstitial hydrostatic pressures, πc and πi are the corresponding oncotic pressures, Kf is the filtration coefficient and σ is the reflection coefficient. To identify the minimum safe hematocrit desirable for a given patient the following equation is useful: where EBV is the estimated blood volume; 70 mL/kg was used in this model and Hi (initial hematocrit) is the patient's initial hematocrit. From the equation above it is clear that the volume of blood removed during the ANH to the Hm is the same as the BLs. How much blood is to be removed is usually based on the weight, not the volume. The number of units that need to be removed to hemodilute to the maximum safe hematocrit (ANH) can be found by This is based on the assumption that each unit removed by hemodilution has a volume of 450 mL (the actual volume of a unit will vary somewhat since completion of collection is dependent on weight and not volume). The model assumes that the hemodilute value is equal to the Hm prior to surgery; therefore, the re-transfusion of blood obtained by hemodilution must begin when SBL begins.
The RCM available for retransfusion after ANH (RCMm) can be calculated from the patient's Hi and the final hematocrit after hemodilution (Hm) The maximum SBL that is possible when ANH is used without falling below Hm (BLH) is found by assuming that all the blood removed during ANH is returned to the patient at a rate sufficient to maintain the hematocrit at the minimum safe level If ANH is used as long as SBL does not exceed BLH there will not be any need for blood transfusion. We can conclude from the foregoing that H should therefore not exceed s. The difference between the BLH and the BLs therefore is the incremental surgical blood loss (BLi) possible when using ANH. When expressed in terms of the RCM Where RCMi is the red cell mass that would have to be administered using homologous blood to maintain the Hm if ANH is not used and blood loss equals BLH. The model used assumes ANH used for a 70 kg patient with an estimated blood volume of 70 ml/kg (4900 ml). A range of Hi and Hm was evaluated to understand conditions where hemodilution is necessary to benefit the patient. Result The results of the model calculations are presented in a table given in the appendix for a range of Hi from 0.30 to 0.50 with ANH performed to minimum hematocrits from 0.30 to 0.15. Given a Hi of 0.40, if the Hm is assumed to be 0.25, then from the equation above the RCM count is still high and ANH is not necessary if BLs does not exceed 2303 ml, since the hematocrit will not fall below Hm, although five units of blood must be removed during hemodilution. Under these conditions, to achieve the maximum benefit from the technique, if ANH is used, no homologous blood will be required to maintain the Hm if blood loss does not exceed 2940 ml. In such a case, ANH can save a maximum of 1.1 packed red blood cell unit equivalents; if blood loss exceeds 2940 ml, homologous blood transfusion is necessary to maintain Hm, even if ANH is used. This model can be used to identify when ANH may be used for a given patient and the degree of ANH necessary to maximize that benefit. For example, if Hi is 0.30 or less it is not possible to save a red cell mass equivalent to two units of homologous PRBC even if the patient is hemodiluted to an Hm of 0.15. That is because, from the RCM equation given above, the patient's RCM falls short. If Hi is 0.40 one must remove at least 7.5 units of blood during ANH, resulting in an Hm of 0.20, to save the equivalent of two units. Clearly, the greater the Hi and the greater the number of units removed during hemodilution, the more effective ANH is for preventing homologous blood transfusion. The model here is designed to allow doctors to determine where ANH may be beneficial for a patient based on their knowledge of the Hi, the potential for SBL, and an estimate of the Hm. Though the model used a 70 kg patient, the result can be applied to any patient. To apply these results to any body weight, any of the values BLs, BLH and ANHu or PRBC given in the table need to be multiplied by the factor we will call T Basically, the model considered above is designed to predict the maximum RCM that ANH can save. In summary, the efficacy of ANH has been described mathematically by means of measurements of surgical blood loss and blood volume flow measurement. This form of analysis permits accurate estimation of the potential efficiency of the techniques and shows the application of measurement in the medical field.
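The governing equations of this model were lost from the text above, but the figures quoted (2303 ml, 2940 ml and roughly five withdrawn units for Hi = 0.40 and Hm = 0.25) are reproduced by the standard isovolemic-dilution relations. The following Python sketch uses those assumed relations (they are not necessarily the study's exact equations) together with the stated 70 mL/kg blood volume and 450 mL unit volume:

import math

def anh_model(weight_kg=70.0, Hi=0.40, Hm=0.25, unit_volume_ml=450.0):
    """Sketch of the ANH model described above (assumed formulas, not the source's own)."""
    EBV = 70.0 * weight_kg                      # estimated blood volume, 70 mL/kg
    # Maximum surgical blood loss without ANH before transfusion is needed
    # (standard exponential isovolemic-dilution relation, assumed).
    BLs = EBV * math.log(Hi / Hm)
    # Red cell mass withdrawn during ANH and available for retransfusion.
    RCM_saved = EBV * (Hi - Hm)
    # Maximum surgical blood loss with ANH: the red cells above Hm are
    # eventually lost at a concentration of Hm (assumed relation).
    BLH = RCM_saved / Hm
    ANH_units = BLs / unit_volume_ml            # units withdrawn during hemodilution
    BLi = BLH - BLs                             # incremental loss permitted by ANH
    RCMi = BLi * Hm                             # red cell mass saved by ANH (assumed)
    return dict(EBV=EBV, BLs=BLs, BLH=BLH, ANH_units=ANH_units, BLi=BLi, RCMi=RCMi)

print(anh_model())   # BLs ~ 2303 mL, BLH = 2940 mL, about 5 units withdrawn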
Blood flow Cardiac output The heart is the driver of the circulatory system, pumping blood through rhythmic contraction and relaxation. The rate of blood flow out of the heart (often expressed in L/min) is known as the cardiac output (CO). Blood being pumped out of the heart first enters the aorta, the largest artery of the body. It then proceeds to divide into smaller and smaller arteries, then into arterioles, and eventually capillaries, where oxygen transfer occurs. The capillaries connect to venules, and the blood then travels back through the network of veins to the venae cavae into the right heart. The micro-circulation — the arterioles, capillaries, and venules — constitutes most of the area of the vascular system and is the site of the transfer of O2, glucose, and enzyme substrates into the cells. The venous system returns the de-oxygenated blood to the right heart where it is pumped into the lungs to become oxygenated and CO2 and other gaseous wastes exchanged and expelled during breathing. Blood then returns to the left side of the heart where it begins the process again. In a normal circulatory system, the volume of blood returning to the heart each minute is approximately equal to the volume that is pumped out each minute (the cardiac output). Because of this, the velocity of blood flow across each level of the circulatory system is primarily determined by the total cross-sectional area of that level. Cardiac output can be determined by two methods. One is to use the Fick equation, CO = VO2 / (Ca − Cv), where VO2 is the oxygen consumption and Ca and Cv are the arterial and mixed venous oxygen contents. The other, the thermodilution method, is to sense the temperature change of a liquid injected in the proximal port of a Swan-Ganz catheter and measured at the distal port. Cardiac output is mathematically expressed by the following equation:

CO = SV × HR

where CO = cardiac output (L/min), SV = stroke volume (ml), HR = heart rate (bpm). The normal human cardiac output is 5-6 L/min at rest. Not all blood that enters the left ventricle exits the heart. The volume remaining in the ventricle at the end of diastole (EDV) minus the stroke volume makes up the end systolic volume (ESV). Anatomical features The circulatory system of species subjected to orthostatic blood pressure (such as arboreal snakes) has evolved physiological and morphological features to overcome the circulatory disturbance. For instance, in arboreal snakes the heart is closer to the head, in comparison with aquatic snakes. This facilitates blood perfusion to the brain. Turbulence Blood flow is also affected by the smoothness of the vessels, resulting in either turbulent (chaotic) or laminar (smooth) flow. Smoothness is reduced by the buildup of fatty deposits on the arterial walls. The Reynolds number (denoted NR or Re) is a relationship that helps determine the behavior of a fluid in a tube, in this case blood in the vessel. The equation for this dimensionless relationship is written as:

Re = ρvL / μ

where ρ: density of the blood; v: mean velocity of the blood; L: characteristic dimension of the vessel, in this case diameter; μ: viscosity of blood. The Reynolds number is directly proportional to the mean velocity and to the diameter of the tube. A Reynolds number of less than 2300 indicates laminar fluid flow, which is characterized by constant flow motion, whereas a value over 4000 indicates turbulent flow. Due to their small radius and low velocity compared to other vessels, the Reynolds number in the capillaries is very low, resulting in laminar instead of turbulent flow.
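A minimal Python sketch of the two relations just given (cardiac output from stroke volume and heart rate, and the Reynolds number); the numerical inputs for the aorta are illustrative assumptions, not values taken from the text:

def cardiac_output_l_per_min(stroke_volume_ml, heart_rate_bpm):
    # CO = SV x HR, converted from mL/min to L/min
    return stroke_volume_ml * heart_rate_bpm / 1000.0

def reynolds_number(density, velocity, diameter, viscosity):
    # Re = rho * v * L / mu  (dimensionless)
    return density * velocity * diameter / viscosity

co = cardiac_output_l_per_min(stroke_volume_ml=70, heart_rate_bpm=75)   # ~5.3 L/min

# Assumed order-of-magnitude values for the aorta:
# rho ~ 1060 kg/m^3, v ~ 0.3 m/s, d ~ 0.025 m, mu ~ 3.5e-3 Pa*s
re_aorta = reynolds_number(1060, 0.3, 0.025, 3.5e-3)   # ~2270, near the laminar limit

regime = "laminar" if re_aorta < 2300 else ("turbulent" if re_aorta > 4000 else "transitional")
print(co, re_aorta, regime)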
Velocity Blood flow velocity is often expressed in cm/s. This value is inversely related to the total cross-sectional area of the blood vessel and also differs per cross-section, because in normal conditions the blood flow has laminar characteristics. For this reason, the blood flow velocity is the fastest in the middle of the vessel and slowest at the vessel wall. In most cases, the mean velocity is used. There are many ways to measure blood flow velocity, like video capillary microscopy with frame-to-frame analysis, or laser Doppler anemometry. Blood velocities in arteries are higher during systole than during diastole. One parameter to quantify this difference is the pulsatility index (PI), which is equal to the difference between the peak systolic velocity and the minimum diastolic velocity divided by the mean velocity during the cardiac cycle. This value decreases with distance from the heart. Blood vessels Vascular resistance Resistance is also related to vessel radius, vessel length, and blood viscosity. In a first approach, based on an ideal fluid, the relationship is indicated by the Hagen–Poiseuille equation. The equation is as follows:

∆P = 8μlQ / (πr^4)

where ∆P: pressure drop/gradient; μ: viscosity; l: length of tube (in the case of vessels with infinitely long lengths, l is replaced with the diameter of the vessel); Q: flow rate of the blood in the vessel; r: radius of the vessel. In a second approach, more realistic of the vascular resistance and coming from experimental observations on blood flows, according to Thurston, there is a plasma release-cell layering at the walls surrounding a plugged flow. It is a fluid layer in which, at a distance δ, the viscosity η is a function of δ written as η(δ), and these surrounding layers do not meet at the vessel centre in real blood flow. Instead, there is the plugged flow which is hyperviscous because it holds a high concentration of RBCs. Thurston assembled this layer to the flow resistance to describe blood flow by means of a viscosity η(δ) and thickness δ from the wall layer. The blood resistance law appears as R adapted to the blood flow profile: where R = resistance to blood flow, c = constant coefficient of flow, L = length of the vessel, η(δ) = viscosity of blood in the wall plasma release-cell layering, r = radius of the blood vessel, δ = distance in the plasma release-cell layer. Blood resistance varies depending on blood viscosity and its plugged flow (or sheath flow since they are complementary across the vessel section) size as well, and on the size of the vessels. Assuming steady, laminar flow in the vessel, the blood vessel's behavior is similar to that of a pipe. For instance, if p1 and p2 are the pressures at the ends of the tube, the pressure drop/gradient is: The larger arteries, including all large enough to see without magnification, are conduits with low vascular resistance (assuming no advanced atherosclerotic changes) with high flow rates that generate only small drops in pressure. The smaller arteries and arterioles have higher resistance, and confer the main blood pressure drop across major arteries to capillaries in the circulatory system. In the arterioles blood pressure is lower than in the major arteries. This is due to bifurcations, which cause a drop in pressure. The more bifurcations, the higher the total cross-sectional area, therefore the pressure across the surface drops. This is why the arterioles have the highest pressure-drop. The pressure drop of the arterioles is the product of flow rate and resistance: ∆P = Q × resistance.
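The fourth-power dependence on radius in the Hagen–Poiseuille relation above is what makes vessel calibre such a powerful lever on resistance. A small Python sketch (the viscosity and geometry values are illustrative assumptions):

import math

def poiseuille_resistance(viscosity, length, radius):
    # R = 8 * mu * l / (pi * r^4), so Delta P = Q * R
    return 8.0 * viscosity * length / (math.pi * radius ** 4)

def pressure_drop(flow_rate, viscosity, length, radius):
    return flow_rate * poiseuille_resistance(viscosity, length, radius)

# Halving the radius multiplies the resistance by 16 (2**4):
mu, l = 3.5e-3, 0.01            # Pa*s, m  (illustrative values)
r1, r2 = 2e-3, 1e-3             # 2 mm vs 1 mm radius
print(poiseuille_resistance(mu, l, r2) / poiseuille_resistance(mu, l, r1))   # 16.0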
The high resistance observed in the arterioles, which factors largely in the ∆P, is a result of their smaller radius of about 30 μm. The smaller the radius of a tube, the larger the resistance to fluid flow. Immediately following the arterioles are the capillaries. Following the logic observed in the arterioles, we expect the blood pressure to be lower in the capillaries compared to the arterioles. Since pressure is a function of force per unit area (P = F/A), the larger the surface area, the lower the pressure when an external force acts on it. Though the radii of the capillaries are very small, the network of capillaries has the largest surface area (485 mm^2) of any part of the human vascular network. The larger the total cross-sectional area, the lower the mean velocity as well as the pressure. Substances called vasoconstrictors can reduce the size of blood vessels, thereby increasing blood pressure. Vasodilators (such as nitroglycerin) increase the size of blood vessels, thereby decreasing arterial pressure. If the blood viscosity increases (gets thicker), the result is an increase in arterial pressure. Certain medical conditions can change the viscosity of the blood. For instance, anemia (low red blood cell concentration) reduces viscosity, whereas increased red blood cell concentration increases viscosity. It had been thought that aspirin and related "blood thinner" drugs decreased the viscosity of blood, but instead studies found that they act by reducing the tendency of the blood to clot. To determine the systemic vascular resistance (SVR) the formula for calculating all resistance is used. This translates for SVR into:

SVR = (MAP − CVP) / CO

where SVR = systemic vascular resistance (mmHg·min/L), MAP = mean arterial pressure (mmHg), CVP = central venous pressure (mmHg), CO = cardiac output (L/min). This value is in Wood units; to express it in dyn·s/cm^5 the answer is multiplied by 80. Normal systemic vascular resistance is between 900 and 1440 dyn·s/cm^5. Wall tension Regardless of site, blood pressure is related to the wall tension of the vessel according to the Young–Laplace equation (assuming that the thickness of the vessel wall is very small as compared to the diameter of the lumen):

σθ = P r / t

where P is the blood pressure, t is the wall thickness, r is the inside radius of the cylinder, and σθ is the cylinder stress or "hoop stress". For the thin-walled assumption to be valid the vessel must have a wall thickness of no more than about one-tenth (often cited as one twentieth) of its radius. The cylinder stress, in turn, is the average force exerted circumferentially (perpendicular both to the axis and to the radius of the object) in the cylinder wall, and can be described as:

σθ = F / (t l)

where F is the force exerted circumferentially on an area of the cylinder wall that has the following two lengths as sides: t, the radial thickness of the cylinder, and l, the axial length of the cylinder.
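The systemic vascular resistance and Young–Laplace relations above can be evaluated directly. The following Python sketch uses illustrative input values (assumed, not taken from the text) and shows the factor-of-80 conversion from Wood units to dyn·s/cm^5:

def svr_wood_units(map_mmHg, cvp_mmHg, co_l_per_min):
    # SVR = (MAP - CVP) / CO, in mmHg*min/L (Wood units)
    return (map_mmHg - cvp_mmHg) / co_l_per_min

def wood_to_dyn(svr_wood):
    # Multiply by 80 to convert Wood units to dyn*s/cm^5
    return svr_wood * 80.0

def hoop_stress(pressure_pa, radius_m, wall_thickness_m):
    # Young-Laplace for a thin-walled cylinder: sigma = P * r / t
    return pressure_pa * radius_m / wall_thickness_m

svr = svr_wood_units(map_mmHg=93, cvp_mmHg=5, co_l_per_min=5.0)    # 17.6 Wood units
print(svr, wood_to_dyn(svr))                                       # ~1408 dyn*s/cm^5

# Aortic wall stress with assumed values: P = 13.3 kPa (100 mmHg),
# r = 12.5 mm, t = 2 mm
print(hoop_stress(13.3e3, 0.0125, 0.002))                           # ~8.3e4 Pa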
Stress When force is applied to a material it starts to deform or move. As the force needed to deform a material (e.g. to make a fluid flow) increases with the size of the surface of the material A, the magnitude of this force F is proportional to the area A of the portion of the surface. Therefore, the quantity (F/A), that is, the force per unit area, is called the stress. The shear stress at the wall that is associated with blood flow through an artery depends on the artery size and geometry and can range between 0.5 and 4 Pa. Under normal conditions, to avoid atherogenesis, thrombosis, smooth muscle proliferation and endothelial apoptosis, shear stress maintains its magnitude and direction within an acceptable range. In some cases, occurring due to blood hammer, shear stress reaches larger values, while the direction of the stress may also be changed by reverse flow, depending on the hemodynamic conditions. This situation can therefore lead to atherosclerosis. Capacitance Veins are described as the "capacitance vessels" of the body because over 70% of the blood volume resides in the venous system. Veins are more compliant than arteries and expand to accommodate changing volume. Blood pressure The blood pressure in the circulation is principally due to the pumping action of the heart. The pumping action of the heart generates pulsatile blood flow, which is conducted into the arteries, across the micro-circulation and eventually, back via the venous system to the heart. During each heartbeat, systemic arterial blood pressure varies between a maximum (systolic) and a minimum (diastolic) pressure. In physiology, these are often simplified into one value, the mean arterial pressure (MAP), which is calculated as follows:

MAP ≈ DP + (1/3)(PP)

where: MAP = Mean Arterial Pressure, DP = Diastolic blood pressure, PP = Pulse pressure, which is systolic pressure minus diastolic pressure. Differences in mean blood pressure are responsible for blood flow from one location to another in the circulation. The rate of mean blood flow depends on both blood pressure and the resistance to flow presented by the blood vessels. Mean blood pressure decreases as the circulating blood moves away from the heart through arteries and capillaries due to viscous losses of energy. Mean blood pressure drops over the whole circulation, although most of the fall occurs along the small arteries and arterioles. Gravity affects blood pressure via hydrostatic forces (e.g., during standing), and valves in veins, breathing, and pumping from contraction of skeletal muscles also influence blood pressure in veins. The relationship between pressure, flow, and resistance is expressed in the following equation:

Flow = Pressure / Resistance

When applied to the circulatory system, we get:

CO = (MAP − RAP) / SVR

where CO = cardiac output (in L/min), MAP = mean arterial pressure (in mmHg), the average pressure of blood as it leaves the heart, RAP = right atrial pressure (in mmHg), the average pressure of blood as it returns to the heart, SVR = systemic vascular resistance (in mmHg * min/L). A simplified form of this equation assumes right atrial pressure is approximately 0:

CO ≈ MAP / SVR

The ideal blood pressure in the brachial artery, where standard blood pressure cuffs measure pressure, is <120/80 mmHg. Other major arteries have similar levels of blood pressure recordings indicating very low disparities among major arteries. In the innominate artery, the average reading is 110/70 mmHg, the right subclavian artery averages 120/80 and the abdominal aorta is 110/70 mmHg. The relatively uniform pressure in the arteries indicates that these blood vessels act as a pressure reservoir for fluids that are transported within them. Pressure drops gradually as blood flows from the major arteries, through the arterioles, the capillaries until blood is pushed up back into the heart via the venules, the veins through the vena cava with the help of the muscles. At any given pressure drop, the flow rate is determined by the resistance to the blood flow. In the arteries, in the absence of disease, there is very little or no resistance to blood flow.
The vessel diameter is the principal determinant of resistance. Compared to other, smaller vessels in the body, the artery has a much bigger diameter (4 mm); therefore the resistance is low. The arm–leg (blood pressure) gradient is the difference between the blood pressure measured in the arms and that measured in the legs. It is normally less than 10 mm Hg, but may be increased in e.g. coarctation of the aorta. Clinical significance Pressure monitoring Hemodynamic monitoring is the observation of hemodynamic parameters over time, such as blood pressure and heart rate. Blood pressure can be monitored either invasively through an inserted blood pressure transducer assembly (providing continuous monitoring), or noninvasively by repeatedly measuring the blood pressure with an inflatable blood pressure cuff. Hypertension is diagnosed by the presence of arterial blood pressures of 140/90 mmHg or greater over two clinical visits. Pulmonary Artery Wedge Pressure can show if there is congestive heart failure, mitral and aortic valve disorders, hypervolemia, shunts, or cardiac tamponade. Remote, indirect monitoring of blood flow by laser Doppler Noninvasive hemodynamic monitoring of eye fundus vessels can be performed by Laser Doppler holography, with near infrared light. The eye offers a unique opportunity for the non-invasive exploration of cardiovascular diseases. Laser Doppler imaging by digital holography can measure blood flow in the retina and choroid, whose Doppler responses exhibit a pulse-shaped profile with time. This technique enables noninvasive functional microangiography by high-contrast measurement of Doppler responses from endoluminal blood flow profiles in vessels in the posterior segment of the eye. Differences in blood pressure drive the flow of blood throughout the circulation. The rate of mean blood flow depends on both blood pressure and the hemodynamic resistance to flow presented by the blood vessels. Glossary
ANH – Acute Normovolemic Hemodilution
ANHu – Number of Units During ANH
BLH – Maximum Blood Loss Possible When ANH Is Used Before Homologous Blood Transfusion Is Needed
BLI – Incremental Blood Loss Possible with ANH (BLH – BLs)
BLs – Maximum blood loss without ANH before homologous blood transfusion is required
EBV – Estimated Blood Volume (70 mL/kg)
Hct – Haematocrit, always expressed here as a fraction
Hi – Initial Haematocrit
Hm – Minimum Safe Haematocrit
PRBC – Packed Red Blood Cell Equivalent Saved by ANH
RCM – Red cell mass
RCMH – Cell Mass Available For Transfusion after ANH
RCMI – Red Cell Mass Saved by ANH
SBL – Surgical Blood Loss
Etymology and pronunciation The word hemodynamics uses combining forms of hemo- (which comes from the ancient Greek haima, meaning blood) and dynamics, thus "the dynamics of blood". The vowel of the hemo- syllable is variously written according to the ae/e variation. Blood hammer Blood pressure Cardiac output Cardiovascular System Dynamics Society Electrical cardiometry Esophageal doppler Hemodynamics of the aorta Impedance cardiography Photoplethysmogram Laser Doppler imaging Windkessel effect Functional near-infrared spectroscopy Notes and references Bibliography Berne RM, Levy MN. Cardiovascular physiology. 7th Ed Mosby 1997 Rowell LB. Human Cardiovascular Control. Oxford University press 1993 Braunwald E (Editor). Heart Disease: A Textbook of Cardiovascular Medicine. 5th Ed. W.B.Saunders 1997 Siderman S, Beyar R, Kleber AG. Cardiac Electrophysiology, Circulation and Transport.
Kluwer Academic Publishers 1991 American Heart Association Otto CM, Stoddard M, Waggoner A, Zoghbi WA. Recommendations for Quantification of Doppler Echocardiography: A Report from the Doppler Quantification Task Force of the Nomenclature and Standards Committee of the American Society of Echocardiography. J Am Soc Echocardiogr 2002;15:167-184 Peterson LH, The Dynamics of Pulsatile Blood Flow, Circ. Res. 1954;2;127-139 Hemodynamic Monitoring, Bigatello LM, George E., Minerva Anestesiol, 2002 Apr;68(4):219-25 Claude Franceschi L'investigation vasculaire par ultrasonographie Doppler Masson 1979 ISBN Nr 2-225-63679-6 Claude Franceschi; Paolo Zamboni Principles of Venous Hemodynamics Nova Science Publishers 2009-01 ISBN Nr 1606924850/9781606924853 Claude Franceschi Venous Insufficiency of the pelvis and lower extremities-Hemodynamic Rationale WR Milnor: Hemodynamics, Williams & Wilkins, 1982 B Bo Sramek: Systemic Hemodynamics and Hemodynamic Management, 4th Edition, ESBN 1-59196-046-0 External links Learn hemodynamics Fluid mechanics Computational fluid dynamics Cardiovascular physiology Exercise physiology Blood Mathematics in medicine Fluid dynamics
Hemodynamics
[ "Physics", "Chemistry", "Mathematics", "Engineering" ]
6,360
[ "Computational fluid dynamics", "Applied mathematics", "Chemical engineering", "Computational physics", "Civil engineering", "Piping", "Mathematics in medicine", "Fluid mechanics", "Fluid dynamics" ]
275,473
https://en.wikipedia.org/wiki/Control%20system
A control system manages, commands, directs, or regulates the behavior of other devices or systems using control loops. It can range from a single home heating controller using a thermostat controlling a domestic boiler to large industrial control systems which are used for controlling processes or machines. The control systems are designed via control engineering process. For continuously modulated control, a feedback controller is used to automatically control a process or operation. The control system compares the value or status of the process variable (PV) being controlled with the desired value or setpoint (SP), and applies the difference as a control signal to bring the process variable output of the plant to the same value as the setpoint. For sequential and combinational logic, software logic, such as in a programmable logic controller, is used. Open-loop and closed-loop control Feedback control systems Logic control Logic control systems for industrial and commercial machinery were historically implemented by interconnected electrical relays and cam timers using ladder logic. Today, most such systems are constructed with microcontrollers or more specialized programmable logic controllers (PLCs). The notation of ladder logic is still in use as a programming method for PLCs. Logic controllers may respond to switches and sensors and can cause the machinery to start and stop various operations through the use of actuators. Logic controllers are used to sequence mechanical operations in many applications. Examples include elevators, washing machines and other systems with interrelated operations. An automatic sequential control system may trigger a series of mechanical actuators in the correct sequence to perform a task. For example, various electric and pneumatic transducers may fold and glue a cardboard box, fill it with the product and then seal it in an automatic packaging machine. PLC software can be written in many different ways – ladder diagrams, SFC (sequential function charts) or statement lists. On–off control On–off control uses a feedback controller that switches abruptly between two states. A simple bi-metallic domestic thermostat can be described as an on-off controller. When the temperature in the room (PV) goes below the user setting (SP), the heater is switched on. Another example is a pressure switch on an air compressor. When the pressure (PV) drops below the setpoint (SP) the compressor is powered. Refrigerators and vacuum pumps contain similar mechanisms. Simple on–off control systems like these can be cheap and effective. Linear control Fuzzy logic Fuzzy logic is an attempt to apply the easy design of logic controllers to the control of complex continuously varying systems. Basically, a measurement in a fuzzy logic system can be partly true. The rules of the system are written in natural language and translated into fuzzy logic. For example, the design for a furnace would start with: "If the temperature is too high, reduce the fuel to the furnace. If the temperature is too low, increase the fuel to the furnace." Measurements from the real world (such as the temperature of a furnace) are fuzzified and logic is calculated arithmetic, as opposed to Boolean logic, and the outputs are de-fuzzified to control equipment. When a robust fuzzy design is reduced to a single, quick calculation, it begins to resemble a conventional feedback loop solution and it might appear that the fuzzy design was unnecessary. 
However, the fuzzy logic paradigm may provide scalability for large control systems where conventional methods become unwieldy or costly to derive. Fuzzy electronics is an electronic technology that uses fuzzy logic instead of the two-value logic more commonly used in digital electronics. Physical implementation The range of control system implementation is from compact controllers often with dedicated software for a particular machine or device, to distributed control systems for industrial process control for a large physical plant. Logic systems and feedback controllers are usually implemented with programmable logic controllers. The Broadly Reconfigurable and Expandable Automation Device (BREAD) is a recent framework that provides many open-source hardware devices which can be connected to create more complex data acquisition and control systems. See also Building automation Coefficient diagram method Control theory Cybernetics Distributed control system Droop speed control Education and training of electrical and electronics engineers EPICS Good regulator Guidance, navigation, and control Hierarchical control system HVAC control system Industrial control system Motion control Networked control system Numerical control Perceptual control theory PID controller Process control Process optimization Programmable logic controller Real-time computing Sampled data system SCADA VisSim References External links SystemControl Create, simulate or HWIL control loops with Python. Includes Kalman filter, LQG control among others. Semiautonomous Flight Direction - Reference unmannedaircraft.org Control System Toolbox for design and analysis of control systems. Control Systems Manufacturer Design and Manufacture of control systems. Mathematica functions for the analysis, design, and simulation of control systems Python Control System (PyConSys) Create and simulate control loops with Python. AI for setting PID parameters. Control theory Control engineering Systems engineering Systems theory &
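The on-off (thermostat) behaviour described earlier in this article can be sketched as a bang-bang controller with a small hysteresis band, which is what keeps the actuator from chattering around the setpoint. A minimal Python sketch; the setpoint and deadband values are illustrative assumptions:

def make_on_off_controller(setpoint, deadband=1.0):
    """Bang-bang controller with hysteresis, as in a bi-metallic thermostat."""
    state = {"on": False}
    def update(process_variable):
        if process_variable < setpoint - deadband / 2:
            state["on"] = True          # PV well below SP: switch the heater on
        elif process_variable > setpoint + deadband / 2:
            state["on"] = False         # PV well above SP: switch the heater off
        return state["on"]              # inside the deadband: keep the previous state
    return update

heater = make_on_off_controller(setpoint=20.0, deadband=1.0)
for temp in (18.0, 19.7, 20.4, 20.6, 19.8):
    print(temp, heater(temp))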
Control system
[ "Mathematics", "Engineering" ]
1,014
[ "Systems engineering", "Applied mathematics", "Control theory", "Automation", "Control engineering", "Dynamical systems" ]
275,602
https://en.wikipedia.org/wiki/Microscope%20image%20processing
Microscope image processing is a broad term that covers the use of digital image processing techniques to process, analyze and present images obtained from a microscope. Such processing is now commonplace in a number of diverse fields such as medicine, biological research, cancer research, drug testing, metallurgy, etc. A number of manufacturers of microscopes now specifically design in features that allow the microscopes to interface to an image processing system. Image acquisition Until the early 1990s, most image acquisition in video microscopy applications was typically done with an analog video camera, often simply closed circuit TV cameras. While this required the use of a frame grabber to digitize the images, video cameras provided images at full video frame rate (25-30 frames per second) allowing live video recording and processing. While the advent of solid state detectors yielded several advantages, the real-time video camera was actually superior in many respects. Today, acquisition is usually done using a CCD camera mounted in the optical path of the microscope. The camera may be full colour or monochrome. Very often, very high resolution cameras are employed to gain as much direct information as possible. Cryogenic cooling is also common, to minimise noise. Often digital cameras used for this application provide pixel intensity data to a resolution of 12-16 bits, much higher than is used in consumer imaging products. Ironically, in recent years, much effort has been put into acquiring data at video rates, or higher (25-30 frames per second or higher). What was once easy with off-the-shelf video cameras now requires special, high speed electronics to handle the vast digital data bandwidth. Higher speed acquisition allows dynamic processes to be observed in real time, or stored for later playback and analysis. Combined with the high image resolution, this approach can generate vast quantities of raw data, which can be a challenge to deal with, even with a modern computer system. While current CCD detectors allow very high image resolution, often this involves a trade-off because, for a given chip size, as the pixel count increases, the pixel size decreases. As the pixels get smaller, their well depth decreases, reducing the number of electrons that can be stored. In turn, this results in a poorer signal-to-noise ratio. For best results, one must select an appropriate sensor for a given application. Because microscope images have an intrinsic limiting resolution, it often makes little sense to use a noisy, high resolution detector for image acquisition. A more modest detector, with larger pixels, can often produce much higher quality images because of reduced noise. This is especially important in low-light applications such as fluorescence microscopy. Moreover, one must also consider the temporal resolution requirements of the application. A lower resolution detector will often have a significantly higher acquisition rate, permitting the observation of faster events. Conversely, if the observed object is motionless, one may wish to acquire images at the highest possible spatial resolution without regard to the time required to acquire a single image. 2D image techniques Image processing for microscopy application begins with fundamental techniques intended to most accurately reproduce the information contained in the microscopic sample. 
This might include adjusting the brightness and contrast of the image, averaging images to reduce image noise and correcting for illumination non-uniformities. Such processing involves only basic arithmetic operations between images (i.e. addition, subtraction, multiplication and division). The vast majority of processing done on microscope image is of this nature. Another class of common 2D operations called image convolution are often used to reduce or enhance image details. Such "blurring" and "sharpening" algorithms in most programs work by altering a pixel's value based on a weighted sum of that and the surrounding pixels (a more detailed description of kernel based convolution deserves an entry for itself) or by altering the frequency domain function of the image using Fourier Transform. Most image processing techniques are performed in the Frequency domain. Other basic two dimensional techniques include operations such as image rotation, warping, color balancing etc. At times, advanced techniques are employed with the goal of "undoing" the distortion of the optical path of the microscope, thus eliminating distortions and blurring caused by the instrumentation. This process is called deconvolution, and a variety of algorithms have been developed, some of great mathematical complexity. The end result is an image far sharper and clearer than could be obtained in the optical domain alone. This is typically a 3-dimensional operation, that analyzes a volumetric image (i.e. images taken at a variety of focal planes through the sample) and uses this data to reconstruct a more accurate 3-dimensional image. 3D image techniques Another common requirement is to take a series of images at a fixed position, but at different focal depths. Since most microscopic samples are essentially transparent, and the depth of field of the focused sample is exceptionally narrow, it is possible to capture images "through" a three-dimensional object using 2D equipment like confocal microscopes. Software is then able to reconstruct a 3D model of the original sample which may be manipulated appropriately. The processing turns a 2D instrument into a 3D instrument, which would not otherwise exist. In recent times this technique has led to a number of scientific discoveries in cell biology. Analysis Analysis of images will vary considerably according to application. Typical analysis includes determining where the edges of an object are, counting similar objects, calculating the area, perimeter length and other useful measurements of each object. A common approach is to create an image mask which only includes pixels that match certain criteria, then perform simpler scanning operations on the resulting mask. It is also possible to label objects and track their motion over a series of frames in a video sequence. See also Image processing References Jan-Mark Geusebroek, Color and Geometrical Structure in Images, Applications in Microscopy, Young Ian T., Not just pretty pictures: Digital quantitative microscopy, Proc. Royal Microscopical Society, 1996, 31(4), pp. 311–313. Young Ian T., Quantitative Microscopy, IEEE Engineering in Medicine and Biology, 1996, 15(1), pp. 59–66. Young Ian T., Sampling density and quantitative microscopy, Analytical and Quantitative Cytology and Histology, vol. 10, 1988, pp. 269–275 External links Quantitative imaging (broken link) Image processing Microscopy
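A minimal Python sketch of the basic 2D operations described in the sections above (dark-frame subtraction, flat-field division, frame averaging and a sharpening convolution), assuming NumPy and SciPy are available; the image data here are synthetic:

import numpy as np
from scipy.ndimage import convolve

rng = np.random.default_rng(0)
raw = rng.poisson(200, size=(64, 64)).astype(float)    # synthetic raw frame
dark = np.full_like(raw, 5.0)                          # dark (offset) frame
flat = np.ones_like(raw)                               # flat-field (illumination) frame

# Flat-field correction: subtract the dark frame and divide by the
# normalised illumination pattern.
corrected = (raw - dark) / (flat / flat.mean())

# Frame averaging to reduce noise (the same frame repeated here for brevity).
stack = np.stack([corrected, corrected, corrected])
averaged = stack.mean(axis=0)

# 3x3 sharpening kernel applied by convolution.
sharpen = np.array([[ 0, -1,  0],
                    [-1,  5, -1],
                    [ 0, -1,  0]], dtype=float)
sharpened = convolve(averaged, sharpen, mode="nearest")
print(sharpened.shape)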
Microscope image processing
[ "Chemistry" ]
1,283
[ "Microscopy" ]
275,636
https://en.wikipedia.org/wiki/Color%20gel
A color gel or color filter (Commonwealth spelling: colour gel or colour filter), also known as lighting gel or simply gel, is a transparent colored material that is used in theater, event production, photography, videography and cinematography to color light and for color correction. Modern gels are thin sheets of polycarbonate, polyester or other heat-resistant plastics, placed in front of a lighting fixture in the path of the beam. Gels have a limited life, especially in saturated colors (lower light transmission) and shorter wavelengths (blues). The color will fade or even melt, depending upon the energy absorption of the color, and the sheet will have to be replaced. In permanent installations and some theatrical uses, colored glass filters or dichroic filters are used. The main drawbacks are additional expense and a more limited selection. History In Shakespearean-era theater, red wine was used in a glass container as a light filter. In later days, colored water or silk was used to filter light in the theater. Later, a gelatin base became the material of choice. Gelatin gel was available at least until 1979. The name gel has continued to be used to the present day. Gelatin-based color media had no melting point, and the color was cast in the media as opposed to being coated on the surface. It would, however, char at high temperatures and become brittle once heated, so that it could not be handled once used in the lighting instrument. By 1945, more heat-tolerant and self-extinguishing acetate-based through-dyed materials were being manufactured (marketed as Chromoid then Cinemoid by Strand Electric). In the U.S., Roscolene (acetate) was developed to deal with higher output light sources. Though cheaper, the acetate filters eventually fell out of favor with professional organizations since they could not withstand the higher temperatures produced by the tungsten halogen lamps that came into widespread use in the late 1960s. The acetate-based material was replaced by polycarbonates like Roscolar (mylar polycarbonate) and polyester-based filters. These materials have superior heat tolerance, polyester having the highest melting point. Often a surface coating was applied on a transparent film. The first dyed polyester gels were introduced by Berkey Colortran in 1969 as Gelatran, the original deep-dyed polyester. The Gelatran process is still used today to produce GAMColor (100% of the line) and Roscolux (about 30% of the line). Other color manufacturers, such as Lee Filters and Apollo Design Technology, use a surface applied dye. (Roscolux is 70% polycarbonate and 30% deep-dyed polyester.) Almost every color manufacturer today uses either polycarbonate or polyester to manufacture their gels. Even today's gels can burn out (to lighten in color starting in the center) easily, rendering them useless. As instrument design improves, it has become a selling point on many lights to have as little heat radiating from the front of the fixture as possible to prevent burn-through, and keep stage equipment and actors cooler. In the 1930s, Strand Electric of London provided the first numbering system for their swatches and with their agents in New York and Sydney, the numbering system went round the world. Remnants of this original filter color system exist in the color swatches of today (such as Deep Amber = No. 3; Primary Red = No. 6; Middle Rose = No. 10; Peacock Blue = No. 15; Primary Blue = No. 20; Primary Green = No. 39).
In the theater, gels are typically available in single sheets, which are then cut down to the appropriate size before use. The size originates from the gelatin days: it is the same as a standard baker's sheet, which was used to cast the sheets. In the film industry, gels are usually cut straight from rolls wide and long, as the size required may vary from a single practical halogen spotlight in a ceiling to a whole window. Colors Similar colors may vary between different companies' formulations. For example, many have a color named "bastard amber", but the transmitted color spectrum may be different. For this reason it is often misleading to refer to gel colors by name. Even a familiar color name, like Steel Blue, transmits widely differing colored light in each manufacturer's line. By necessity, color gels are selected by specifying the manufacturer, line, color number, and name: Rosco Cinegel #3202 Full Blue CTB. Apollo Design Technology uses a four digit number based on the visible spectrum to designate and locate specific color transmissions. The GAMColor line from Rosco employs a three digit numbering system, organized by the wavelength of the principal color in the family, i.e.: Blues in the 800s with primary blue at 850 (though the manufacturer's numbers do not relate directly to any wavelength, transmission, or frequency). The same applies to Greens in the 600s, Reds in the 200s, etc. Rosco's Roscolux line is currently the oldest major line of color media. It started with only a two-digit numbering system, listing colors in no particular order. As the range demanded by designers increased and many more colors were offered in the 1970s and 1980s, two digits quickly proved inadequate. As a result, the original scheme was overlaid by three-digit and eventually four-digit numbers in between the original two-digit colors in the line. Manufacturers produce swatch books, which contain a small sample of each color, along with the color name and manufacturer's catalogue number. Many manufacturers also provide spectral analysis for each color and transmission values, expressed as a percentage of light allowed to pass through the filter from the light source. Swatch books enable designers and technicians to have a true representation of the manufacturers' range of colors. Many designers choose a limited color palette for generic applications because it is financially and logistically difficult to have access to all colors for a single show. There are also gels for color correction, such as CTB (color temperature blue) and CTO (color temperature orange). Color correction gels alter or correct the color temperature of a light to more closely match the color temperature of a film negative or the white balance of a digital imager. Specifically CTB, which is blue in appearance, will correct tungsten lights that typically have a color temperature in the range of 3,200 to 5,700 kelvins to more closely match the color temperature of "daylight" negative, which is usually around 5,400 K (nominal daylight). CTO, which is orange in appearance, will correct a "daylight"-balanced light source (such as many common HMI bulbs) to match the color temperature of tungsten negative, which is typically 3,200 K. There are "half" and "quarter" variations of the common color correction gels. It is common to use color correction gels for artistic purposes and not just for negative-to-lightsource correction.
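Color-correction filters of this kind are commonly compared by their mired shift (one million divided by the colour temperature in kelvins), a standard photographic convention rather than something stated above. A small Python sketch of the tungsten/daylight conversion mentioned in the text:

def mired(kelvin):
    # Mired value = 1,000,000 / colour temperature in kelvins
    return 1_000_000.0 / kelvin

def mired_shift(source_kelvin, target_kelvin):
    # Positive shift means warming (CTO-like); negative means cooling (CTB-like)
    return mired(target_kelvin) - mired(source_kelvin)

print(mired_shift(5500, 3200))   # ~ +131 mired: a warming (CTO-type) shift, 5500 K to 3200 K
print(mired_shift(3200, 5500))   # ~ -131 mired: a cooling (CTB-type) shift, 3200 K to 5500 K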
Most ranges of gels also include non-colored media, such as a variety of diffusion and directional "silk" materials to produce special lighting effects. "Opal", for example, is an opalescent or translucent diffusion filter. It is common for a gel manufacturer to publish the transmission coefficient or even the spectral transmittance curve in the swatch book and catalogs. A low-transmittance gel will produce relatively little light on stage, but will cast a much more vivid color than a high-transmittance gel, because the colorfulness of a light source is directly related to the narrowness of its spectral linewidth. Conversely, the flatter its curve becomes, the closer the gel is to a neutral density filter. See also Photographic filter Wratten number References Cinematography Optical filters Stage lighting
Color gel
[ "Chemistry" ]
1,628
[ "Optical filters", "Filters" ]
275,651
https://en.wikipedia.org/wiki/Shared-nothing%20architecture
A shared-nothing architecture (SN) is a distributed computing architecture in which each update request is satisfied by a single node (processor/memory/storage unit) in a computer cluster. The intent is to eliminate contention among nodes. Nodes do not share (independently access) the same memory or storage. One alternative architecture is shared everything, in which requests are satisfied by arbitrary combinations of nodes. This may introduce contention, as multiple nodes may seek to update the same data at the same time. It also contrasts with shared-disk and shared-memory architectures. SN eliminates single points of failure, allowing the overall system to continue operating despite failures in individual nodes and allowing individual nodes to upgrade hardware or software without a system-wide shutdown. An SN system can scale simply by adding nodes, since no central resource bottlenecks the system. In databases, a term for the part of a database on a single node is a shard. An SN system typically partitions its data among many nodes. A refinement is to replicate commonly used but infrequently modified data across many nodes, allowing more requests to be resolved on a single node. History Michael Stonebraker at the University of California, Berkeley, used the term in a 1986 database paper. Teradata delivered the first SN database system in 1983. Tandem Computers' NonStop systems, a shared-nothing implementation of hardware and software, were released to market in 1976. Tandem Computers later released NonStop SQL, a shared-nothing relational database, in 1984. Applications Shared-nothing is popular for web development. Shared-nothing architectures are prevalent for data warehousing applications, although requests that require data from multiple nodes can dramatically reduce throughput. See also References Data partitioning Distributed computing architecture
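The sharding idea described above can be made concrete with a small sketch. Nothing below comes from a particular database product: the node names, the hash-based key placement and the replicated reference table are illustrative assumptions, and real shared-nothing systems typically use consistent hashing or range partitioning plus a replication protocol rather than this toy scheme.

```python
import hashlib

# Hypothetical shared-nothing cluster: each node owns its own storage dict.
NODES = {name: {} for name in ("node-a", "node-b", "node-c")}

# Small, rarely modified reference data replicated to every node so that
# requests touching it can still be answered by a single node.
REFERENCE = {"currency": "USD"}
for store in NODES.values():
    store["_ref"] = dict(REFERENCE)

def shard_for(key: str) -> str:
    """Map a key to exactly one node (its shard) via a stable hash."""
    digest = int(hashlib.sha256(key.encode()).hexdigest(), 16)
    names = sorted(NODES)
    return names[digest % len(names)]

def put(key: str, value) -> None:
    # An update request is satisfied by the single node that owns the shard.
    NODES[shard_for(key)][key] = value

def get(key: str):
    return NODES[shard_for(key)].get(key)

put("customer:42", {"name": "Ada"})
print(shard_for("customer:42"), get("customer:42"))
```

Because every key maps to exactly one node, an update touches only that node's store, which is the contention-avoidance property the architecture is built around.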
Shared-nothing architecture
[ "Engineering" ]
358
[ "Data engineering", "Data partitioning" ]
4,840,944
https://en.wikipedia.org/wiki/Presheaf%20%28category%20theory%29
In category theory, a branch of mathematics, a presheaf on a category C is a functor F : Cop → Set. If C is the poset of open sets in a topological space, interpreted as a category, then one recovers the usual notion of presheaf on a topological space. A morphism of presheaves is defined to be a natural transformation of functors. This makes the collection of all presheaves on C into a category, and is an example of a functor category. It is often written as Ĉ = Set^Cop and it is called the category of presheaves on C. A functor into Ĉ is sometimes called a profunctor. A presheaf that is naturally isomorphic to the contravariant hom-functor Hom(–, A) for some object A of C is called a representable presheaf. Some authors refer to a functor F : Cop → V as a V-valued presheaf. Examples A simplicial set is a Set-valued presheaf on the simplex category C = Δ. Properties When C is a small category, the functor category Ĉ is cartesian closed. The poset of subobjects of P forms a Heyting algebra, whenever P is an object of Ĉ for small C. For any morphism f : X → Y of Ĉ, the pullback functor of subobjects f* : Sub(Y) → Sub(X) has a right adjoint, denoted ∀f, and a left adjoint, ∃f. These are the universal and existential quantifiers. A locally small category C embeds fully and faithfully into the category Ĉ of set-valued presheaves via the Yoneda embedding which to every object A of C associates the hom functor Hom(–, A). The category Ĉ admits small limits and small colimits. See limit and colimit of presheaves for further discussion. The density theorem states that every presheaf is a colimit of representable presheaves; in fact, Ĉ is the colimit completion of C (see #Universal property below.) Universal property The construction C ↦ Ĉ is called the colimit completion of C because of the following universal property: if D is a category admitting small colimits, then every functor η : C → D extends, uniquely up to natural isomorphism, to a colimit-preserving functor η̃ : Ĉ → D with η̃ ∘ y ≅ η, where y is the Yoneda embedding. Proof: Given a presheaf F, by the density theorem, we can write F = colim y(Ui) where Ui are objects in C. Then let η̃F = colim η(Ui), which exists by assumption. Since F ↦ η̃F is functorial, this determines the functor η̃ : Ĉ → D. Succinctly, η̃ is the left Kan extension of η along y; hence, the name "Yoneda extension". To see η̃ commutes with small colimits, we show η̃ is a left-adjoint (to some functor). Define Hom(η, –) : D → Ĉ to be the functor given by: for each object M in D and each object U in C, Hom(η, M)(U) = HomD(η(U), M). Then, for each object M in D, since Hom(y(Ui), Hom(η, M)) = Hom(η(Ui), M) by the Yoneda lemma, we have: Hom(η̃F, M) = Hom(colim η(Ui), M) = lim Hom(η(Ui), M) = lim Hom(y(Ui), Hom(η, M)) = Hom(F, Hom(η, M)), which is to say η̃ is a left-adjoint to Hom(η, –). The proposition yields several corollaries. For example, the proposition implies that the construction C ↦ Ĉ is functorial: i.e., each functor C → D determines a functor Ĉ → D̂. Variants A presheaf of spaces on an ∞-category C is a contravariant functor from C to the ∞-category of spaces (for example, the nerve of the category of CW-complexes.) It is an ∞-category version of a presheaf of sets, as a "set" is replaced by a "space". The notion is used, among other things, in the ∞-category formulation of Yoneda's lemma that says: the Yoneda embedding C → Ĉ is fully faithful (here C can be just a simplicial set.) A copresheaf of a category C is a presheaf of Cop. In other words, it is a covariant functor from C to Set. See also Topos Category of elements Simplicial presheaf (this notion is obtained by replacing "set" with "simplicial set") Presheaf with transfers Notes References Further reading Daniel Dugger, Sheaves and Homotopy Theory, the pdf file provided by nlab. Functors Sheaf theory Topos theory
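Since the inline formulas above were lost in extraction, the central objects can be restated compactly in display form. The notation (the hat for the presheaf category, y for the Yoneda embedding, η̃ for the Yoneda extension) is a common convention chosen here for concreteness rather than necessarily the article's original symbols.

```latex
% Notation chosen for concreteness: \widehat{C} = presheaf category,
% y = Yoneda embedding, \tilde\eta = Yoneda extension (left Kan extension).
\[
  \widehat{C} \;=\; \mathbf{Set}^{C^{\mathrm{op}}},
  \qquad
  y \colon C \to \widehat{C}, \qquad y(A) \;=\; \operatorname{Hom}_C(-,A).
\]
% Universal property (colimit completion): if D admits small colimits and
% \eta : C -> D, then \tilde\eta preserves small colimits, satisfies
% \tilde\eta \circ y \cong \eta, and is determined on F \cong \varinjlim_i y(U_i) by
\[
  \tilde\eta(F) \;=\; \varinjlim_i \eta(U_i),
  \qquad
  \operatorname{Hom}_D\bigl(\tilde\eta(F),\,M\bigr)
  \;\cong\;
  \operatorname{Hom}_{\widehat{C}}\bigl(F,\;\operatorname{Hom}_D(\eta(-),M)\bigr).
\]
```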
Presheaf (category theory)
[ "Mathematics" ]
838
[ "Functions and mappings", "Mathematical structures", "Mathematical objects", "Sheaf theory", "Topology", "Mathematical relations", "Category theory", "Functors", "Topos theory" ]
4,842,312
https://en.wikipedia.org/wiki/X-ray%20absorption%20near%20edge%20structure
X-ray absorption near edge structure (XANES), also known as near edge X-ray absorption fine structure (NEXAFS), is a type of absorption spectroscopy that indicates the features in the X-ray absorption spectra (XAS) of condensed matter due to the photoabsorption cross section for electronic transitions from an atomic core level to final states in the energy region of 50–100 eV above the selected atomic core level ionization energy, where the wavelength of the photoelectron is larger than the interatomic distance between the absorbing atom and its first neighbour atoms. Terminology Both XANES and NEXAFS are acceptable terms for the same technique. XANES name was invented in 1980 by Antonio Bianconi to indicate strong absorption peaks in X-ray absorption spectra in condensed matter due to multiple scattering resonances above the ionization energy. The name NEXAFS was introduced in 1983 by Jo Stohr and is synonymous with XANES, but is generally used when applied to surface and molecular science. Theory The fundamental phenomenon underlying XANES is the absorption of an x-ray photon by condensed matter with the formation of many body excited states characterized by a core hole in a selected atomic core level (refer to the first Figure). In the single-particle theory approximation, the system is separated into one electron in the core levels of the selected atomic species of the system and N-1 passive electrons. In this approximation the final state is described by a core hole in the atomic core level and an excited photoelectron. The final state has a very short life time because of the short life-time of the core hole and the short mean free path of the excited photoelectron with kinetic energy in the range around 20-50 eV. The core hole is filled either via an Auger process or by capture of an electron from another shell followed by emission of a fluorescent photon. The difference between NEXAFS and traditional photoemission experiments is that in photoemission, the initial photoelectron itself is measured, while in NEXAFS the fluorescent photon or Auger electron or an inelastically scattered photoelectron may also be measured. The distinction sounds trivial but is actually significant: in photoemission the final state of the emitted electron captured in the detector must be an extended, free-electron state. By contrast, in NEXAFS the final state of the photoelectron may be a bound state such as an exciton since the photoelectron itself need not be detected. The effect of measuring fluorescent photons, Auger electrons, and directly emitted electrons is to sum over all possible final states of the photoelectrons, meaning that what NEXAFS measures is the total joint density of states of the initial core level with all final states, consistent with conservation rules. The distinction is critical because in spectroscopy final states are more susceptible to many-body effects than initial states, meaning that NEXAFS spectra are more easily calculable than photoemission spectra. Due to the summation over final states, various sum rules are helpful in the interpretation of NEXAFS spectra. When the x-ray photon energy resonantly connects a core level with a narrow final state in a solid, such as an exciton, readily identifiable characteristic peaks will appear in the spectrum. These narrow characteristic spectral peaks give the NEXAFS technique a lot of its analytical power as illustrated by the B 1s π* exciton shown in the second Figure. 
Synchrotron radiation has a natural polarization that can be utilized to great advantage in NEXAFS studies. The commonly studied molecular adsorbates have sigma and pi bonds that may have a particular orientation on a surface. The angle dependence of the x-ray absorption tracks the orientation of resonant bonds due to dipole selection rules. Experimental considerations Soft x-ray absorption spectra are usually measured either through the fluorescent yield, in which emitted photons are monitored, or total electron yield, in which the sample is connected to ground through an ammeter and the neutralization current is monitored. Because NEXAFS measurements require an intense tunable source of soft x-rays, they are performed at synchrotrons. Because soft x-rays are absorbed by air, the synchrotron radiation travels from the ring in an evacuated beam-line to the end-station where the specimen to be studied is mounted. Specialized beam-lines intended for NEXAFS studies often have additional capabilities such as heating a sample or exposing it to a dose of reactive gas. Energy range Edge energy range In the absorption edge region of metals, the photoelectron is excited to the first unoccupied level above the Fermi level. Therefore, its mean free path in a pure single crystal at zero temperature is as large as infinite, and it remains very large, increasing the energy of the final state up to about 5 eV above the Fermi level. Beyond the role of the unoccupied density of states and matrix elements in single electron excitations, many-body effects appear as an "infrared singularity" at the absorption threshold in metals. In the absorption edge region of insulators the photoelectron is excited to the first unoccupied level above the chemical potential but the unscreened core hole forms a localized bound state called core exciton. EXAFS energy range The fine structure in the x-ray absorption spectra in the high energy range extending from about 150 eV beyond the ionization potential is a powerful tool to determine the atomic pair distribution (i.e. interatomic distances) with a time scale of about 10−15 s. In fact the final state of the excited photoelectron in the high kinetic energy range (150-2000 eV ) is determined only by single backscattering events due to the low amplitude photoelectron scattering. NEXAFS energy range In the NEXAFS region, starting about 5 eV beyond the absorption threshold, because of the low kinetic energy range (5-150 eV) the photoelectron backscattering amplitude by neighbor atoms is very large so that multiple scattering events become dominant in the NEXAFS spectra. The different energy range between NEXAFS and EXAFS can be also explained in a very simple manner by the comparison between the photoelectron wavelength and the interatomic distance of the photoabsorber-backscatterer pair. The photoelectron kinetic energy is connected with the wavelength by the following relation: which means that for high energy the wavelength is shorter than interatomic distances and hence the EXAFS region corresponds to a single scattering regime; while for lower E, is larger than interatomic distances and the XANES region is associated with a multiple scattering regime. Final states The absorption peaks of NEXAFS spectra are determined by multiple scattering resonances of the photoelectron excited at the atomic absorption site and scattered by neighbor atoms. 
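The energy–wavelength relation referred to above is the de Broglie relation λ = h/√(2mE_kin). The sketch below (hard-coded physical constants, an illustrative 0.25 nm nearest-neighbour distance, and arbitrary sample energies) simply tabulates how the photoelectron wavelength compares with interatomic distances, which is the qualitative criterion separating the multiple-scattering (XANES) and single-scattering (EXAFS) regimes; the crossover is gradual, not a sharp threshold.

```python
import math

H = 6.626e-34        # Planck constant, J s
M_E = 9.109e-31      # electron rest mass, kg
EV = 1.602e-19       # joules per electronvolt

def photoelectron_wavelength_nm(kinetic_energy_ev: float) -> float:
    """de Broglie wavelength lambda = h / sqrt(2 m E_kin), in nanometres."""
    return H / math.sqrt(2.0 * M_E * kinetic_energy_ev * EV) * 1e9

d = 0.25  # illustrative nearest-neighbour distance, nm
for e_kin in (10, 50, 150, 500, 1000):
    lam = photoelectron_wavelength_nm(e_kin)
    # lambda comparable to or larger than d  -> multiple-scattering (XANES) regime
    # lambda much smaller than d             -> single-scattering (EXAFS) regime
    print(f"E_kin = {e_kin:4d} eV   lambda = {lam:.3f} nm   lambda/d = {lam / d:.2f}")
```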
The local character of the final states is determined by the short photoelectron mean free path, that is strongly reduced (down to about 0.3 nm at 50 eV) in this energy range because of inelastic scattering of the photoelectron by electron-hole excitations (excitons) and collective electronic oscillations of the valence electrons called plasmons. Applications The great power of NEXAFS derives from its elemental specificity. Because the various elements have different core level energies, NEXAFS permits extraction of the signal from a surface monolayer or even a single buried layer in the presence of a huge background signal. Buried layers are very important in engineering applications, such as magnetic recording media buried beneath a surface lubricant or dopants below an electrode in an integrated circuit. Because NEXAFS can also determine the chemical state of elements which are present in bulk in minute quantities, it has found widespread use in environmental chemistry and geochemistry. The ability of NEXAFS to study buried atoms is due to its integration over all final states including inelastically scattered electrons, as opposed to photoemission and Auger spectroscopy, which study atoms only with a layer or two of the surface. Much chemical information can be extracted from the NEXAFS region: formal valence (very difficult to experimentally determine in a nondestructive way); coordination environment (e.g., octahedral, tetrahedral coordination) and subtle geometrical distortions of it. Transitions to bound vacant states just above the Fermi level can be seen. Thus NEXAFS spectra can be used as a probe of the unoccupied band structure of a material. The near-edge structure is characteristic of an environment and valence state hence one of its more common uses is in fingerprinting: if you have a mixture of sites/compounds in a sample you can fit the measured spectra with a linear combinations of NEXAFS spectra of known species and determine the proportion of each site/compound in the sample. One example of such a use is the determination of the oxidation state of the plutonium in the soil at Rocky Flats. History The acronym XANES was first used in 1980 during interpretation of multiple scattering resonances spectra measured at the Stanford Synchrotron Radiation Laboratory (SSRL) by A. Bianconi. In 1982 the first paper on the application of XANES for determination of local structural geometrical distortions using multiple scattering theory was published by A. Bianconi, P. J. Durham and J. B. Pendry. In 1983 the first NEXAFS paper examining molecules adsorbed on surfaces appeared. The first XAFS paper, describing the intermediate region between EXAFS and XANES, appeared in 1987. Software for NEXAFS analysis ADF Calculation of NEXAFS using spin-orbit coupling TDDFT or the Slater-TS method. FDMNES Calculation of NEXAFS using finite difference method and full multiple scattering theory. FEFF8 Calculation of NEXAFS using full multiple scattering theory. MXAN NEXAFS fitting using full multiple scattering theory. FitIt NEXAFS fitting using multidimensional interpolation approximation. PARATEC NEXAFS calculation using plane-wave pseudopotential approach WIEN2k NEXAFS calculation on the basis of full-potential (linearized) augmented plane-wave approach. References Bibliography "X-ray Absorption Near-Edge Structure (XANES) Spectroscopy", G. S. Henderson, F. M. F. de Groot, B. J. A. Moulton in Spectroscopic Methods in Mineralogy and Materials Sciences, (G.S. Henderson, D. R. Neuville, R. T. 
Downs, Eds) Reviews in Mineralogy & Geochemistry vol. 78, p 75, 2014. DOI:10.2138/rmg.2014.78.3. "X-ray Absorption: Principles, Applications, Techniques of EXAFS, SEXAFS, and XANES", D. C. Koningsberger, R. Prins; A. Bianconi, P.J. Durham Chapters, Chemical Analysis 92, John Wiley & Sons, 1988. "Principles and Applications of EXAFS" Chapter 10 in Handbook of Synchrotron Radiation, pp 995–1014. E. A. Stern and S. M. Heald, E. E. Koch, ed., North-Holland, 1983. NEXAFS Spectroscopy by J. Stöhr, Springer 1992, . External links M. Newville, Fundamentals of XAFS S. Bare, XANES measurements and interpretation B. Ravel, A practical introduction to multiple scattering X-ray absorption spectroscopy
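The "fingerprinting" procedure mentioned in the Applications section above, fitting a measured spectrum as a linear combination of reference spectra of known species, can be sketched as a non-negative least-squares problem. The reference spectra below are synthetic Gaussians used purely for illustration; a real analysis would use measured standards and usually include normalization and energy alignment.

```python
import numpy as np
from scipy.optimize import nnls

energy = np.linspace(7100, 7200, 400)  # eV, illustrative edge region

def peak(center, width):
    return np.exp(-0.5 * ((energy - center) / width) ** 2)

# Synthetic "reference" spectra for two known species (illustrative shapes only).
ref_a = peak(7125, 4) + 0.6 * peak(7140, 8)
ref_b = peak(7132, 5) + 0.4 * peak(7150, 9)

# Synthetic "measured" spectrum: 70% species A + 30% species B + noise.
rng = np.random.default_rng(0)
measured = 0.7 * ref_a + 0.3 * ref_b + rng.normal(0, 0.01, energy.size)

# Fit the measured spectrum as a non-negative linear combination of references.
A = np.column_stack([ref_a, ref_b])
fractions, residual = nnls(A, measured)
print("fitted fractions:", fractions, "residual norm:", residual)
```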
X-ray absorption near edge structure
[ "Chemistry", "Materials_science", "Engineering" ]
2,408
[ "X-ray absorption spectroscopy", "Materials science", "Laboratory techniques in condensed matter physics" ]
4,843,202
https://en.wikipedia.org/wiki/Elementary%20abelian%20group
In mathematics, specifically in group theory, an elementary abelian group is an abelian group in which all elements other than the identity have the same order. This common order must be a prime number, and the elementary abelian groups in which the common order is p are a particular kind of p-group. A group for which p = 2 (that is, an elementary abelian 2-group) is sometimes called a Boolean group. Every elementary abelian p-group is a vector space over the prime field with p elements, and conversely every such vector space is an elementary abelian group. By the classification of finitely generated abelian groups, or by the fact that every vector space has a basis, every finite elementary abelian group must be of the form (Z/pZ)n for n a non-negative integer (sometimes called the group's rank). Here, Z/pZ denotes the cyclic group of order p (or equivalently the integers mod p), and the superscript notation means the n-fold direct product of groups. In general, a (possibly infinite) elementary abelian p-group is a direct sum of cyclic groups of order p. (Note that in the finite case the direct product and direct sum coincide, but this is not so in the infinite case.) Examples and properties The elementary abelian group (Z/2Z)2 has four elements: {(0,0), (0,1), (1,0), (1,1)}. Addition is performed componentwise, taking the result modulo 2. For instance, (1,0) + (1,1) = (0,1). This is in fact the Klein four-group. In the group generated by the symmetric difference on a (not necessarily finite) set, every element has order 2. Any such group is necessarily abelian because, since every element is its own inverse, xy = (xy)−1 = y−1x−1 = yx. Such a group (also called a Boolean group) generalizes the Klein four-group example to an arbitrary number of components. (Z/pZ)n is generated by n elements, and n is the least possible number of generators. In particular, the set {e1, ..., en}, where ei has a 1 in the ith component and 0 elsewhere, is a minimal generating set. Every finite elementary abelian group has a fairly simple finite presentation: ⟨e1, ..., en | ei^p = 1, ei ej = ej ei⟩. Vector space structure Suppose V ≅ (Z/pZ)n is a finite elementary abelian group. Since Z/pZ ≅ Fp, the finite field of p elements, we have V = (Z/pZ)n ≅ Fpn, hence V can be considered as an n-dimensional vector space over the field Fp. Note that an elementary abelian group does not in general have a distinguished basis: a choice of isomorphism V ≅ (Z/pZ)n corresponds to a choice of basis. To the observant reader, it may appear that Fpn has more structure than the group V, in particular that it has scalar multiplication in addition to (vector/group) addition. However, V as an abelian group has a unique Z-module structure where the action of Z corresponds to repeated addition, and this Z-module structure is consistent with the Fp scalar multiplication. That is, c⋅g = g + g + ... + g (c times) where c in Fp (considered as an integer with 0 ≤ c < p) gives V a natural Fp-module structure. Automorphism group As a finite-dimensional vector space V has a basis {e1, ..., en} as described in the examples, if we take {v1, ..., vn} to be any n elements of V, then by linear algebra we have that the mapping T(ei) = vi extends uniquely to a linear transformation of V. Each such T can be considered as a group homomorphism from V to V (an endomorphism) and likewise any endomorphism of V can be considered as a linear transformation of V as a vector space. If we restrict our attention to automorphisms of V we have Aut(V) = { T : V → V | ker T = 0 } = GLn(Fp), the general linear group of n × n invertible matrices on Fp. 
The automorphism group GL(V) = GLn(Fp) acts transitively on V \ {0} (as is true for any vector space). This in fact characterizes elementary abelian groups among all finite groups: if G is a finite group with identity e such that Aut(G) acts transitively on G \ {e}, then G is elementary abelian. (Proof: if Aut(G) acts transitively on G \ {e}, then all nonidentity elements of G have the same (necessarily prime) order. Then G is a p-group. It follows that G has a nontrivial center, which is necessarily invariant under all automorphisms, and thus equals all of G.) A generalisation to higher orders It can also be of interest to go beyond prime order components to prime-power order. Consider an elementary abelian group G to be of type (p,p,...,p) for some prime p. A homocyclic group (of rank n) is an abelian group of type (m,m,...,m) i.e. the direct product of n isomorphic cyclic groups of order m, of which groups of type (pk,pk,...,pk) are a special case. Related groups The extra special groups are extensions of elementary abelian groups by a cyclic group of order p, and are analogous to the Heisenberg group. See also Elementary group Hamming space References Abelian group theory Finite groups P-groups
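A minimal computational illustration of the componentwise arithmetic and the Fp-scalar action described above; the function names and the choice p = 2, n = 2 (the Klein four-group) are purely for demonstration.

```python
from itertools import product

def add(x, y, p):
    """Componentwise addition modulo p in (Z/pZ)^n."""
    return tuple((a + b) % p for a, b in zip(x, y))

def scalar(c, x, p):
    """Scalar action of c in F_p, i.e. c-fold repeated addition."""
    return tuple((c * a) % p for a in x)

p, n = 2, 2
elements = list(product(range(p), repeat=n))   # the Klein four-group
print(elements)                                # [(0,0), (0,1), (1,0), (1,1)]
print(add((1, 0), (1, 1), p))                  # (0, 1)

# For p = 2 every element is its own inverse, so the group is Boolean:
assert all(add(x, x, p) == (0, 0) for x in elements)
```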
Elementary abelian group
[ "Mathematics" ]
1,199
[ "Mathematical structures", "Algebraic structures", "Finite groups" ]
4,846,345
https://en.wikipedia.org/wiki/Electron%20electric%20dipole%20moment
The electron electric dipole moment is an intrinsic property of an electron such that the potential energy is linearly related to the strength of the electric field: The electron's electric dipole moment (EDM) must be collinear with the direction of the electron's magnetic moment (spin). Within the Standard Model, such a dipole is predicted to be non-zero but very small, at most , where e stands for the elementary charge. The discovery of a substantially larger electron electric dipole moment would imply a violation of both parity invariance and time reversal invariance. Implications for Standard Model and extensions In the Standard Model, the electron EDM arises from the CP-violating components of the CKM matrix. The moment is very small because the CP violation involves quarks, not electrons directly, so it can only arise by quantum processes where virtual quarks are created, interact with the electron, and then are annihilated. If neutrinos are Majorana particles, a larger EDM (around ) is possible in the Standard Model. Many extensions to the Standard Model have been proposed in the past two decades. These extensions generally predict larger values for the electron EDM. For instance, the various technicolor models predict that ranges from 10−27 to 10−29 e⋅cm. Some supersymmetric models predict that but some other parameter choices or other supersymmetric models lead to smaller predicted values. The present experimental limit therefore eliminates some of these technicolor/supersymmetric theories, but not all. Further improvements, or a positive result, would place further limits on which theory takes precedence. Formal definition As the electron has a net charge, the definition of its electric dipole moment is ambiguous in that depends on the point about which the moment of the charge distribution is taken. If we were to choose to be the center of charge, then would be identically zero. A more interesting choice would be to take as the electron's center of mass evaluated in the frame in which the electron is at rest. Classical notions such as the center of charge and mass are, however, hard to make precise for a quantum elementary particle. In practice the definition used by experimentalists comes from the form factors appearing in the matrix element of the electromagnetic current operator between two on-shell states with Lorentz invariant phase space normalization in which Here and are 4-spinor solutions of the Dirac equation normalized so that , and is the momentum transfer from the current to the electron. The form factor is the electron's charge, is its static magnetic dipole moment, and provides the formal definition of the electron's electric dipole moment. The remaining form factor would, if nonzero, be the anapole moment. Experimental measurements Electron EDMs are usually not measured on free electrons, but instead on bound, unpaired valence electrons inside atoms and molecules. In these, one can observe the effect of as a slight shift of spectral lines. The sensitivity to scales approximately with the nuclear charge cubed. For this reason, electron EDM searches almost always are conducted on systems involving heavy elements. To date, no experiment has found a non-zero electron EDM. As of 2020 the Particle Data Group publishes its value as . Here is a list of some electron EDM experiments after 2000 with published results: The ACME collaboration is, as of 2020, developing a further version of the ACME experiment series. 
The latest experiment is called Advanced ACME or ACME III and it aims to improve the limit on electron EDM by one to two orders of magnitude. Future proposed experiments Besides the above groups, electron EDM experiments are being pursued or proposed by the following groups: University of Groningen: BaF molecular beam John Doyle (Harvard University), Nicholas Hutzler (California Institute of Technology), and Timothy Steimle (Arizona State University): YbOH molecular trap EDMcubed collaboration, Amar Vutha (University of Toronto), Eric Hessels (York University): oriented polar molecules in an inert gas matrix David Weiss (Pennsylvania State University): Cs and Rb atoms trapped inside an optical lattice TRIUMF: Fountain of laser cooled Fr EDMMA collaboration: Cs in an inert gas matrix See also Neutron electric dipole moment Electron magnetic moment Anomalous electric dipole moment Anomalous magnetic dipole moment Electric dipole spin resonance CP violation Charge conjugation T-symmetry Footnotes References Electric dipole moment Electromagnetism Particle physics
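The defining relation quoted at the beginning of this article did not survive extraction; in the usual convention (a notational choice, with the unit vector along the spin written Ŝ) it reads:

```latex
% Usual convention; the sign and the use of the unit spin direction
% \hat{S} = \mathbf{S}/|\mathbf{S}| are notational choices.
\[
  U \;=\; -\,\mathbf{d}_e \cdot \mathbf{E},
  \qquad
  \mathbf{d}_e \;=\; d_e\,\hat{S},
  \qquad
  H_{\mathrm{EDM}} \;=\; -\,d_e\,\hat{S}\cdot\mathbf{E}.
\]
```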
Electron electric dipole moment
[ "Physics", "Mathematics" ]
924
[ "Electromagnetism", "Physical phenomena", "Electric dipole moment", "Physical quantities", "Quantity", "Fundamental interactions", "Particle physics", "Moment (physics)" ]
22,423,074
https://en.wikipedia.org/wiki/Linde%E2%80%93Frank%E2%80%93Caro%20process
The Linde–Frank–Caro process is a method for hydrogen production by removing hydrogen and carbon dioxide from water gas by condensation. The process was invented in 1909 by Adolf Frank and developed with Carl von Linde and Heinrich Caro. Process description Water gas is compressed to 20 bar and pumped into the Linde–Frank–Caro reactor. A water column removes most of the carbon dioxide and sulfur. Tubes with caustic soda then remove the remaining carbon dioxide, sulphur, and water from the gas stream. The gas enters a chamber and is cooled to −190 °C, resulting in the condensation of most of the gas to a liquid. The remaining gas is pumped to the next vessel where the nitrogen is liquefied by cooling to −205 °C, resulting in hydrogen gas as an end product. See also Water gas shift reaction Timeline of hydrogen technologies Frank–Caro process, another process used to produce cyanamide from calcium carbide and nitrogen gas in an electric furnace References Hydrogen production Chemical processes Industrial gases 1909 in science 1909 in Germany
Linde–Frank–Caro process
[ "Chemistry" ]
221
[ "Chemical process engineering", "Chemical processes", "Industrial gases", "nan" ]
22,423,865
https://en.wikipedia.org/wiki/Nuclear%20magnetic%20resonance%20spectroscopy%20of%20carbohydrates
Carbohydrate NMR spectroscopy is the application of nuclear magnetic resonance (NMR) spectroscopy to structural and conformational analysis of carbohydrates. This method allows the scientists to elucidate structure of monosaccharides, oligosaccharides, polysaccharides, glycoconjugates and other carbohydrate derivatives from synthetic and natural sources. Among structural properties that could be determined by NMR are primary structure (including stereochemistry), saccharide conformation, stoichiometry of substituents, and ratio of individual saccharides in a mixture. Modern high field NMR instruments used for carbohydrate samples, typically 500 MHz or higher, are able to run a suite of 1D, 2D, and 3D experiments to determine a structure of carbohydrate compounds. Carbohydrate NMR observables Chemical shift Common chemical shift ranges for nuclei within carbohydrate residues are: Typical 1H NMR chemical shifts of carbohydrate ring protons are 3–6 ppm (4.5–5.5 ppm for anomeric protons). Typical 13C NMR chemical shifts of carbohydrate ring carbons are 60–110 ppm In the case of simple mono- and oligosaccharide molecules, all proton signals are typically separated from one another (usually at 500 MHz or better NMR instruments) and can be assigned using 1D NMR spectrum only. However, bigger molecules exhibit significant proton signal overlap, especially in the non-anomeric region (3-4 ppm). Carbon-13 NMR overcomes this disadvantage by larger range of chemical shifts and special techniques allowing to block carbon-proton spin coupling, thus making all carbon signals high and narrow singlets distinguishable from each other. The typical ranges of specific carbohydrate carbon chemical shifts in the unsubstituted monosaccharides are: Anomeric carbons: 90-100 ppm Sugar ring carbons bearing a hydroxy function: 68-77 Open-form sugar carbons bearing a hydroxy function: 71-75 Sugar ring carbons bearing an amino function: 50-56 Exocyclic hydroxymethyl groups: 60-64 Exocyclic carboxy groups: 172-176 Desoxygenated sugar ring carbons: 31-40 A carbon at pyranose ring closure: 71-73 (α-anomers), 74-76 (β-anomers) A carbon at furanose ring closure: 80-83 (α-anomers), 83-86 (β-anomers) Coupling constants Direct carbon-proton coupling constants are used to study the anomeric configuration of a sugar. Vicinal proton-proton coupling constants are used to study stereo orientation of protons relatively to the other protons within a sugar ring, thus identifying a monosaccharide. Vicinal heteronuclear H-C-O-C coupling constants are used to study torsional angles along glycosidic bond between sugars or along exocyclic fragments, thus revealing a molecular conformation. Sugar rings are relatively rigid molecular fragments, thus vicinal proton-proton couplings are characteristic: Equatorial to axial: 1–4 Hz Equatorial to equatorial: 0–2 Hz Axial to axial non-anomeric: 9–11 Hz Axial to axial anomeric: 7–9 Hz Axial to exocyclic hydroxymethyl: 5 Hz, 2 Hz Geminal between hydroxymethyl protons: 12 Hz Nuclear Overhauser effects (NOEs) NOEs are sensitive to interatomic distances, allowing their usage as a conformational probe, or proof of a glycoside bond formation. It's a common practice to compare calculated to experimental proton-proton NOEs in oligosaccharides to confirm a theoretical conformational map. Calculation of NOEs implies an optimization of molecular geometry. Other NMR observables Relaxivities, nuclear relaxation rates, line shape and other parameters were reported useful in structural studies of carbohydrates. 
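A subset of the 13C ranges listed above lends itself to a simple lookup, sketched below. The ranges are copied from the list above; everything else (function names, example shifts) is illustrative, and a real assignment would of course rely on 2D experiments rather than shift values alone.

```python
# 13C chemical-shift ranges (ppm) for unsubstituted monosaccharides,
# taken from the list above (a subset of the entries).
RANGES = [
    ("anomeric carbon", 90, 100),
    ("ring carbon bearing OH", 68, 77),
    ("open-form carbon bearing OH", 71, 75),
    ("ring carbon bearing NH2", 50, 56),
    ("exocyclic hydroxymethyl (CH2OH)", 60, 64),
    ("exocyclic carboxyl", 172, 176),
    ("deoxygenated ring carbon", 31, 40),
]

def candidate_assignments(shift_ppm: float):
    """Return every structural type whose range contains the shift."""
    return [name for name, lo, hi in RANGES if lo <= shift_ppm <= hi]

for shift in (98.2, 62.5, 73.1, 174.0):
    print(shift, "->", candidate_assignments(shift) or ["no match"])
```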
Elucidation of carbohydrate structure by NMR spectroscopy Structural parameters of carbohydrates The following is a list of structural features that can be elucidated by NMR: Chemical structure of each carbohydrate residue in a molecule, including carbon skeleton size and sugar type (aldose/ketose) cycle size (pyranose/furanose/linear) stereo configuration of all carbons (monosaccharide identification) stereo configuration of anomeric carbon (α/β) absolute configuration (D/L) location of amino-, carboxy-, deoxy- and other functions Chemical structure of non-carbohydrate residues in molecule (amino acids, fatty acids, alcohols, organic aglycons etc.) Substitution positions in residues Sequence of residues Stoichiometry of terminal residues and side chains Location of phosphate and sulfate diester bonds Polymerization degree and frame positioning (for polysaccharides) NMR spectroscopy vs. other methods Widely known methods of structural investigation, such as mass-spectrometry and X-ray analysis are only limitedly applicable to carbohydrates. Such structural studies, such as sequence determination or identification of new monosaccharides, benefit the most from the NMR spectroscopy. Absolute configuration and polymerization degree are not always determinable using NMR only, so the process of structural elucidation may require additional methods. Although monomeric composition can be solved by NMR, chromatographic and mass-spectroscopic methods provide this information sometimes easier. The other structural features listed above can be determined solely by the NMR spectroscopic methods. The limitation of the NMR structural studies of carbohydrates is that structure elucidation can hardly be automatized and require a human expert to derive a structure from NMR spectra. Application of various NMR techniques to carbohydrates Complex glycans possess a multitude of overlapping signals, especially in a proton spectrum. Therefore, it is advantageous to utilize 2D experiments for the assignment of signals. The table and figures below list most widespread NMR techniques used in carbohydrate studies. Research scheme NMR spectroscopic research includes the following steps: Extraction of carbohydrate material (for natural glycans) Chemical removal of moieties masking regularity (for polymers) Separation and purification of carbohydrate material (for 2D NMR experiments, 10 mg or more is recommended) Sample preparation (usually in D2O) Acquisition of 1D spectra Planning, acquisition and processing of other NMR experiments (usually requires from 5 to 20 hours) Assignment and interpretation of spectra (see exemplary figure) If a structural problem could not be solved: chemical modification/degradation and NMR analysis of products Acquisition of spectra of the native (unmasked) compound and their interpretation based on modified structure Presentation of results Carbohydrate NMR databases and tools Multiple chemical shift databases and related services have been created to aid structural elucidation of and expert analysis of their NMR spectra. 
Of them, several informatics tools are dedicated solely to carbohydrates: GlycoSCIENCES.de over 4,000 NMR spectra of mammalian glycans search of structure by NMR signals and vice versa CSDB (carbohydrate structure database) contains: over 20,000 NMR spectra (as of 2024) of bacterial, plant, fungal and protistal glycans, search of structure by NMR signals and vice versa empirical spectra simulation routine optimized for carbohydrates, statistical chemical shift estimation based on HOSE algorithm optimized for carbohydrates, structure generation and NMR-based ranking tool. CASPER (computer assisted spectrum evaluation of regular polysaccharides). contains: chemical shift database, empirical spectra simulation routine optimized for carbohydrates, online interface. structure matching tool. Both proton and carbon C and H chemical shifts can be used to access structural information. Simulation of the NMR observables Several approaches to simulate NMR observables of carbohydrates has been reviewed. They include: Universal statistical database approaches (ACDLabs, Modgraph, etc.) Usage of neural networks to refine the predictions Regression based methods CHARGE Carbohydrate-optimized empirical schemes (CSDB/BIOPSEL, CASPER). Combined molecular mechanics/dynamics geometry calculation and quantum-mechanical simulation/iteration of NMR observables (PERCH NMR Software) ONIOM approaches (optimization of different parts of molecule with different accuracy) Ab initio calculations. Growing computational power allows usage of thorough quantum-mechanical calculations at high theory levels and large basis sets for refining the molecular geometry of carbohydrates and subsequent prediction of NMR observables using GIAO and other methods with or without solvent effect account. Among combinations of theory level and a basis set reported as sufficient for NMR predictions were B3LYP/6-311G++(2d,2p) and PBE/PBE (see review). It was shown for saccharides that carbohydrate-optimized empirical schemes provide significantly better accuracy (0.0-0.5 ppm per 13C resonance) than quantum chemical methods (above 2.0 ppm per resonance) reported as best for NMR simulations, and work thousands times faster. However, these methods can predict only chemical shifts and perform poor for non-carbohydrate parts of molecules. As a representative example, see figure on the right. See also Methods of 1D and 2D NMR spectroscopy in structural studies of natural glycopolymers (lection) Carbohydrate databases in the recent decade (lection; includes NMR simulation data) Carbohydrate Glycan Nuclear magnetic resonance Nuclear magnetic resonance spectroscopy of nucleic acids Nuclear magnetic resonance spectroscopy of proteins NMR spectroscopy Nuclear Overhauser effect References Further reading External links Glycobiology Carbohydrate chemistry Carbohydrates Carbohydrates
Nuclear magnetic resonance spectroscopy of carbohydrates
[ "Physics", "Chemistry", "Biology" ]
2,160
[ "Biomolecules by chemical classification", "Carbohydrates", "Nuclear magnetic resonance", "Spectrum (physical sciences)", "Nuclear magnetic resonance spectroscopy", "Organic compounds", "Carbohydrate chemistry", "nan", "Chemical synthesis", "Biochemistry", "Glycobiology", "Spectroscopy" ]
22,423,972
https://en.wikipedia.org/wiki/Protein%E2%80%93carbohydrate%20interaction
Carbohydrate–protein interactions are the intermolecular and intramolecular interactions between protein and carbohydrate moieties. These interactions form the basis of specific recognition of carbohydrates by lectins. Carbohydrates are important biopolymers and have a variety of functions. Carbohydrates often serve as recognition elements; that is, they are specifically recognized by other biomolecules. Proteins which bind carbohydrate structures are known as lectins. Compared to the study of protein–protein and protein–DNA interactions, the study of protein–carbohydrate binding is relatively recent. Many of these interactions involve carbohydrates found at the cell surface, as part of a membrane glycoprotein or glycolipid. These interactions can play a role in cellular adhesion and other cellular recognition events. Intramolecular carbohydrate–protein interactions refer to interactions between glycan and polypeptide moieties in glycoproteins or other glycosylated proteins. Classification Generally, there are two types of protein–carbohydrate binding important in biological processes: binding by lectins and binding by antibodies. Lectin Lectins are proteins that bind carbohydrates through their carbohydrate recognition domains (CRDs), and the type of CRD can be used to classify them. C-type Ca2+ is required to activate the binding; the Ca2+ ion binds to the protein and the carbohydrate through non-covalent interactions. Mannose-binding protein (MBP) contains a C-type CRD. P-type Two types of mannose-6-phosphate receptor recognize phosphorylated saccharides. One is cation-dependent and the other does not require a cation for activation. I-type I-type lectins are named for their immunoglobulin-like domain. Sialoadhesin is an I-type lectin that binds specifically to sialic acid. Antibody Most antibodies share a similar structure except for the hypervariable region, which forms the antigen binding site. This region is built up from a combination of various amino acids. When the antigen is a carbohydrate (polysaccharide), the binding can be regarded as a protein–carbohydrate interaction. Biological function Protein–carbohydrate interactions play an important role in biological function. Cell adhesion Signal Transduction Host-Pathogen Recognition Inflammation Stabilization of protein structure Methods of study X-ray crystallography As for other organic molecules, X-ray crystallography is a very useful tool for obtaining detailed information on the interaction between a carbohydrate and a protein. NMR Study By using titration, NOESY (Nuclear Overhauser Effect SpectroscopY) and CIDNP experiments, the specificity and affinity of binding, association constants and equilibrium thermodynamic parameters of carbohydrate–protein binding can be studied. Molecular Modeling In many cases conformational information is required but cannot be obtained directly from experiments, so a knowledge-based model-building approach is used. Fluorescence Spectrometry Fluorescence spectrometry is a useful tool with its own advantages: no separation procedure is needed, and there are many possible sources of fluorophores, since some amino acids and ligands act as fluorophores once excited. Dual polarisation interferometry Dual polarisation interferometry is a label-free analytical technique for measuring interactions and associated conformational changes. 
Advances in the study of protein–carbohydrate binding Microarray-Based Study by Metal Nanoparticle Probes Recently, studies using metal nanoparticle probes to detect carbohydrate–protein interactions have been reported. The use of gold and silver nanoparticle probes in resonant light scattering (RLS) gives particularly high sensitivity. Zhenxin Wang and coworkers applied this method, based on the same principle, to detect the interaction between carbohydrates and proteins. Carbohydrate biosensor Because lectins can bind strongly to specific carbohydrates, scientists have developed several lectin-based carbohydrate biosensors. A designed lectin contains specific groups that can be detected by an analytical method. Isothermal Titration Calorimetry References Carbohydrate chemistry Carbohydrates Glycobiology
Protein–carbohydrate interaction
[ "Chemistry", "Biology" ]
958
[ "Biomolecules by chemical classification", "Carbohydrates", "Organic compounds", "Carbohydrate chemistry", "Chemical synthesis", "nan", "Biochemistry", "Glycobiology" ]
22,424,589
https://en.wikipedia.org/wiki/Trimethylsulfonium
Trimethylsulfonium (systematically named trimethylsulfanium) is an organic cation with the chemical formula (CH3)3S+ (also written as [S(CH3)3]+). Compounds Several salts of trimethylsulfonium are known. X-ray crystallography reveals that the ion has trigonal pyramidal molecular geometry at the sulfur atom, with C-S-C angles near 102° and a C-S bond distance of 177 picometers. Unless the counteranion is colored, all trimethylsulfonium salts are white or colorless. Preparation Sulfonium compounds can be synthesised by treating a suitable alkyl halide with a thioether. For example, the reaction of dimethyl sulfide with iodomethane yields trimethylsulfonium iodide: (CH3)2S + CH3I → (CH3)3S+ I−. Related An extra oxygen atom can bond to the sulfur atom to yield the trimethylsulfoxonium ion (CH3)3SO+, where the sulfur atom is tetravalent and tetracoordinated. Use Glyphosate herbicide is often supplied as a trimethylsulfonium salt, referred to as trimesium. When mixed with aluminium bromide, aluminium chloride or even hydrogen bromide, trimethylsulfonium bromide forms an ionic liquid, which is molten at temperatures below standard ambient conditions. References See also Onium compounds Cations Sulfur ions Methyl compounds
Trimethylsulfonium
[ "Physics", "Chemistry" ]
279
[ "Cations", "Ions", "Sulfur ions", "Matter" ]
22,426,870
https://en.wikipedia.org/wiki/Numerical%20renormalization%20group
The numerical renormalization group (NRG) is a technique devised by Kenneth Wilson to solve certain many-body problems where quantum impurity physics plays a key role. History The numerical renormalization group is an inherently non-perturbative procedure, which was originally used to solve the Kondo model. The Kondo model is a simplified theoretical model which describes a system of magnetic spin-1/2 impurities which couple to metallic conduction electrons (e.g. iron impurities in gold). This problem is notoriously difficult to tackle theoretically, since perturbative techniques break down at low-energy. However, Wilson was able to prove for the first time using the numerical renormalization group that the ground state of the Kondo model is a singlet state. But perhaps more importantly, the notions of renormalization, fixed points, and renormalization group flow were introduced to the field of condensed matter theory — it is for this that Wilson won the Nobel Prize in 1982. The complete behaviour of the Kondo model, including both the high-temperature 'local moment' regime and the low-temperature 'strong coupling' regime are captured by the numerical renormalization group; an exponentially small energy scale TK (not accessible from straight perturbation theory) was shown to govern all properties at low-energies, with all physical observables such as resistivity, thermodynamics, dynamics etc. exhibiting universal scaling. This is a characteristic feature of many problems in condensed matter physics, and is a central theme of quantum impurity physics in particular. In the original example of the Kondo model, the impurity local moment is completely screened below TK by the conduction electrons via the celebrated Kondo effect; and one famous consequence is that such materials exhibit a resistivity minimum at low temperatures, contrary to expectations based purely on the standard phonon contribution, where the resistivity is predicted to decrease monotonically with temperature. The very existence of local moments in real systems of course presupposes strong electron-electron correlations. The Anderson impurity model describes a quantum level with an onsite Coulomb repulsion between electrons (rather than a spin), which is tunnel-coupled to metallic conduction electrons. In the singly occupied regime of the impurity, one can derive the Kondo model from the Anderson model, but the latter contains other physics associated with charge fluctuations. The numerical renormalization group was extended to deal with the Anderson model (capturing thereby both Kondo physics and valence fluctuation physics) by H. R. Krishnamurthy et al. in 1980. Indeed, various important developments have been made since: a comprehensive modern review has been compiled by Bulla et al. Technique The numerical renormalization group is an iterative procedure, which is an example of a renormalization group technique. The technique consists of first dividing the conduction band into logarithmic intervals (i.e. intervals which get smaller exponentially as you move closer to the Fermi energy). One conduction band state from each interval is retained, this being the totally symmetric combination of all the states in that interval. The conduction band has now been "logarithmically discretized". The Hamiltonian is now in a position to be transformed into so-called linear chain form, in which the impurity is coupled to only one conduction band state, which is coupled to one other conduction band state and so on. 
Crucially, these couplings decrease exponentially along the chain, so that, even though the transformed Hamiltonian is for an infinite chain, one can consider a chain of finite length and still obtain useful results. The only restriction to the conduction-band is that it is non-interacting. Recent developments make it possible for mapping a general multi-channel conduction-band with channel mixing to a Wilson chain, and here is the python implementation. Once the Hamiltonian is in linear chain form, one can begin the iterative process. First the isolated impurity is considered, which will have some characteristic set of energy levels. One then considers adding the first conduction band orbital to the chain. This causes a splitting in the energy levels for the isolated impurity. One then considers the effect of adding further orbitals along the chain, doing which splits the hitherto derived energy levels further. Because the couplings decrease along the chain, the successive splittings caused by adding orbitals to the chain decrease. When a particular number of orbitals have been added to the chain, we have a set of energy levels for that finite chain. This is obviously not the true set of energy levels for the infinite chain, but it is a good approximation to the true set in the temperature range where: the further splittings caused by adding more orbitals is negligible, and we have enough orbitals in the chain to account for splittings which are relevant in this temperature range. The results of this is that the results derived for a chain of any particular length are valid only in a particular temperature range, a range which moves to lower temperatures as the chain length increases. This means that by considering the results at many different chain lengths, one can build up a picture of the behavior of the system over a wide temperature range. The Hamiltonian for a linear chain of finite length is an example of an effective Hamiltonian. It is not the full Hamiltonian of the infinite linear chain system, but in a certain temperature range it gives similar results to the full Hamiltonian. References Renormalization group
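The logarithmic discretization and iterative diagonalization described above can be caricatured for a non-interacting Wilson chain, where no Fock-space truncation or impurity interaction is needed. The sketch below only illustrates two ingredients: hoppings that fall off roughly as Λ^(−n/2), and rescaled low-energy spectra that settle toward fixed values (alternating between even and odd iterations) as sites are added. The value of Λ, the chain length and the flat-band hoppings are illustrative choices; this is not a full NRG implementation.

```python
import numpy as np

LAMBDA = 2.5      # logarithmic discretization parameter (illustrative)
N_SITES = 20

# Wilson-chain hoppings for a featureless (flat) band: t_n ~ Lambda**(-n/2).
t = np.array([LAMBDA ** (-n / 2.0) for n in range(N_SITES - 1)])

h = np.zeros((0, 0))
for n in range(N_SITES):
    # Enlarge the single-particle chain Hamiltonian by one site.
    new = np.zeros((n + 1, n + 1))
    new[:n, :n] = h
    if n > 0:
        new[n - 1, n] = new[n, n - 1] = t[n - 1]
    h = new

    # Rescale (up to a conventional constant factor) so successive iterations
    # can be compared; a full many-body NRG would also truncate the spectrum here.
    scaled = np.linalg.eigvalsh(h) * LAMBDA ** (n / 2.0)
    lowest = np.round(np.sort(np.abs(scaled))[:4], 3)
    print(f"iteration {n:2d}: lowest rescaled levels {lowest}")
```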
Numerical renormalization group
[ "Physics", "Materials_science" ]
1,138
[ "Physical phenomena", "Materials science stubs", "Statistical mechanics", "Critical phenomena", "Quantum mechanics", "Renormalization group", "Condensed matter physics", "Condensed matter stubs", "Quantum physics stubs" ]
19,780,855
https://en.wikipedia.org/wiki/Micro-X-ray%20fluorescence
Micro x-ray fluorescence (μXRF) is an elemental analysis technique that relies on the same principles as x-ray fluorescence (XRF). Synchrotron X-rays may be used to provide elemental imaging with biological samples. The spatial resolution diameter of micro x-ray fluorescence is many orders of magnitude smaller than that of conventional XRF. While a smaller excitation spot can be achieved by restricting x-ray beam using a pinhole aperture, this method blocks much of the x-ray flux which has an adverse effect on the sensitivity of trace elemental analysis. Two types of x-ray optics, polycapillary and doubly curved crystal focusing optics, are able to create small focal spots of just a few micrometers in diameter. By using x-ray optics, the irradiation of the focal spot is much more intense and allows for enhanced trace element analysis and better resolution of small features. Micro x-ray fluorescence using x-ray optics has been used in applications such as forensics, small feature evaluations, elemental mapping, mineralogy, electronics, multi-layered coating analysis, micro-contamination detection, film and plating thickness, biology and environment. Application in forensic science Micro x-ray fluorescence is among the newest technologies used to detect fingerprints. It is a new visualization technique which rapidly reveals the elemental composition of a sample by irradiating it with a thin beam of X-rays without disturbing the sample. It was discovered recently by scientists at the Los Alamos National Laboratory. The newly discovered technique was then first revealed at the 229th national meeting of the American Chemical Society (March, 2005). This new discovery could prove to be very beneficial to the law enforcement world, because it is expected that μXRF will be able to detect the most complex molecules in fingerprints. Michael Bernstein of the American Chemical Society describes how the process works "Salts such as sodium chloride and potassium chloride excreted in sweat are sometimes present in detectable quantities in fingerprints. Using μXRF, the researchers showed that they could detect the sodium, potassium and chlorine from such salts. And since these salts are deposited along the patterns present in a fingerprint, an image of the fingerprint can be visualized producing an elemental image for analysis." This basically means that we can “see” a fingerprint because the salts are deposited mainly along the patterns present in a fingerprint. Since μXRF technology uses X-ray technology to detect fingerprints, instead of traditional techniques, the image comes out much clearer. Traditional fingerprints are performed by a technique using powders, liquids or vapors to add color to the fingerprint so it can be distinguished. But sometimes this process may alter the fingerprint or may not be able to detect some of the more complex molecules. Another μXRF application in forensics is GSR (gunshot residue) determination. Some specific elements, as antimony, barium and lead, can be identified on a cotton passed on the hands and clothes of the suspect of using a gun. References Scientific techniques Chemistry X-rays
Micro-X-ray fluorescence
[ "Physics" ]
630
[ "X-rays", "Spectrum (physical sciences)", "Electromagnetic spectrum" ]
19,783,072
https://en.wikipedia.org/wiki/Antiplane%20shear
Antiplane shear or antiplane strain is a special state of strain in a body. This state of strain is achieved when the displacements in the body are zero in the plane of interest but nonzero in the direction perpendicular to the plane. For small strains, the strain tensor under antiplane shear can be written as ε = [[0, 0, ε13], [0, 0, ε23], [ε13, ε23, 0]], where the 1–2 plane (x1–x2) is the plane of interest and the 3 direction (x3) is perpendicular to that plane. Displacements The displacement field that leads to a state of antiplane shear is (in rectangular Cartesian coordinates) u1 = u2 = 0 and u3 = u3(x1, x2), where u1, u2, u3 are the displacements in the x1, x2, x3 directions. The corresponding nonzero strains are ε13 = (1/2) ∂u3/∂x1 and ε23 = (1/2) ∂u3/∂x2. Stresses For an isotropic, linear elastic material, the stress tensor that results from a state of antiplane shear can be expressed as σ = [[0, 0, σ13], [0, 0, σ23], [σ13, σ23, 0]], with σ13 = 2μ ε13 = μ ∂u3/∂x1 and σ23 = 2μ ε23 = μ ∂u3/∂x2, where μ is the shear modulus of the material. Equilibrium equation for antiplane shear The conservation of linear momentum in the absence of inertial forces takes the form of the equilibrium equation. For general states of stress there are three equilibrium equations. However, for antiplane shear, with the assumption that body forces in the 1 and 2 directions are 0, these reduce to one equilibrium equation which is expressed as μ (∂²u3/∂x1² + ∂²u3/∂x2²) + b3 = 0, where b3 is the body force in the x3 direction and ∂²u3/∂x1² + ∂²u3/∂x2² is the two-dimensional Laplacian of u3. Note that this equation is valid only for infinitesimal strains. Applications The antiplane shear assumption is used to determine the stresses and displacements due to a screw dislocation. References See also Infinitesimal strain theory Deformation (mechanics) Elasticity (physics) Solid mechanics
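With zero body force, the reduced equilibrium equation above is just Laplace's equation for the out-of-plane displacement, so it can be solved with a few lines of finite differences. Everything below — the grid, the shear modulus value and the prescribed boundary displacement — is an illustrative assumption rather than part of the original article.

```python
import numpy as np

# Antiplane shear with zero body force: mu * (d2u3/dx1^2 + d2u3/dx2^2) = 0,
# i.e. Laplace's equation for u3(x1, x2). Solve on a unit square by Jacobi iteration.
mu = 26e9            # shear modulus in Pa (illustrative, roughly aluminium)
n = 41               # grid points per side
u3 = np.zeros((n, n))
u3[:, -1] = 1e-6     # prescribed out-of-plane displacement on one edge (m)

for _ in range(5000):
    u3[1:-1, 1:-1] = 0.25 * (u3[2:, 1:-1] + u3[:-2, 1:-1]
                             + u3[1:-1, 2:] + u3[1:-1, :-2])

# Recover the only nonzero stresses: sigma_13 = mu * du3/dx1, sigma_23 = mu * du3/dx2.
dx = 1.0 / (n - 1)
sigma13 = mu * np.gradient(u3, dx, axis=0)
sigma23 = mu * np.gradient(u3, dx, axis=1)
print("max |sigma_13|:", abs(sigma13).max(), "Pa")
```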
Antiplane shear
[ "Physics", "Materials_science" ]
286
[ "Solid mechanics", "Physical phenomena", "Elasticity (physics)", "Deformation (mechanics)", "Mechanics", "Physical properties" ]
485,424
https://en.wikipedia.org/wiki/Householder%20transformation
In linear algebra, a Householder transformation (also known as a Householder reflection or elementary reflector) is a linear transformation that describes a reflection about a plane or hyperplane containing the origin. The Householder transformation was used in a 1958 paper by Alston Scott Householder. Its analogue over general inner product spaces is the Householder operator. Definition Transformation The reflection hyperplane can be defined by its normal vector, a unit vector (a vector with length ) that is orthogonal to the hyperplane. The reflection of a point about this hyperplane is the linear transformation: where is given as a column unit vector with conjugate transpose . Householder matrix The matrix constructed from this transformation can be expressed in terms of an outer product as: is known as the Householder matrix, where is the identity matrix. Properties The Householder matrix has the following properties: it is Hermitian: , it is unitary: (via the Sherman-Morrison formula), hence it is involutory: . A Householder matrix has eigenvalues . To see this, notice that if is orthogonal to the vector which was used to create the reflector, then , i.e., is an eigenvalue of multiplicity , since there are independent vectors orthogonal to . Also, notice (since is by definition a unit vector), and so is an eigenvalue with multiplicity . The determinant of a Householder reflector is , since the determinant of a matrix is the product of its eigenvalues, in this case one of which is with the remainder being (as in the previous point), or via the Matrix determinant lemma. Applications Geometric optics In geometric optics, specular reflection can be expressed in terms of the Householder matrix (see ). Numerical linear algebra Householder transformations are widely used in numerical linear algebra, for example, to annihilate the entries below the main diagonal of a matrix, to perform QR decompositions and in the first step of the QR algorithm. They are also widely used for transforming to a Hessenberg form. For symmetric or Hermitian matrices, the symmetry can be preserved, resulting in tridiagonalization. QR decomposition Householder transformations can be used to calculate a QR decomposition. Consider a matrix tridiangularized up to column , then our goal is to construct such Householder matrices that act upon the principal submatrices of a given matrix via the matrix . (note that we already established before that Householder transformations are unitary matrices, and since the multiplication of unitary matrices is itself a unitary matrix, this gives us the unitary matrix of the QR decomposition) If we can find a so that we could accomplish this. Thinking geometrically speaking, we are looking for a plane so that the reflection of about the plane happens to land directly on the basis vector. In other words, for some constant . However, this also shows that . And since is a unit vector, this means that Now if we apply equation () back into equation (). Or, in other words, by comparing the scalars in front of the vector we must have . Or Which means that we can solve for as This completes the construction; however, in practice we want to avoid catastrophic cancellation in equation (). To do so, we choose the sign of so that Tridiagonalization This procedure is presented in Numerical Analysis by Burden and Faires. It uses a slightly altered function with . 
Tridiagonalization
This procedure is presented in Numerical Analysis by Burden and Faires. It uses a slightly altered sign function with $\operatorname{sgn}(0) = 1$.
In the first step, to form the Householder matrix we need to determine $\alpha$ and $r$, which are:
$$\alpha = -\operatorname{sgn}\!\left(a_{21}\right)\sqrt{\sum_{j=2}^{n} a_{j1}^2},$$
$$r = \sqrt{\tfrac{1}{2}\left(\alpha^2 - a_{21}\alpha\right)}.$$
From $\alpha$ and $r$, construct vector $v$:
$$v^{(1)} = \begin{pmatrix} v_1 & v_2 & \cdots & v_n \end{pmatrix}^T,$$
where $v_1 = 0$, $v_2 = \dfrac{a_{21} - \alpha}{2r}$, and $v_k = \dfrac{a_{k1}}{2r}$ for each $k = 3, 4, \ldots, n$.
Then compute:
$$P^1 = I - 2v^{(1)}\left(v^{(1)}\right)^T,$$
$$A^{(2)} = P^1 A^{(1)} P^1.$$
Having found $P^1$ and computed $A^{(2)}$, the process is repeated for $k = 2, 3, \ldots, n - 2$ as follows:
$$\alpha = -\operatorname{sgn}\!\left(a^{(k)}_{k+1,\,k}\right)\sqrt{\sum_{j=k+1}^{n}\left(a^{(k)}_{j,\,k}\right)^2},$$
$$r = \sqrt{\tfrac{1}{2}\left(\alpha^2 - a^{(k)}_{k+1,\,k}\,\alpha\right)},$$
$$v^{(k)}_1 = v^{(k)}_2 = \cdots = v^{(k)}_k = 0,$$
$$v^{(k)}_{k+1} = \frac{a^{(k)}_{k+1,\,k} - \alpha}{2r},$$
$$v^{(k)}_j = \frac{a^{(k)}_{j,\,k}}{2r} \quad\text{for } j = k + 2,\, k + 3,\, \ldots,\, n,$$
$$P^k = I - 2v^{(k)}\left(v^{(k)}\right)^T,$$
$$A^{(k+1)} = P^k A^{(k)} P^k.$$
Continuing in this manner, the tridiagonal and symmetric matrix is formed.

Examples
In this example, also from Burden and Faires, a given $4 \times 4$ symmetric matrix is transformed to the similar tridiagonal matrix $A^{(3)}$ by using the Householder method. Following the steps above, the first Householder matrix $P^1$ is built and used to form $A^{(2)}$; a second reflector $P^2$ then produces $A^{(3)}$. As we can see, the final result is a tridiagonal symmetric matrix which is similar to the original one. The process is finished after two steps. (A minimal numerical sketch of this procedure is given at the end of the article.)

Quantum computation
As unitary matrices are useful in quantum computation, and Householder transformations are unitary, they are very useful in quantum computing. One of the central algorithms where they're useful is Grover's algorithm, in which we are trying to solve for a representation of an oracle function, represented by what turns out to be a Householder transformation:
$$U_\omega = I - 2|\omega\rangle\langle\omega|$$
(here $|\omega\rangle$ is part of the bra–ket notation and is analogous to the vector $v$ we were using previously). This is done via an algorithm that iterates the oracle function $U_\omega$ and another operator $U_s$, known as the Grover diffusion operator, defined by
$$|s\rangle = \frac{1}{\sqrt{N}}\sum_{x=0}^{N-1}|x\rangle \quad\text{and}\quad U_s = 2|s\rangle\langle s| - I.$$

Computational and theoretical relationship to other unitary transformations
The Householder transformation is a reflection about a hyperplane with unit normal vector $v$, as stated earlier. An $N$-by-$N$ unitary transformation $U$ satisfies $UU^* = I$. Taking the determinant ($N$-th power of the geometric mean) and trace (proportional to the arithmetic mean) of a unitary matrix reveals that its eigenvalues $\lambda_j$ have unit modulus. This can be seen directly and swiftly:
$$\frac{\operatorname{Trace}\left(UU^*\right)}{N} = \frac{\sum_{j=1}^{N}\left|\lambda_j\right|^2}{N} = 1, \qquad \operatorname{det}\left(UU^*\right) = \prod_{j=1}^{N}\left|\lambda_j\right|^2 = 1.$$
Since the arithmetic and geometric means are equal only if the variables are constant (see inequality of arithmetic and geometric means), we establish the claim of unit modulus.
For the case of real-valued unitary matrices we obtain orthogonal matrices, $UU^T = I$. It follows rather readily (see orthogonal matrix) that any orthogonal matrix can be decomposed into a product of 2-by-2 rotations, called Givens rotations, and Householder reflections. This is appealing intuitively since multiplication of a vector by an orthogonal matrix preserves the length of that vector, and rotations and reflections exhaust the set of (real-valued) geometric operations that render a vector's length invariant.
The Householder transformation was shown to have a one-to-one relationship with the canonical coset decomposition of unitary matrices defined in group theory, which can be used to parametrize unitary operators in a very efficient manner.
Finally we note that a single Householder transform, unlike a solitary Givens transform, can act on all columns of a matrix, and as such exhibits the lowest computational cost for QR decomposition and tridiagonalization. The penalty for this "computational optimality" is, of course, that Householder operations cannot be as deeply or as efficiently parallelized. As such, Householder is preferred for dense matrices on sequential machines, whilst Givens is preferred for sparse matrices and/or parallel machines.

See also
Block reflector
Givens rotation
Jacobi rotation

Notes

References
(Herein the Householder transformation is cited as a top 10 algorithm of this century)

Transformation (function) Matrices Numerical linear algebra
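As a companion illustration of the tridiagonalization procedure described above, here is a minimal NumPy sketch following the Burden–Faires style construction of $\alpha$, $r$ and $v$ at each step. The function name householder_tridiagonalize and the random test matrix are illustrative assumptions, not taken from the cited textbook; forming the full matrix $P$ explicitly is wasteful compared with applying the reflector implicitly, but it mirrors the formulas as written.

```python
import numpy as np

def householder_tridiagonalize(A):
    """Reduce a real symmetric matrix to tridiagonal form using a
    Burden-Faires style Householder construction (alpha, r, v) per step."""
    A = np.array(A, dtype=float)
    n = A.shape[0]
    for k in range(n - 2):
        x = A[k + 1:, k]
        sgn = 1.0 if x[0] >= 0.0 else -1.0     # sgn(0) = 1, as in the text
        alpha = -sgn * np.linalg.norm(x)
        if alpha == 0.0:                       # nothing to annihilate in this column
            continue
        r = np.sqrt(0.5 * (alpha * alpha - x[0] * alpha))
        v = np.zeros(n)
        v[k + 1] = (x[0] - alpha) / (2.0 * r)
        v[k + 2:] = A[k + 2:, k] / (2.0 * r)
        P = np.eye(n) - 2.0 * np.outer(v, v)   # Householder matrix; ||v|| = 1 by construction
        A = P @ A @ P                          # similarity transformation preserves eigenvalues
    return A

# Usage: tridiagonalize a random symmetric matrix and confirm the spectrum is unchanged.
rng = np.random.default_rng(0)
M = rng.standard_normal((5, 5))
S = M + M.T
T = householder_tridiagonalize(S)
print(np.allclose(np.linalg.eigvalsh(S), np.linalg.eigvalsh(T)))  # True
print(np.round(T, 3))  # entries outside the three central diagonals are ~0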
Householder transformation
[ "Mathematics" ]
1,340
[ "Matrices (mathematics)", "Mathematical objects", "Geometry", "Transformation (function)" ]