| id | url | text | source | categories | token_count | subcategories |
|---|---|---|---|---|---|---|
3,533,912 | https://en.wikipedia.org/wiki/Glufosinate | Glufosinate (also known as phosphinothricin and often sold as an ammonium salt) is a naturally occurring broad-spectrum herbicide produced by several species of Streptomyces soil bacteria. Glufosinate is a non-selective, contact herbicide, with some systemic action. Plants may also metabolize bialaphos and phosalacine, other naturally occurring herbicides, directly into glufosinate. The compound irreversibly inhibits glutamine synthetase, an enzyme necessary for the production of glutamine and for ammonia detoxification, giving it antibacterial, antifungal and herbicidal properties. Application of glufosinate to plants leads to reduced glutamine and elevated ammonia levels in tissues, halting photosynthesis and resulting in plant death.
Discovery
In the 1960s and early 1970s, scientists at University of Tübingen and at the Meiji Seika Kaisha Company independently discovered that species of Streptomyces bacteria produce a tripeptide they called bialaphos that inhibits bacteria; it consists of two alanine residues and a unique amino acid that is an analog of glutamate that they named "phosphinothricin". They determined that phosphinothricin irreversibly inhibits glutamine synthetase. Phosphinothricin was first synthesized by scientists at Hoechst in the 1970s as a racemic mixture; this racemic mixture is called glufosinate and is the commercially relevant version of the chemical.
In the late 1980s scientists discovered enzymes in these Streptomyces species that selectively inactivate free phosphinothricin; the gene encoding the enzyme that was isolated from Streptomyces hygroscopicus was called the "bialaphos resistance" or "bar" gene, and the gene encoding the enzyme in Streptomyces viridochromogenes was called "phosphinothricin acetyltransferase" or "pat". The two genes and their proteins have 80% homology on the DNA level and 86% amino acid homology, and are each 158 amino acids long.
Use
Glufosinate is a broad-spectrum herbicide that is used to control important weeds such as morning glories, hemp sesbania (Sesbania bispinosa), Pennsylvania smartweed (Polygonum pensylvanicum) and yellow nutsedge, in a manner similar to glyphosate. It is applied to young plants during early development for full effectiveness. It is sold in formulations under brands including Basta, Rely, Finale, Challenge and Liberty.
Glufosinate is typically used as an herbicide in situations such as:
directed sprays for weed control, including in genetically modified crops
use as a crop desiccant to facilitate harvesting
Glufosinate has also been shown to provide some protection against various plant diseases, as it also kills fungi and bacteria on contact.
Genetically modified crops
Genetically modified crops resistant to glufosinate were created by genetically engineering the bar or pat genes from Streptomyces species into the relevant crop seeds. In 1995 the first glufosinate-resistant crop, canola, was brought to market, and it was followed by corn in 1997, cotton in 2004, and soybeans in 2011.
Mode of action
Phosphinothricin is a glutamine synthetase inhibitor that binds to the glutamate site. Glufosinate-treated plants die due to a buildup of ammonia in the thylakoid lumen, leading to the uncoupling of photophosphorylation. The uncoupling of photophosphorylation causes the production of reactive oxygen species, lipid peroxidation, and membrane destruction.
Elevated levels of ammonia are detectable within one hour after application of phosphinothricin.
Toxicity
Exposure to humans in foods
As glufosinate is often used as a pre-harvest desiccant, it can be found in foods that humans ingest, including potatoes, peas, beans, corn, wheat, and barley. In addition, the chemical can be passed to humans through animals fed contaminated straw. Flour processed from wheat grain that contained traces of glufosinate was found to retain 10–100% of the chemical's residues.
The herbicide is also persistent; it has been found in spinach, radishes, wheat and carrots planted 120 days after treatment with the herbicide. Its persistence is also reflected in its half-life, which varies from 3 to 70 days depending on soil type and organic matter content. Residues can remain in frozen food for up to two years, and the chemical is not easily destroyed by cooking the food in boiling water. The EPA classifies the chemical as 'persistent' and 'mobile' based on its lack of degradation and ease of transport through soil.
A study reported the presence of circulating pesticides associated with genetically modified foods (PAGMF) in pregnant and non-pregnant women, paving the way for a new field in reproductive toxicology including nutrition and utero-placental toxicities.
Exposure limits
There are no exposure limits established by the Occupational Safety & Health Administration or the American Conference of Governmental Industrial Hygienists. The WHO/FAO recommended acceptable daily intake (ADI) for glufosinate is 0.02 mg/kg of body weight. The European Food Safety Authority has set an ADI of 0.021 mg/kg. The acute reference dose (ARfD) for women of child-bearing age is 0.021 mg/kg.
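To make the per-kilogram figures concrete, the small sketch below converts an ADI into a whole-body daily amount; the 60 kg body weight is an illustrative assumption, not a value used by any regulator.

```python
# Convert a per-kilogram acceptable daily intake (ADI) into a whole-body daily
# limit. The 0.021 mg/kg value is the EFSA ADI quoted above; the 60 kg body
# weight is an illustrative assumption.
ADI_MG_PER_KG = 0.021   # mg of glufosinate per kg body weight per day
BODY_WEIGHT_KG = 60.0   # assumed adult body weight

daily_limit_mg = ADI_MG_PER_KG * BODY_WEIGHT_KG
print(f"Allowable intake for a {BODY_WEIGHT_KG:.0f} kg adult: {daily_limit_mg:.2f} mg/day")
# -> Allowable intake for a 60 kg adult: 1.26 mg/day
```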
Regulation
Glufosinate is registered with the United States Environmental Protection Agency (EPA) and is also registered in California. It is not banned in the United States and is not listed under the prior informed consent (PIC) procedure.
Glufosinate is not approved for use as an herbicide in Europe; it was last reviewed in 2007 and its registration expired in 2018. It was withdrawn from the French market on October 24, 2017 by the Agence nationale de sécurité sanitaire de l'alimentation, de l'environnement et du travail due to its classification as toxic to reproduction (category 1B).
References
External links
BASF's site of LibertyLink crops
Herbicides
Phosphinic acids
Ammonium compounds
Alpha-Amino acids
Glutamine synthetase inhibitors
Eukaryotic selection compounds | Glufosinate | [
"Chemistry",
"Biology"
] | 1,377 | [
"Herbicides",
"Biocides",
"Ammonium compounds",
"Salts"
] |
3,536,654 | https://en.wikipedia.org/wiki/Suspension%20%28mechanics%29 | In mechanics, suspension is a system of components allowing a machine (normally a vehicle) to move smoothly with reduced shock.
Types may include:
Car suspension, four-wheeled motor vehicle suspension
Motorcycle suspension, two-wheeled motor vehicle suspension
Motorcycle fork, a component of the motorcycle suspension system
Bicycle suspension
Related concepts include:
Shock absorber
Shock mount
Vibration isolation
Magnetic suspension
Electrodynamic suspension
Electromagnetic suspension
See also
Cardan suspension
Seismic base isolation
Mechanics | Suspension (mechanics) | [
"Physics",
"Engineering"
] | 88 | [
"Mechanics",
"Mechanical engineering"
] |
1,244,292 | https://en.wikipedia.org/wiki/Electromagnetically%20induced%20transparency | Electromagnetically induced transparency (EIT) is a coherent optical nonlinearity which renders a medium transparent within a narrow spectral range around an absorption line. Extreme dispersion is also created within this transparency "window" which leads to "slow light", described below. It is in essence a quantum interference effect that permits the propagation of light through an otherwise opaque atomic medium.
Observation of EIT involves two optical fields (highly coherent light sources, such as lasers) which are tuned to interact with three quantum states of a material. The "probe" field is tuned near resonance between two of the states and measures the absorption spectrum of the transition. A much stronger "coupling" field is tuned near resonance at a different transition. If the states are selected properly, the presence of the coupling field will create a spectral "window" of transparency which will be detected by the probe. The coupling laser is sometimes referred to as the "control" or "pump", the latter in analogy to incoherent optical nonlinearities such as spectral hole burning or saturation.
EIT is based on the destructive interference of the transition probability amplitude between atomic states. Closely related to EIT are coherent population trapping (CPT) phenomena.
The quantum interference in EIT can be exploited to laser cool atomic particles, even down to the quantum mechanical ground state of motion. This was used in 2015 to directly image individual atoms trapped in an optical lattice.
Medium requirements
There are specific restrictions on the configuration of the three states. Two of the three possible transitions between the states must be "dipole allowed", i.e. the transitions can be induced by an oscillating electric field. The third transition must be "dipole forbidden." One of the three states is connected to the other two by the two optical fields. The three types of EIT schemes are differentiated by the energy differences between this state and the other two. The schemes are the ladder, vee, and lambda. Any real material system may contain many triplets of states which could theoretically support EIT, but there are several practical limitations on which levels can actually be used.
Also important are the dephasing rates of the individual states. In any real system at non-zero temperature there are processes which cause a scrambling of the phase of the quantum states. In the gas phase, this usually means collisions. In solids, dephasing is due to interaction of the electronic states with the host lattice. The dephasing of the state that is not dipole-coupled to the other two is especially important; ideally it should be a robust, metastable state.
Currently EIT research uses atomic systems in dilute gases, solid solutions, or more exotic states such as Bose–Einstein condensate. EIT has been demonstrated in electromechanical and optomechanical systems, where it is known as optomechanically induced transparency. Work is also being done in semiconductor nanostructures such as quantum wells, quantum wires and quantum dots.
Theory
EIT was first proposed theoretically by professor Jakob Khanin and graduate student Olga Kocharovskaya at Gorky State University (Gorky was renamed Nizhny Novgorod in 1990), Russia; there are now several different approaches to a theoretical treatment of EIT. One approach is to extend the density matrix treatment used to derive Rabi oscillation of a two-state, single-field system. In this picture the probability amplitude for the system to transfer between states can interfere destructively, preventing absorption. In this context, "interference" refers to interference between quantum events (transitions) and not optical interference of any kind. As a specific example, consider the lambda scheme, in which two lower states |1⟩ and |2⟩ are each coupled by one of the optical fields to a common excited state |3⟩. Absorption of the probe is defined by the transition from |1⟩ to |3⟩. The fields can drive population from |1⟩ to |3⟩ directly, or along the indirect path |1⟩-|3⟩-|2⟩-|3⟩. The probability amplitudes for the different paths interfere destructively. If |2⟩ has a comparatively long lifetime, then the result will be a transparent window completely inside of the |1⟩-|3⟩ absorption line.
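This destructive interference can be made quantitative with the standard steady-state susceptibility seen by a weak probe in a lambda system driven by a resonant coupling field. The sketch below is a generic textbook-style illustration, not a model of any particular experiment; the decay rates and Rabi frequency are arbitrary illustrative values, and the susceptibility is left in arbitrary units.

```python
import numpy as np

# Steady-state, weak-probe susceptibility of a three-level lambda system
# (probe on the |1>-|3> transition, coupling field resonant with |2>-|3>).
# Rates are arbitrary illustrative values; chi is in arbitrary units.
gamma_31 = 1.0      # decay rate of the optical coherence |1>-|3>
gamma_21 = 1e-3     # decay rate of the ground-state coherence |1>-|2>

def chi(delta_p, omega_c):
    """Probe susceptibility vs. probe detuning delta_p, coupling Rabi frequency omega_c."""
    return 1j * (gamma_21 - 1j * delta_p) / (
        (gamma_31 - 1j * delta_p) * (gamma_21 - 1j * delta_p) + omega_c**2 / 4
    )

detunings = np.linspace(-5, 5, 2001)
no_coupling = np.imag(chi(detunings, 0.0))    # ordinary Lorentzian absorption line
with_coupling = np.imag(chi(detunings, 1.0))  # EIT: narrow dip at line centre

centre = np.argmin(np.abs(detunings))
print("absorption at line centre, coupling off:", no_coupling[centre])
print("absorption at line centre, coupling on :", with_coupling[centre])
# With the coupling field on, absorption at line centre collapses (the transparency
# window), while absorption peaks remain near delta_p = +/- omega_c/2
# (the Autler-Townes doublet).
```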
Another approach is the "dressed state" picture, wherein the system + coupling field Hamiltonian is diagonalized and the effect on the probe is calculated in the new basis. In this picture EIT resembles a combination of Autler-Townes splitting and Fano interference between the dressed states. Between the doublet peaks, in the center of the transparency window, the quantum probability amplitudes for the probe to cause a transition to either state cancel.
A polariton picture is particularly important in describing stopped light schemes. Here, the photons of the probe are coherently "transformed" into "dark state polaritons" which are excitations of the medium. These excitations exist (or can be "stored") for a length of time dependent only on the dephasing rates.
Slow light and stopped light
EIT is only one of many diverse mechanisms which can produce slow light. The Kramers–Kronig relations dictate that a change in absorption (or gain) over a narrow spectral range must be accompanied by a change in refractive index over a similarly narrow region. This rapid and positive change in refractive index produces an extremely low group velocity. The first experimental observation of the low group velocity produced by EIT was by Boller, İmamoğlu, and Harris at Stanford University in 1991 in strontium. In 1999 Lene Hau reported slowing light in a medium of ultracold sodium atoms, achieving this by using quantum interference effects responsible for electromagnetically induced transparency (EIT). Her group performed copious research regarding EIT with Stephen E. Harris. "Using detailed numerical simulations, and analytical theory, we study properties of micro-cavities which incorporate materials that exhibit Electro-magnetically Induced Transparency (EIT) or Ultra Slow Light (USL). We find that such systems, while being miniature in size (order wavelength), and integrable, can have some outstanding properties. In particular, they could have lifetimes orders of magnitude longer than other existing systems, and could exhibit non-linear all-optical switching at single photon power levels. Potential applications include miniature atomic clocks, and all-optical quantum information processing." The current record for slow light in an EIT medium is held by Budker, Kimball, Rochester, and Yashchuk at U.C. Berkeley in 1999. Group velocities as low as 8 m/s were measured in a warm thermal rubidium vapor.
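Because the refractive index varies steeply across the narrow transparency window, the group index, and hence the group velocity, can be estimated from the slope of the dispersion. The sketch below continues the toy lambda-system model above with purely illustrative parameters (the peak susceptibility of 10^-3, the rates and the optical frequency are assumptions, not values from the experiments cited here); it is meant only to show how a steep, narrow window translates into group velocities far below c.

```python
import numpy as np

# Toy estimate of the EIT group velocity from the steepness of the dispersion
# inside the transparency window. All numbers are illustrative assumptions.
c = 3.0e8                      # speed of light, m/s
omega_p = 2 * np.pi * 3.8e14   # optical probe frequency, rad/s (alkali D-line scale)
gamma_31 = 2 * np.pi * 3e6     # optical coherence decay rate, rad/s
gamma_21 = 2 * np.pi * 1e3     # ground-state coherence decay rate, rad/s
omega_c = 2 * np.pi * 2e6      # coupling Rabi frequency, rad/s
chi0 = 1e-3                    # assumed peak resonant susceptibility, coupling off

def chi(delta):
    """Weak-probe susceptibility, normalized so Im(chi) = chi0 at line centre
    when the coupling field is off."""
    return chi0 * gamma_31 * 1j * (gamma_21 - 1j * delta) / (
        (gamma_31 - 1j * delta) * (gamma_21 - 1j * delta) + omega_c**2 / 4
    )

# Numerical slope of the refractive index n ~ 1 + Re(chi)/2 at line centre.
d = 2 * np.pi * 1e3
dn_domega = (np.real(chi(d)) - np.real(chi(-d))) / (2 * d) / 2
n_group = 1 + omega_p * dn_domega
print(f"group index    ~ {n_group:.3g}")
print(f"group velocity ~ {c / n_group:.3g} m/s")  # hundreds of m/s for these numbers
```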
Stopped light, in the context of an EIT medium, refers to the coherent transfer of photons to the quantum system and back again. In principle, this involves switching off the coupling beam in an adiabatic fashion while the probe pulse is still inside of the EIT medium. There is experimental evidence of trapped pulses in EIT medium. Authors created a stationary light pulse inside the atomic coherent media. In 2009 researchers from Harvard University and MIT demonstrated a few-photon optical switch for quantum optics based on the slow light ideas. Lene Hau and a team from Harvard University were the first to demonstrate stopped light.
EIT cooling
EIT has been used to laser cool long strings of atoms to their motional ground state in an ion trap. To illustrate the cooling technique, consider a three-level atom with a ground state |g⟩, an excited state |e⟩, and a stable or metastable state |m⟩ that lies between them in energy. The excited state is dipole coupled to both |g⟩ and |m⟩. An intense "coupling" laser drives the |m⟩→|e⟩ transition at a detuning above resonance. Due to the quantum interference of transition amplitudes, a weaker "cooling" laser driving the |g⟩→|e⟩ transition at a detuning above resonance sees a Fano-like feature in its absorption profile. EIT cooling is realized when the two detunings are equal, such that the carrier transition (which leaves the quantized motional state n of the atom unchanged) lies on the dark resonance of the Fano-like feature. The Rabi frequency of the coupling laser is chosen such that the "red" sideband (which lowers n by one) lies on the narrow maximum of the Fano-like feature. Conversely, the "blue" sideband (which raises n by one) lies in a region of low excitation probability. Due to the large ratio of the excitation probabilities, the cooling limit is lowered in comparison to Doppler or sideband cooling (assuming the same cooling rate).
See also
Atomic coherence
Electromagnetically Induced Grating
References
Primary work
O.Kocharovskaya, Ya.I.Khanin, Sov. Phys. JETP, 63, p945 (1986)
K.J. Boller, A. İmamoğlu, S. E. Harris, Physical Review Letters 66, p2593 (1991)
Eberly, J. H., M. L. Pons, and H. R. Haq, Phys. Rev. Lett. 72, 56 (1994)
D. Budker, D. F. Kimball, S. M. Rochester, and V. V. Yashchuk, Physical Review Letters, 83, p1767 (1999)
Lene Vestergaard Hau, S.E. Harris, Zachary Dutton, Cyrus H. Behroozi, Nature v.397, p594 (1999)
D.F. Phillips, A. Fleischhauer, A. Mair, R.L. Walsworth, M.D. Lukin, Physical Review Letters 86, p783 (2001)
Naomi S. Ginsberg, Sean R. Garner, Lene Vestergaard Hau, Nature 445, 623 (2007)
Review
Harris, Steve (July 1997). "Electromagnetically Induced Transparency". Physics Today, 50 (7), pp. 36–42
Zachary Dutton, Naomi S. Ginsberg, Christopher Slowe, and Lene Vestergaard Hau (2004) The art of taming light: ultra-slow and stopped light. Europhysics News Vol. 35 No. 2
M. Fleischhauer, A. İmamoğlu, and J. P. Marangos (2005), "Electromagnetically induced transparency: Optics in Coherent Media", Reviews Modern Physics, 77, 633
Wave mechanics
Molecular physics
Lasers
Quantum optics | Electromagnetically induced transparency | [
"Physics",
"Chemistry"
] | 2,069 | [
"Physical phenomena",
"Molecular physics",
"Quantum optics",
"Quantum mechanics",
"Classical mechanics",
"Waves",
"Wave mechanics",
" molecular",
"nan",
"Atomic",
" and optical physics"
] |
1,244,992 | https://en.wikipedia.org/wiki/Moment%20problem | In mathematics, a moment problem arises as the result of trying to invert the mapping that takes a measure to the sequence of moments
More generally, one may consider
for an arbitrary sequence of functions .
Introduction
In the classical setting, μ is a measure on the real line, and M_n(x) = x^n. In this form the question appears in probability theory, asking whether there is a probability measure having specified mean, variance and so on, and whether it is unique.
There are three named classical moment problems: the Hamburger moment problem, in which the support of μ is allowed to be the whole real line; the Stieltjes moment problem, for the half-line [0, +∞); and the Hausdorff moment problem for a bounded interval, which without loss of generality may be taken as [0, 1].
The moment problem also extends to complex analysis as the trigonometric moment problem, in which the Hankel matrices are replaced by Toeplitz matrices and the support of μ is the complex unit circle instead of the real line.
Existence
A sequence of numbers m_n is the sequence of moments of a measure μ if and only if a certain positivity condition is fulfilled; namely, the Hankel matrices
H_n = (m_{i+j})_{0 ≤ i,j ≤ n}
should be positive semi-definite. This is because a positive semi-definite Hankel matrix corresponds to a linear functional Λ such that Λ(x^{i+j}) = m_{i+j} and Λ(f²) ≥ 0 (non-negative on sums of squares of polynomials). Assume Λ can be extended to ℝ[x]*. In the univariate case, a non-negative polynomial can always be written as a sum of squares, so the linear functional Λ is positive for all the non-negative polynomials in the univariate case. By Haviland's theorem, the linear functional has a measure form, that is Λ(x^n) = ∫ x^n dμ. A condition of similar form is necessary and sufficient for the existence of a measure μ supported on a given interval [a, b].
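A quick numerical illustration of the Hankel condition: the sketch below builds H_n from the moments of the standard normal distribution (m_k = (k − 1)!! for even k, 0 for odd k) and checks that its eigenvalues are non-negative, then shows that tampering with one moment destroys positive semi-definiteness. This is only a finite-order sanity check, not a proof that a measure exists.

```python
import numpy as np

def hankel(moments, n):
    """Hankel matrix H_n = (m_{i+j})_{0<=i,j<=n} built from a moment sequence."""
    return np.array([[moments[i + j] for j in range(n + 1)] for i in range(n + 1)])

def gaussian_moment(k):
    """k-th moment of the standard normal: (k-1)!! for even k, 0 for odd k."""
    if k % 2 == 1:
        return 0.0
    return float(np.prod(np.arange(1, k, 2))) if k > 0 else 1.0

N = 5
moments = [gaussian_moment(k) for k in range(2 * N + 1)]
H = hankel(moments, N)
print("Smallest eigenvalue (Gaussian moments):", np.linalg.eigvalsh(H).min())  # >= 0

# Tamper with one moment: the sequence is no longer a moment sequence,
# and the Hankel matrix acquires a negative eigenvalue.
bad = list(moments)
bad[4] = 0.1          # the true fourth moment is 3
H_bad = hankel(bad, N)
print("Smallest eigenvalue (tampered):", np.linalg.eigvalsh(H_bad).min())      # < 0
```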
One way to prove these results is to consider the linear functional φ that sends a polynomial
P(x) = Σ_k a_k x^k
to
Σ_k a_k m_k.
If the m_k are the moments of some measure μ supported on [a, b], then evidently
φ(P) ≥ 0 for any polynomial P that is non-negative on [a, b].     (1)
Vice versa, if (1) holds, one can apply the M. Riesz extension theorem and extend φ to a functional on the space of continuous functions with compact support C_c([a, b]), so that
φ(f) ≥ 0 for any f ≥ 0.     (2)
By the Riesz representation theorem, (2) holds iff there exists a measure μ supported on [a, b] such that
φ(f) = ∫ f dμ
for every f ∈ C_c([a, b]).
Thus the existence of the measure μ is equivalent to (1). Using a representation theorem for positive polynomials on [a, b], one can reformulate (1) as a condition on Hankel matrices.
Uniqueness (or determinacy)
The uniqueness of μ in the Hausdorff moment problem follows from the Weierstrass approximation theorem, which states that polynomials are dense under the uniform norm in the space of continuous functions on [0, 1]. For the problem on an infinite interval, uniqueness is a more delicate question. There are distributions, such as the log-normal distribution, which have finite moments of every order yet share their entire moment sequence with other, different distributions.
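The log-normal example can be checked numerically using Heyde's classical construction, in which the densities f_a(x) = f(x)(1 + a sin(2π ln x)), |a| ≤ 1, with f the standard log-normal density, all share the moments exp(n²/2) of the log-normal distribution. The sketch below verifies the first few moments by quadrature; the amplitude a = 0.5 is an arbitrary choice.

```python
import numpy as np
from scipy.integrate import quad
from scipy.stats import norm

A = 0.5  # perturbation amplitude, an arbitrary choice in [-1, 1]

def moment(n, a):
    """n-th moment of Heyde's perturbed log-normal density, computed in t = ln x."""
    integrand = lambda t: np.exp(n * t) * norm.pdf(t) * (1.0 + a * np.sin(2 * np.pi * t))
    value, _ = quad(integrand, -np.inf, np.inf)
    return value

for n in range(5):
    # The unperturbed (a = 0) and perturbed (a = A) densities give the same
    # moments, equal to exp(n^2 / 2), the moments of the standard log-normal.
    print(n, moment(n, 0.0), moment(n, A), np.exp(n ** 2 / 2))
```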
Formal solution
When the solution exists, it can be formally written using derivatives of the Dirac delta function as
μ(dx) = ρ(x) dx,  ρ(x) = Σ_{n=0}^∞ ((−1)^n / n!) δ^(n)(x) m_n.
The expression can be derived by taking the inverse Fourier transform of its characteristic function.
Variations
An important variation is the truncated moment problem, which studies the properties of measures with a fixed finite set of first moments m_0, m_1, ..., m_k (for a finite k). Results on the truncated moment problem have numerous applications to extremal problems, optimisation and limit theorems in probability theory.
Probability
The moment problem has applications to probability theory. The following result, the Fréchet–Shohat theorem, is commonly used: if μ is a determinate measure (one that is uniquely determined by its moments) and a sequence of measures has moments converging to those of μ, then the sequence converges weakly to μ.
By checking Carleman's condition, one can show that the standard normal distribution is a determinate measure, and this yields the following moment form of the central limit theorem: if the moments of a sequence of random variables converge to the moments of the standard normal distribution, then the sequence converges in distribution to the standard normal.
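For the Hamburger problem, Carleman's condition states that a moment sequence is determinate if the series Σ_n m_{2n}^{−1/(2n)} diverges. The sketch below evaluates partial sums of this series for the standard normal moments m_{2n} = (2n − 1)!!; the terms decay only like n^{−1/2}, so the partial sums grow without bound, which is consistent with determinacy.

```python
import math

# Carleman's condition for the Hamburger problem: sum_n m_{2n}^{-1/(2n)} = infinity
# implies determinacy. For the standard normal, m_{2n} = (2n-1)!!; work with
# logarithms to avoid overflow for large n.
log_m = 0.0          # running value of log((2n-1)!!)
partial_sum = 0.0
for n in range(1, 201):
    log_m += math.log(2 * n - 1)              # multiply in the next odd factor
    partial_sum += math.exp(-log_m / (2 * n))
    if n in (10, 50, 100, 200):
        print(n, round(partial_sum, 3))
# The n-th term behaves like sqrt(e / (2n)), so the partial sums keep growing
# (roughly like sqrt(n)); the series diverges and the normal law is determinate.
```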
See also
Carleman's condition
Hamburger moment problem
Hankel matrix
Hausdorff moment problem
Moment (mathematics)
Stieltjes moment problem
Trigonometric moment problem
Notes
References
N. I. Akhiezer, The Classical Moment Problem and Some Related Questions in Analysis (translated from the Russian by N. Kemmer)
Mathematical analysis
Hilbert spaces
Probability problems
Moment (mathematics)
Mathematical problems
Real algebraic geometry
Optimization in vector spaces | Moment problem | [
"Physics",
"Mathematics"
] | 779 | [
"Mathematical analysis",
"Moments (mathematics)",
"Physical quantities",
"Quantum mechanics",
"Probability problems",
"Hilbert spaces",
"Mathematical problems",
"Moment (physics)"
] |
1,245,135 | https://en.wikipedia.org/wiki/Argo%20%28oceanography%29 | Argo is an international programme for researching the ocean. It uses profiling floats to observe temperature, salinity and currents. Recently it has observed bio-optical properties in the Earth's oceans. It has been operating since the early 2000s. The real-time data it provides support climate and oceanographic research. A special research interest is to quantify the ocean heat content (OHC). The Argo fleet consists of almost 4000 drifting "Argo floats" (as profiling floats used by the Argo program are often called) deployed worldwide. Each float weighs 20–30 kg. In most cases probes drift at a depth of 1000 metres. Experts call this the parking depth. Every 10 days, by changing their buoyancy, they dive to a depth of 2000 metres and then move to the sea-surface. As they move they measure conductivity and temperature profiles as well as pressure. Scientists calculate salinity and density from these measurements. Seawater density is important in determining large-scale motions in the ocean.
Average current velocities at 1000 metres are directly measured by the distance and direction a float drifts while parked at that depth, which is determined by GPS or Argos system positions at the surface. The data is transmitted to shore via satellite, and is freely available to everyone, without restrictions.
The Argo program is named after the Greek mythical ship Argo to emphasize the complementary relationship of Argo with the Jason satellite altimeters. Both the standard Argo floats and the 4 satellites launched so far to monitor changing sea-level all operate on a 10-day duty cycle.
International collaboration
The Argo program is a collaborative partnership of more than 30 nations from all continents (most shown on the graphic map in this article) that maintains a global array and provides a dataset anyone can use to explore the ocean environment. Argo is a component of the Global Ocean Observing System (GOOS), and is coordinated by the Argo Steering Team, an international body of scientists and technical experts that meets once per year. The Argo data stream is managed by the Argo Data Management Team. Argo is also supported by the Group on Earth Observations, and has been endorsed since its beginnings by the World Climate Research Programme's CLIVAR Project (Variability and predictability of the ocean-atmosphere system), and by the Global Ocean Data Assimilation Experiment (GODAE OceanView).
History
A program called Argo was first proposed at OceanObs 1999 which was a conference organised by international agencies with the aim of creating a coordinated approach to ocean observations. The original Argo prospectus was created by a small group of scientists, chaired by Dean Roemmich, who described a program that would have a global array of about 3000 floats in place by sometime in 2007. The 3000-float array was achieved in November 2007 and was global. The Argo Steering Team met for the first time in 1999 in Maryland (USA) and outlined the principles of global data sharing.
The Argo Steering Team made a 10-year report to OceanObs-2009 and received suggestions on how the array might be improved. These suggestions included enhancing the array at high latitudes, in marginal seas (such as the Gulf of Mexico and the Mediterranean) and along the equator, improved observation of strong boundary currents (such as the Gulf Stream and Kuroshio), extension of observations into deep water and the addition of sensors for monitoring biological and chemical changes in the oceans. In November 2012 an Indian float in the Argo array gathered the one-millionth profile (twice the number collected by research vessels during all of the 20th century), an event that was reported in several press releases. By early 2018 the Bio-Argo program was also expanding rapidly.
Float design and operation
The critical capability of an Argo float is its ability to rise and descend in the ocean on a programmed schedule. The floats do this by changing their effective density. The density of any object is given by its mass divided by its volume. The Argo float keeps its mass constant, but by altering its volume, it changes its density. To do this, mineral oil is forced out of the float's pressure case and expands a rubber bladder at the bottom end of the float. As the bladder expands, the float becomes less dense than seawater and rises to the surface. Upon finishing its tasks at the surface, the float withdraws the oil and descends again.
A handful of companies and organizations manufacture profiling floats used in the Argo program. APEX floats, made by Teledyne Webb Research, are the most common element of the current array. SOLO and SOLO-II floats (the latter use a reciprocating pump for buoyancy changes, unlike screw-driven pistons in other floats) were developed at Scripps Institution of Oceanography. Other types include the NINJA float, made by the Tsurumi Seiki Co. of Japan, and the ARVOR, DEEP-ARVOR & PROVOR floats developed by IFREMER in France, in industrial partnership with French Company nke instrumentation. Most floats use sensors made by Sea-Bird Scientific (https://www.seabird.com/)
The company also makes a profiling float called Navis. A typical Argo float is a cylinder just over 1 metre long and 14 cm across with a hemispherical cap; thus it has a minimum volume of about 16,600 cubic centimetres (cm3). At Ocean Station Papa in the Gulf of Alaska the temperature and salinity at the surface might be about 6°C and 32.55 parts per thousand, giving a density of sea-water of 1.0256 g/cm3. At a depth of 2000 metres (pressure of 2000 decibars) the temperature might be 2°C and the salinity 34.58 parts per thousand. Thus, including the effect of pressure (water is slightly compressible), the density of sea-water is about 1.0369 g/cm3. The change in density divided by the deep density is 0.0109.
The float has to match these densities if it is to reach 2000 metres depth and then rise to the surface. Since the density of the float is its mass divided by volume, it needs to change its volume by 0.0109 × 16,600 = 181 cm3 to drive that excursion; a small amount of that volume change is provided by the compressibility of the float itself, and excess buoyancy is required at the surface in order to keep the antenna above water. All Argo floats carry sensors to measure the temperature and salinity of the ocean as they vary with depth, but an increasing number of floats also carry other sensors, such as for measuring dissolved oxygen and ultimately other variables of biological and chemical interest such as chlorophyll, nutrients and pH. An extension to the Argo project called BioArgo is being developed and, when implemented, will add a biological and chemical component to this method of sampling the oceans.
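The buoyancy arithmetic above can be reproduced directly. The sketch below uses only the representative numbers quoted in the text (the Ocean Station Papa densities and a 16,600 cm³ float) and ignores the float's own compressibility and the surface reserve buoyancy, so it is an order-of-magnitude illustration rather than an engineering calculation.

```python
# Volume change an Argo float must produce to cycle between the surface and
# 2000 m, using the representative densities quoted above for Ocean Station Papa.
FLOAT_VOLUME_CM3 = 16_600      # minimum volume of a typical float
RHO_SURFACE = 1.0256           # g/cm^3, seawater at ~6 degC, 32.55 ppt, surface
RHO_2000M = 1.0369             # g/cm^3, seawater at ~2 degC, 34.58 ppt, 2000 dbar

fractional_density_change = (RHO_2000M - RHO_SURFACE) / RHO_2000M
delta_volume_cm3 = fractional_density_change * FLOAT_VOLUME_CM3

print(f"Fractional density change: {fractional_density_change:.4f}")   # ~0.0109
print(f"Required volume change:    {delta_volume_cm3:.0f} cm^3")       # ~181 cm^3
```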
The antenna for satellite data collection is mounted at the top of the float which extends clear of the sea surface after it completes its ascent. The ocean is saline, hence an electrical conductor, so that radio communications from under the sea surface are not possible. Early in the program Argo floats exclusively used slow mono-directional satellite communications but the majority of floats being deployed in mid-2013 use rapid bi-directional communications. The result of this is that Argo floats now transmit much more data than was previously possible and they spend only about 20 minutes on the sea surface rather than 8–12 hours, greatly reducing problems such as grounding and bio-fouling.
The average life span of Argo floats has increased greatly since the program began, first exceeding 4-year mean lifetime for floats deployed in 2005. Ongoing improvements should result in further extensions to 6 years and longer.
As of June 2014, new types of floats were being tested to collect measurements much deeper than can be reached by standard Argo floats. These "Deep Argo" floats are designed to reach depths of 4000 or 6000 metres, versus 2000 metres for standard floats. This will allow a much greater volume of the ocean to be sampled. Such measurements are important for developing a comprehensive understanding of the ocean, such as trends in heat content.
Array design
The original plan advertised in the Argo prospectus called for a nearest-neighbour distance between floats, on average, of 3° latitude by 3° longitude. This allowed for higher resolution (in kilometres) at high latitudes, both north and south, and was considered necessary because of the decrease in the Rossby radius of deformation which governs the scale of oceanographic features, such as eddies. By 2007 this was largely achieved, but the target resolution has never yet been completely achieved in the deep southern ocean.
Efforts are being made to complete the original plan in all parts of the world oceans but this is difficult in the deep Southern Ocean as deployment opportunities occur only very rarely.
As mentioned in the history section, enhancements are now planned in the equatorial regions of the oceans, in boundary currents and in marginal seas. This requires that the total number of floats be increased from the original plan of 3000 floats to a 4000-float array.
One consequence of the use of profiling floats to sample the ocean is that seasonal bias can be removed. Counting all float profiles acquired each month by Argo south of 30°S, from the start of the program to November 2012, shows no apparent seasonal bias, whereas the equivalent count for all other available data shows a strong annual bias, with four times as many profiles collected in austral summer as in austral winter.
Data access
One of the critical features of the Argo model is that of global and unrestricted access to data in near real-time. When a float transmits a profile it is quickly converted to a format that can be inserted on the Global Telecommunications System (GTS). The GTS is operated by the World Meteorological Organisation, or WMO, specifically for the purpose of sharing data needed for weather forecasting. Thus all nations who are members of the WMO receive all Argo profiles within a few hours of the acquisition of the profile. Data are also made available through ftp and WWW access via two Argo Global Data Centres (or GDACs), one in France and one in the US.
About 90% of all profiles acquired are made available to global access within 24 hours, with the remaining profiles becoming available soon thereafter.
Using data acquired via the GTS or from the Argo Global Data Centres (GDACs) requires some programming skills. The GDACs supply multi-profile files that are a native file format for Ocean Data View. For any day there are files with names like 20121106_prof.nc, called multi-profile files. This example is a file specific to 6 November 2012 and contains all profiles for one ocean basin in a single NetCDF file. The GDACs identify three ocean basins, Atlantic, Indian and Pacific; thus three multi-profile files carry every Argo profile acquired on a given day.
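For readers with some programming experience, a GDAC multi-profile file can be opened with standard NetCDF tooling. The sketch below assumes a file such as 20121106_prof.nc has already been downloaded locally and uses the xarray library; PRES, TEMP and PSAL are the standard Argo variable names, but the exact set of variables present depends on the file.

```python
import xarray as xr

# Open a locally downloaded GDAC multi-profile file (one day, one ocean basin).
ds = xr.open_dataset("20121106_prof.nc")

print(ds.sizes)                    # typically N_PROF (profiles) x N_LEVELS (depths)
print(ds["PLATFORM_NUMBER"][:5])   # WMO identifiers of the first few floats

# Pressure, temperature and salinity of the first profile in the file.
first = ds.isel(N_PROF=0)
for pres, temp, psal in zip(first["PRES"].values,
                            first["TEMP"].values,
                            first["PSAL"].values):
    print(f"{pres:8.1f} dbar  {temp:6.3f} degC  {psal:6.3f} PSU")
```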
A user who wants to explore Argo data but lacks programming skills can download the Argo Global Marine Atlas, an easy-to-use utility that allows the creation of products based on Argo data, such as salinity sections, horizontal maps of ocean properties, and time series at any location. This Atlas also carries an "update" button that allows data to be updated periodically. The Argo Global Marine Atlas is maintained at the Scripps Institution of Oceanography in La Jolla, California.
Argo data can also be displayed in Google Earth with a layer developed by the Argo Technical Coordinator.
Data results
Argo is now the dominant source of information about the climatic state of the oceans and is widely used in published research. Topics addressed include air-sea interaction, ocean currents, interannual variability, El Niño, mesoscale eddies, water mass properties and transformation. Argo is also now permitting direct computations of the global ocean heat content.
Studies of Argo salinity data find that areas of the world with high surface salinity are getting saltier and areas of the world with relatively low surface salinity are getting fresher. This has been described as 'the rich get richer and the poor get poorer'. Scientifically speaking, the distributions of salt are governed by the difference between precipitation and evaporation. Areas where precipitation dominates evaporation, such as the northern North Pacific Ocean, are fresher than average. The implication of this result is that the Earth is seeing an intensification of the global hydrological cycle. Argo data are also being used to drive computer models of the climate system, leading to improvements in the ability of nations to forecast seasonal climate variations.
Argo data were critical in the drafting of Chapter 3 (Working Group 1) of the IPCC Fifth Assessment Report (released September 2013) and an appendix was added to that chapter to emphasize the profound change that had taken place in the quality and volume of ocean data since the IPCC Fourth Assessment Report and the resulting improvement in confidence in the description of surface salinity changes and upper-ocean heat content.
Argo data were used along with sea level change data from satellite altimetry in a new approach to analyzing global warming, reported in Eos in 2017. David Morrison reports that "[b]oth of these data sets show clear signatures of heat deposition in the ocean, from the temperature changes in the top 2 km of water and from the expansion of the ocean water due to heating. These two measures are less noisy than land and atmospheric temperatures."
Argo and CERES data collected between 2005 and 2019 have been compared as independent measures of the global change in Earth's energy imbalance. Both data sets showed similar behavior at annualized resolution, as well as a doubling of the linear trend in planet's heating rate during that 14-year span.
See also
Ocean acoustic tomography
Underwater gliders
Integrated Ocean Observing System
References
External links
The Argo Portal
International Argo Information Centre
Argo at the Scripps Institution of Oceanography, San Diego
Sea-Bird Scientific SBE 41CP Argo CTD
Realtime Interactive Map
Realtime Google Earth File
Coriolis Global Argo Data Server - EU Mirror
FNMOC Global Argo Data server - US Mirror
NOAA/Pacific Marine Environmental Laboratory profiling float project deploys floats as part of the Argo program, provides data on-line, and is active in delayed-mode salinity calibration and quality control for US Argo floats.
Sea-Bird Scientific Navis BGCi Float
Changing conditions in the Gulf of Alaska as seen by Argo
Government of Canada, Department of Fisheries and Oceans, Argo Project
A New World View Argo explorations article by Scripps Institution of Oceanography
JCOMMOPS
Argo on NOSA
"Argo Floats: How do we measure the ocean" (animation for children)
Fisheries science
Oceanography
Oceanographic instrumentation
Physical oceanography
Research projects | Argo (oceanography) | [
"Physics",
"Technology",
"Engineering",
"Environmental_science"
] | 3,116 | [
"Hydrology",
"Oceanographic instrumentation",
"Applied and interdisciplinary physics",
"Oceanography",
"Measuring instruments",
"Physical oceanography"
] |
1,245,372 | https://en.wikipedia.org/wiki/Analytic%20hierarchy%20process | In the theory of decision making, the analytic hierarchy process (AHP), also analytical hierarchy process, is a structured technique for organizing and analyzing complex decisions, based on mathematics and psychology. It was developed by Thomas L. Saaty in the 1970s; Saaty partnered with Ernest Forman to develop Expert Choice software in 1983, and AHP has been extensively studied and refined since then. It represents an accurate approach to quantifying the weights of decision criteria. Individual experts’ experiences are utilized to estimate the relative magnitudes of factors through pair-wise comparisons. Each of the respondents compares the relative importance of each pair of items using a specially designed questionnaire. The relative importance of the criteria can be determined with the help of the AHP by comparing the criteria and, if applicable, the sub-criteria in pairs by experts or decision-makers. On this basis, the best alternative can be found.
Uses and applications
AHP is targeted at group decision making, and is used for decision situations in fields such as government, business, industry, healthcare and education.
Rather than prescribing a "correct" decision, the AHP helps decision makers find the decision that best suits their goal and their understanding of the problem. It provides a comprehensive and rational framework for structuring a decision problem, for representing and quantifying its elements, for relating those elements to overall goals, and for evaluating alternative solutions.
Users of the AHP first decompose their decision problem into a hierarchy of more easily comprehended sub-problems, each of which can be analyzed independently. The elements of the hierarchy can relate to any aspect of the decision problem—tangible or intangible, carefully measured or roughly estimated, well or poorly understood—anything at all that applies to the decision at hand.
Once the hierarchy is built, the decision makers evaluate its various elements by comparing them to each other two at a time, with respect to their impact on an element above them in the hierarchy. In making the comparisons, the decision makers can use concrete data about the elements, and they can also use their judgments about the elements' relative meaning and importance. Human judgments, and not just the underlying information, can be used in performing the evaluations.
The AHP converts these evaluations to numerical values that can be processed and compared over the entire range of the problem. A numerical weight or priority is derived for each element of the hierarchy, allowing diverse and often incommensurable elements to be compared to one another in a rational and consistent way. This capability distinguishes the AHP from other decision making techniques.
In the final step of the process, numerical priorities are calculated for each of the decision alternatives. These numbers represent the alternatives' relative ability to achieve the decision goal, so they allow a straightforward consideration of the various courses of action.
While it can be used by individuals working on straightforward decisions, the Analytic Hierarchy Process (AHP) is most useful where teams of people are working on complex problems, especially those with high stakes, involving human perceptions and judgments, whose resolutions have long-term repercussions.
Decision situations to which the AHP can be applied include:
Choice – The selection of one alternative from a given set of alternatives, usually where there are multiple decision criteria involved.
Ranking – Putting a set of alternatives in order from most to least desirable.
Prioritization – Determining the relative merit of members of a set of alternatives, as opposed to selecting a single one or merely ranking them
Resource allocation – Apportioning resources among a set of alternatives
Benchmarking – Comparing the processes in one's own organization with those of other best-of-breed organizations
Quality management – Dealing with the multidimensional aspects of quality and quality improvement
Conflict resolution – Settling disputes between parties with apparently incompatible goals or positions
The applications of AHP include planning, resource allocation, priority setting, and selection among alternatives. Other areas have included forecasting, total quality management, business process reengineering, quality function deployment, and the balanced scorecard. Other uses of AHP are discussed in the literature:
Deciding how best to reduce the impact of global climate change (Fondazione Eni Enrico Mattei)
Quantifying the overall quality of software systems (Microsoft Corporation)
Selecting university faculty (Bloomsburg University of Pennsylvania)
Deciding where to locate offshore manufacturing plants (University of Cambridge)
Assessing risk in operating cross-country petroleum pipelines (American Society of Civil Engineers)
Deciding how best to manage U.S. watersheds (U.S. Department of Agriculture)
More Effectively Define and Evaluate SAP Implementation Approaches (SAP Experts)
Integrated evaluation of a community's sustainability in terms of environment, economy, society, institution, and culture.
Accelerated Bridge Construction Decision Making Tool to assist in determining the viability of accelerated bridge construction (ABC) over traditional construction methods and in selecting appropriate construction and contracting strategies on a case-by-case basis.
AHP is sometimes used in designing highly specific procedures for particular situations, such as the rating of buildings by historical significance. It was recently applied to a project that uses video footage to assess the condition of highways in Virginia. Highway engineers first used it to determine the optimum scope of the project, and then to justify its budget to lawmakers.
The weights of the AHP judgement matrix may be corrected with the ones calculated through the Entropy Method. This variant of the AHP method is called AHP-EM.
Education and scholarly research
Though using the analytic hierarchy process requires no specialized academic training, it is considered an important subject in many institutions of higher learning, including schools of engineering and graduate schools of business. It is a particularly important subject in the quality field, and is taught in many specialized courses including Six Sigma, Lean Six Sigma, and QFD.
The International Symposium on the Analytic Hierarchy Process (ISAHP) holds biennial meetings of academics and practitioners interested in the field. A wide range of topics is covered. Those in 2005 ranged from "Establishing Payment Standards for Surgical Specialists", to "Strategic Technology Roadmapping", to "Infrastructure Reconstruction in Devastated Countries".
At the 2007 meeting in Valparaíso, Chile, 90 papers were presented from 19 countries, including the US, Germany, Japan, Chile, Malaysia, and Nepal. A similar number of papers were presented at the 2009 symposium in Pittsburgh, Pennsylvania, when 28 countries were represented. Subjects of the papers included Economic Stabilization in Latvia, Portfolio Selection in the Banking Sector, Wildfire Management to Help Mitigate Global Warming, and Rural Microprojects in Nepal.
Use
As can be seen in the material that follows, using the AHP involves the mathematical synthesis of numerous judgments about the decision problem at hand. It is not uncommon for these judgments to number in the dozens or even the hundreds. While the math can be done by hand or with a calculator, it is far more common to use one of several computerized methods for entering and synthesizing the judgments. The simplest of these involve standard spreadsheet software, while the most complex use custom software, often augmented by special devices for acquiring the judgments of decision makers gathered in a meeting room.
The procedure for using the AHP can be summarized as:
Model the problem as a hierarchy containing the decision goal, the alternatives for reaching it, and the criteria for evaluating the alternatives.
Establish priorities among the elements of the hierarchy by making a series of judgments based on pairwise comparisons of the elements. For example, when comparing potential purchases of commercial real estate, the investors might say they prefer location over price and price over timing.
Synthesize these judgments to yield a set of overall priorities for the hierarchy. This would combine the investors' judgments about location, price and timing for properties A, B, C, and D into overall priorities for each property.
Check the consistency of the judgments.
Come to a final decision based on the results of this process.
These steps are more fully described below.
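The core of steps 2 through 4 can be sketched numerically. In the example below, the pairwise judgments among three criteria (location, price and timing, echoing the real-estate illustration above) are invented; priorities are obtained from the principal right eigenvector of the comparison matrix, and consistency is checked with Saaty's consistency ratio, using the standard random-index value 0.58 for a 3×3 matrix.

```python
import numpy as np

# Invented pairwise judgments for three criteria: location, price, timing.
# A[i, j] is how strongly criterion i is preferred to criterion j on Saaty's
# 1-9 scale, with reciprocals below the diagonal.
A = np.array([
    [1.0, 3.0, 5.0],     # location vs (location, price, timing)
    [1/3, 1.0, 3.0],     # price    vs ...
    [1/5, 1/3, 1.0],     # timing   vs ...
])

# Priorities: principal right eigenvector, normalized to sum to 1.
eigenvalues, eigenvectors = np.linalg.eig(A)
k = np.argmax(eigenvalues.real)
priorities = np.abs(eigenvectors[:, k].real)
priorities /= priorities.sum()

# Consistency ratio: CI = (lambda_max - n) / (n - 1), CR = CI / RI.
n = A.shape[0]
lambda_max = eigenvalues.real[k]
CI = (lambda_max - n) / (n - 1)
RI = 0.58                       # Saaty's random index for n = 3
CR = CI / RI

for name, p in zip(["location", "price", "timing"], priorities):
    print(f"{name:9s} {p:.3f}")
print(f"lambda_max = {lambda_max:.3f}, CR = {CR:.3f}  (CR < 0.10 is conventionally acceptable)")
```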
Model the problem as a hierarchy
The first step in the analytic hierarchy process is to model the problem as a hierarchy. In doing this, participants explore the aspects of the problem at levels from general to detailed, then express it in the multileveled way that the AHP requires. As they work to build the hierarchy, they increase their understanding of the problem, of its context, and of each other's thoughts and feelings about both.
Hierarchies defined
A hierarchy is a stratified system of ranking and organizing people, things, ideas, etc., where each element of the system, except for the top one, is subordinate to one or more other elements. Though the concept of hierarchy is easily grasped intuitively, it can also be described mathematically. Diagrams of hierarchies are often shaped roughly like pyramids, but other than having a single element at the top, there is nothing necessarily pyramid-shaped about a hierarchy.
Human organizations are often structured as hierarchies, where the hierarchical system is used for assigning responsibilities, exercising leadership, and facilitating communication. Familiar hierarchies of "things" include a desktop computer's tower unit at the "top", with its subordinate monitor, keyboard, and mouse "below."
In the world of ideas, we use hierarchies to help us acquire detailed knowledge of complex reality: we structure the reality into its constituent parts, and these in turn into their own constituent parts, proceeding down the hierarchy as many levels as we care to. At each step, we focus on understanding a single component of the whole, temporarily disregarding the other components at this and all other levels. As we go through this process, we increase our global understanding of whatever complex reality we are studying.
Think of the hierarchy that medical students use while learning anatomy—they separately consider the musculoskeletal system (including parts and subparts like the hand and its constituent muscles and bones), the circulatory system (and its many levels and branches), the nervous system (and its numerous components and subsystems), etc., until they've covered all the systems and the important subdivisions of each. Advanced students continue the subdivision all the way to the level of the cell or molecule. In the end, the students understand the "big picture" and a considerable number of its details. Not only that, but they understand the relation of the individual parts to the whole. By working hierarchically, they've gained a comprehensive understanding of anatomy.
Similarly, when we approach a complex decision problem, we can use a hierarchy to integrate large amounts of information into our understanding of the situation. As we build this information structure, we form a better and better picture of the problem as a whole.
Hierarchies in the AHP
An AHP hierarchy is a structured means of modeling the decision at hand. It consists of an overall goal, a group of options or alternatives for reaching the goal, and a group of factors or criteria that relate the alternatives to the goal. The criteria can be further broken down into subcriteria, sub-subcriteria, and so on, in as many levels as the problem requires. A criterion may not apply uniformly, but may have graded differences; for example, a little sweetness is enjoyable but too much sweetness can be harmful. In that case, the criterion is divided into subcriteria indicating different intensities of the criterion, such as little, medium, and high, and these intensities are prioritized through comparisons under the parent criterion, sweetness.
Published descriptions of AHP applications often include diagrams and descriptions of their hierarchies; some simple ones are described in this article. More complex AHP hierarchies have been collected and reprinted in at least one book.
The design of any AHP hierarchy will depend not only on the nature of the problem at hand, but also on the knowledge, judgments, values, opinions, needs, wants, etc. of the participants in the decision-making process. Constructing a hierarchy typically involves significant discussion, research, and discovery by those involved. Even after its initial construction, it can be changed to accommodate newly-thought-of criteria or criteria not originally considered to be important; alternatives can also be added, deleted, or changed.
To better understand AHP hierarchies, consider a decision problem with a goal to be reached, three alternative ways of reaching the goal, and four criteria against which the alternatives need to be measured.
Such a hierarchy can be visualized as a diagram with the goal at the top, the three alternatives at the bottom, and the four criteria in between. There are useful terms for describing the parts of such diagrams: Each box is called a node. A node that is connected to one or more nodes in a level below it is called a parent node. The nodes to which it is so connected are called its children.
Applying these definitions to the example, the goal is the parent of the four criteria, and the four criteria are children of the goal. Each criterion is a parent of the three Alternatives. Note that there are only three Alternatives, but in a full diagram each of them is repeated under each of its parents.
To reduce the size of the drawing required, it is common to represent AHP hierarchies with only one node for each alternative, and with multiple lines connecting the alternatives and the criteria that apply to them. To avoid clutter, these lines are sometimes omitted or reduced in number. Regardless of any such simplifications in the diagram, in the actual hierarchy each criterion is individually connected to the alternatives. The lines may be thought of as being directed downward from the parent in one level to its children in the level below.
Evaluate the hierarchy
Once the hierarchy has been constructed, the participants analyze it through a series of pairwise comparisons that derive numerical scales of measurement for the nodes. The criteria are pairwise compared against the goal for importance. The alternatives are pairwise compared against each of the criteria for preference. The comparisons are processed mathematically, and priorities are derived for each node.
Consider the "Choose a Leader" example above. An important task of the decision makers is to determine the weight to be given each criterion in making the choice of a leader. Another important task is to determine the weight to be given to each candidate with regard to each of the criteria. The AHP not only lets them do that, but it lets them put a meaningful and objective numerical value on each of the four criteria.
Unlike most surveys, which adopt a five-point Likert scale, the AHP questionnaire uses a comparative scale that runs from 9 on one side, through 1 (indicating equal importance), to 9 on the other side.
Establish priorities
This section explains priorities, shows how they are established, and provides a simple example.
Priorities defined and explained
Priorities are numbers associated with the nodes of an AHP hierarchy. They represent the relative weights of the nodes in any group.
Like probabilities, priorities are absolute numbers between zero and one, without units or dimensions. A node with priority .200 has twice the weight in reaching the goal as one with priority .100, ten times the weight of one with priority .020, and so forth. Depending on the problem at hand, "weight" can refer to importance, or preference, or likelihood, or whatever factor is being considered by the decision makers.
Priorities are distributed over a hierarchy according to its architecture, and their values depend on the information entered by users of the process. Priorities of the Goal, the Criteria, and the Alternatives are intimately related, but need to be considered separately.
By definition, the priority of the Goal is 1.000. The priorities of the alternatives always add up to 1.000. Things can become complicated with multiple levels of Criteria, but if there is only one level, their priorities also add to 1.000. All this can be illustrated with the simple hierarchy described above, with one goal, four criteria and three alternatives: on each level, the priorities of the goal, the criteria, and the alternatives all add up to 1.000.
Before any information has been entered about weights of the criteria or alternatives, the priorities within each level are all equal: .250 for each of the four criteria and .333 for each of the three alternatives. These are called the hierarchy's default priorities. If a fifth Criterion were added to this hierarchy, the default priority for each Criterion would be .200. If there were only two Alternatives, each would have a default priority of .500.
Two additional concepts apply when a hierarchy has more than one level of criteria: local priorities and global priorities. Consider a hierarchy that has several Subcriteria under each Criterion.
The local priorities represent the relative weights of the nodes within a group of siblings with respect to their parent. The local priorities of each group of Criteria and their sibling Subcriteria add up to 1.000. The global priorities are obtained by multiplying the local priorities of the siblings by their parent's global priority. The global priorities for all the subcriteria in a level add up to 1.000.
The rule is this: Within a hierarchy, the global priorities of child nodes always add up to the global priority of their parent. Within a group of children, the local priorities add up to 1.000.
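A minimal numerical sketch of this rule, with invented local priorities: each subcriterion's global priority is its local priority multiplied by its parent's global priority, and the resulting global priorities sum to 1.000 over the level.

```python
# Invented example: two criteria, each with two subcriteria.
# Local priorities within each sibling group sum to 1.000.
hierarchy = {
    "Criterion A": {"global": 0.6, "children": {"A1": 0.7, "A2": 0.3}},
    "Criterion B": {"global": 0.4, "children": {"B1": 0.5, "B2": 0.5}},
}

total = 0.0
for criterion, info in hierarchy.items():
    for sub, local in info["children"].items():
        global_priority = info["global"] * local   # global = local x parent's global
        total += global_priority
        print(f"{criterion} / {sub}: local {local:.3f} -> global {global_priority:.3f}")
print(f"Sum of global priorities across the level: {total:.3f}")   # 1.000
```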
So far, we have looked only at default priorities. As the Analytical Hierarchy Process moves forward, the priorities will change from their default values as the decision makers input information about the importance of the various nodes. They do this by making a series of pairwise comparisons.
Practical examples
Experienced practitioners know that the best way to understand the AHP is to work through cases and examples. Two detailed case studies, specifically designed as in-depth teaching examples, are provided as appendices to this article:
Simple step-by-step example with four Criteria and three Alternatives: Choosing a leader for an organization.
More complex step-by-step example with ten Criteria/Subcriteria and six Alternatives: Buying a family car and Machinery Selection Example.
Some of the books on AHP contain practical examples of its use, though they are not typically intended to be step-by-step learning aids. One of them contains a handful of expanded examples, plus about 400 AHP hierarchies briefly described and illustrated with figures. Many examples are discussed, mostly for professional audiences, in papers published by the International Symposium on the Analytic Hierarchy Process.
Criticisms
The AHP is included in most operations research and management science textbooks, and is taught in numerous universities; it is used extensively in organizations that have carefully investigated its theoretical underpinnings. The method does have its critics.
In the early 1990s a series of debates between critics and proponents of AHP was published in Management Science and The Journal of the Operational Research Society, two prestigious journals where Saaty and his colleagues had considerable influence. These debates seem to have been settled in favor of AHP:
An in-depth paper was published in Operations Research in 2001.
A 2008 Management Science paper reviewed 15 years of progress in all areas of multi-criteria decision making.
In 2008, the Institute for Operations Research and the Management Sciences (INFORMS), the major society for operations research, formally recognized AHP's broad impact on its fields.
A 1997 paper examined possible flaws in the verbal (vs. numerical) scale often used in AHP pairwise comparisons. Another from the same year claimed that innocuous changes to the AHP model can introduce order where no order exists. A 2006 paper found that the addition of criteria for which all alternatives perform equally can alter the priorities of alternatives.
In 2021, the first comprehensive evaluation of the AHP was published in a book authored by two academics from Technical University of Valencia and Universidad Politécnica de Cartagena, and published by Springer Nature. Based on an empirical investigation and objective testimonies by 101 researchers, the study found at least 30 flaws in the AHP and found it unsuitable for complex problems, and in certain situations even for small problems.
Rank reversal
Decision making involves ranking alternatives in terms of criteria or attributes of those alternatives. It is an axiom of some decision theories that when new alternatives are added to a decision problem, the ranking of the old alternatives must not change — that "rank reversal" must not occur.
There are two schools of thought about rank reversal. One maintains that new alternatives that introduce no additional attributes should not cause rank reversal under any circumstances. The other maintains that there are some situations in which rank reversal can reasonably be expected. The original formulation of AHP allowed rank reversals. In 1993, Forman introduced a second AHP synthesis mode, called the ideal synthesis mode, to address choice situations in which the addition or removal of an 'irrelevant' alternative should not and will not cause a change in the ranks of existing alternatives. The current version of the AHP can accommodate both these schools—its ideal mode preserves rank, while its distributive mode allows the ranks to change. Either mode is selected according to the problem at hand.
Rank reversal and AHP are extensively discussed in a 2001 paper in Operations Research, as well as a chapter entitled Rank Preservation and Reversal, in the current basic book on AHP. The latter presents published examples of rank reversal due to adding copies and near copies of an alternative, due to intransitivity of decision rules, due to adding phantom and decoy alternatives, and due to the switching phenomenon in utility functions. It also discusses the Distributive and Ideal Modes of AHP.
A new form of rank reversal in AHP was identified in 2014: AHP can produce rank order reversal when irrelevant data (data that do not differentiate the alternatives) are eliminated.
There are different types of rank reversals. Also, other methods besides the AHP may exhibit such rank reversals. More discussion on rank reversals with the AHP and other MCDM methods is provided in the rank reversals in decision-making page.
Non-monotonicity of some weight extraction methods
Within a comparison matrix, one may replace a judgement with a less favorable judgement and then check whether the resulting priority becomes less favorable than the original priority. In the context of tournament matrices, it has been proven by Oskar Perron that the principal right eigenvector method is not monotonic. This behaviour can also be demonstrated for reciprocal n × n matrices, where n > 3. Alternative approaches are discussed elsewhere.
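As a sketch of the priority-derivation step underlying this discussion, the following Python fragment approximates the principal right eigenvector of an invented 3 × 3 reciprocal judgment matrix by power iteration and reports the usual consistency index; it is illustrative only and not a statement of how any particular AHP software implements the computation.

```python
import numpy as np

# Hypothetical reciprocal pairwise-comparison matrix: A[i][j] expresses how
# strongly element i is preferred to element j on a 1-9 scale; A[j][i] = 1/A[i][j].
A = np.array([
    [1.0, 3.0, 5.0],
    [1/3, 1.0, 2.0],
    [1/5, 1/2, 1.0],
])

# Approximate the principal right eigenvector by power iteration.
w = np.ones(A.shape[0])
for _ in range(100):
    w = A @ w
    w = w / w.sum()            # normalize so the priorities add to 1.000

print("priorities:", np.round(w, 3))

# Consistency check commonly reported alongside AHP priorities:
lam = (A @ w / w).mean()                      # estimate of the principal eigenvalue
ci = (lam - A.shape[0]) / (A.shape[0] - 1)    # consistency index
print("lambda_max ~", round(lam, 3), "CI ~", round(ci, 3))
```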
See also
Analytic hierarchy process – car example
Analytic hierarchy process – leader example
Analytic network process
Arrow's impossibility theorem
Decision making
Decision-making paradox
Decision-making software
Hierarchical decision process
L. L. Thurstone
Law of comparative judgment
Multi-criteria decision analysis
Pairwise comparison
Preference
Principal component analysis
Rank reversals in decision-making
References
Further reading
Saaty, Thomas L. Decision Making for Leaders: The Analytical Hierarchy Process for Decisions in a Complex World (1982). Belmont, California: Wadsworth. ; Paperback, Pittsburgh: RWS. . "Focuses on practical application of the AHP; briefly covers theory."
Saaty, Thomas L. Fundamentals of Decision Making and Priority Theory with the Analytic Hierarchy Process (1994). Pittsburgh: RWS. . "A thorough exposition of the theoretical aspects of AHP."
Saaty, Thomas L. Mathematical Principles of Decision Making (Principia Mathematica Decernendi) (2009). Pittsburgh: RWS. . "Comprehensive coverage of the AHP, its successor the ANP, and further developments of their underlying concepts."
Saaty, Thomas L., with Ernest H. Forman. The Hierarchon: A Dictionary of Hierarchies. (1992) Pittsburgh: RWS. . "Dozens of illustrations and examples of AHP hierarchies. A beginning classification of ideas relating to planning, conflict resolution, and decision making."
Saaty, Thomas L., with Luis G. Vargas The Logic of Priorities: Applications in Business, Energy, Health, and Transportation (1982). Boston: Kluwer-Nijhoff. (Hardcover) (Paperback). Republished 1991 by RWS, .
Kardi Teknomo. Analytic Hierarchy Process Tutorial (2012). Revoledu.
Kearns, Kevin P.; Saaty, Thomas L. Analytical Planning: The Organization of Systems (1985). Oxford: Pergamon Press. . Republished 1991 by RWS, .
Saaty, Thomas L., with Joyce Alexander. Conflict Resolution: The Analytic Hierarchy Process (1989). New York: Praeger.
Vargas, Luis L.; Saaty, Thomas L. Prediction, Projection and Forecasting: Applications of the Analytic Hierarchy Process in Economics, Finance, Politics, Games and Sports (1991). Boston: Kluwer Academic.
Vargas, Luis L.; Saaty, Thomas L. Decision Making in Economic, Social and Technological Environments (1994). Pittsburgh: RWS.
Vargas, Luis L.; Saaty, Thomas L. Models, Methods, Concepts & Applications of the Analytic Hierarchy Process (2001). Boston: Kluwer Academic.
Peniwati, Kirti; Vargas, Luis L. Group Decision Making: Drawing Out and Reconciling Differences (2007). Pittsburgh: RWS.
External links
International Journal of the Analytic Hierarchy Process An online journal about multi-criteria decision making using the AHP.
easyAHP – A free online tool for making decisions collaboratively or individually using the AHP methodology.
AHP video. (9:17 YouTube clip) Very thorough exposition of AHP by Dr. Klaus Göpel
Analytic Hierarchy Process (AHP) Example with Simulations using Matlab – Waqqas Farooq – AHP example for college selection using matlab.
An illustrated guide (pdf) – Dr. Oliver Meixner University of Wien – "Analytic Hierarchy Process", a very easy to understand summary of the mathematical theory
AHP example with Matlab implementation – AHP explanation with an example and matlab code.
R ahp package – An AHP open source package.
AHPy - An open source Python implementation of AHP with an optimal solver for missing pairwise comparisons
Introductory Mathematics of the Analytic Hierarchy Process – An introduction to the mathematics of the Analytic Hierarchy Process.
How to use AHP for Project Prioritization by Dr. James Brown (webinar)
Guide to use AHP in Excel A guide to using AHP in Excel by Dr. Richard Hodgett
Use the AHP Methodology to More Effectively Define and Evaluate Your SAP Implementation Approach by Jeetendra Kumar
Group decision-making
Multiple-criteria decision analysis
Industrial engineering
Project management techniques | Analytic hierarchy process | [
"Engineering"
] | 5,441 | [
"Industrial engineering"
] |
1,248,251 | https://en.wikipedia.org/wiki/Coulomb%20explosion | A Coulombic explosion is a condensed-matter physics process in which a molecule or crystal lattice is destroyed by the Coulombic repulsion between its constituent atoms. Coulombic explosions are a prominent technique in laser-based machining, and appear naturally in certain high-energy reactions.
Mechanism
A Coulombic explosion begins when an intense electric field (often from a laser) excites the valence electrons in a solid, ejecting them from the system and leaving behind positively charged ions. The chemical bonds holding the solid together are weakened by the loss of the electrons, enabling the Coulombic repulsion between the ions to overcome them. The result is an explosion of ions and electrons – a plasma.
The laser must be very intense to produce a Coulomb explosion. If it is too weak, the energy given to the electrons will be transferred to the ions via electron-phonon coupling. This will cause the entire material to heat up, melt, and thermally ablate away as a plasma. The end result is similar to Coulomb explosion, except that any fine structure in the material will be damaged by thermal melting.
It may be shown that the Coulomb explosion occurs in the same parameter regime as the superradiant phase transition, i.e., when the destabilizing interactions become overwhelming and dominate over the oscillatory phonon-solid binding motions.
Technological use
A Coulomb explosion is a "cold" alternative to the dominant laser etching technique of thermal ablation, which depends on local heating, melting, and vaporization of molecules and atoms using less-intense beams. Pulse brevity down only to the nanosecond regime is sufficient to localize thermal ablation – before the heat is conducted far, the energy input (pulse) has ended. Nevertheless, thermally ablated materials may seal pores important in catalysis or battery operation, and recrystallize or even burn the substrate, thus changing the physical and chemical properties at the etch site. In contrast, even light foams remain unsealed after ablation by Coulomb explosion.
Coulomb explosions for industrial machining are made with ultra-short (picosecond or femtoseconds) laser pulses. The enormous beam intensities required (10–400 terawatt per square centimeter thresholds, depending on material) are only practical to generate, shape, and deliver for very brief instants of time. Coulomb explosion etching can be used in any material to bore holes, remove surface layers, and texture and microstructure surfaces; e.g., to control ink loading in printing presses.
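As a rough back-of-the-envelope check (not taken from the sources above), the peak intensity delivered by a focused ultrashort pulse can be estimated from the pulse energy, duration and focal spot size; the parameter values in this Python sketch are hypothetical and merely show that femtosecond pulses of modest energy reach the terawatt-per-square-centimetre regime quoted above.

```python
import math

# Hypothetical pulse parameters (illustrative only):
pulse_energy_j = 100e-6      # 100 microjoules
pulse_duration_s = 100e-15   # 100 femtoseconds
spot_diameter_m = 20e-6      # 20 micrometre focal spot diameter

peak_power_w = pulse_energy_j / pulse_duration_s             # ~1 GW
spot_area_cm2 = math.pi * (spot_diameter_m * 100 / 2) ** 2   # spot area in cm^2
peak_intensity = peak_power_w / spot_area_cm2                # W/cm^2

print(f"peak power     ~ {peak_power_w:.2e} W")
print(f"peak intensity ~ {peak_intensity:.2e} W/cm^2")
# ~3e14 W/cm^2 here, i.e. hundreds of terawatts per square centimetre,
# comparable to the Coulomb-explosion thresholds quoted above.
```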
Appearance in nature
High speed camera imaging of alkali metals exploding in water has suggested the explosion is a coulomb explosion.
During a nuclear explosion based on the fission of uranium, about 167 MeV per fissioned nucleus is released as a Coulombic explosion: the repulsive electrostatic energy between the two fission daughter nuclei translates into the kinetic energy of the fission products. This kinetic energy is the primary driver of the blackbody radiation that rapidly generates the hot, dense plasma of the nuclear fireball, and thus also of the later blast and thermal effects.
Scientists at the University of Cologne Zoological Institute have suggested that coulomb explosion (specifically, the electrostatic repulsion of dissociated carboxyl groups of polyglutamic acid) may be part of the explosive action of nematocytes, the stinging cells in aquatic organisms of the phylum Cnidaria.
Coulomb explosion imaging
Molecules are held together by a balance of charge between negative electrons and positive nuclei. When multiple electrons are expelled, either by laser irradiation or bombardment using highly charged ions, the remaining, mutually repulsive, nuclei fly apart in a Coulomb explosion. The structure of simple gas phase molecules can be determined by imaging which tracks the fragment trajectories. As of 2022 the method can work with up to 11-atom molecules.
See also
Laser engraving
Laser cutting
Tunnel ionization
Coherent x-ray diffraction imaging
References
Materials science
Laser applications
Electrostatics
Explosions | Coulomb explosion | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 855 | [
"nan",
"Applied and interdisciplinary physics",
"Materials science",
"Explosions"
] |
1,249,122 | https://en.wikipedia.org/wiki/Water%20supply%20network | A water supply network or water supply system is a system of engineered hydrologic and hydraulic components that provide water supply. A water supply system typically includes the following:
A drainage basin (see water purification – sources of drinking water)
A raw water collection point (above or below ground) where the water accumulates, such as a lake, a river, or groundwater from an underground aquifer. Raw water may be transferred using uncovered ground-level aqueducts, covered tunnels, or underground water pipes to water purification facilities.
Water purification facilities. Treated water is transferred using water pipes (usually underground).
Water storage facilities such as reservoirs, water tanks, or water towers. Smaller water systems may store the water in cisterns or pressure vessels. Tall buildings may also need to store water locally in pressure vessels in order for the water to reach the upper floors.
Additional water pressurizing components such as pumping stations may need to be situated at the outlet of underground or aboveground reservoirs or cisterns (if gravity flow is impractical).
A pipe network for distribution of water to consumers (which may be private houses or industrial, commercial, or institution establishments) and other usage points (such as fire hydrants)
Connections to the sewers (underground pipes, or aboveground ditches in some developing countries) are generally found downstream of the water consumers, but the sewer system is considered to be a separate system, rather than part of the water supply system.
Water supply networks are often run by public utilities of the water industry.
Water abstraction and raw water transfer
Raw water (untreated) is from a surface water source (such as an intake on a lake or a river) or from a groundwater source (such as a water well drawing from an underground aquifer) within the watershed that provides the water resource.
The raw water is transferred to the water purification facilities using uncovered aqueducts, covered tunnels or underground water pipes.
Water treatment
Virtually all large systems must treat the water, a requirement that is tightly regulated by global, state and federal agencies, such as the World Health Organization (WHO) or the United States Environmental Protection Agency (EPA). Water treatment must occur before the product reaches the consumer and afterwards (when it is discharged again). Water purification usually occurs close to the final delivery points to reduce pumping costs and the chances of the water becoming contaminated after treatment.
Traditional surface water treatment plants generally consist of three steps: clarification, filtration and disinfection. Clarification refers to the separation of particles (dirt, organic matter, etc.) from the water stream. Chemical addition (e.g., alum or ferric chloride) destabilizes the particle charges and prepares them for clarification either by settling or floating out of the water stream. Sand, anthracite or activated carbon filters refine the water stream, removing smaller particulate matter. While other methods of disinfection exist, the preferred method is chlorine addition. Chlorine effectively kills bacteria and most viruses and maintains a residual to protect the water supply through the supply network.
Water distribution network
The product, delivered to the point of consumption, is called potable water if it meets the water quality standards required for human consumption.
The water in the supply network is maintained at positive pressure to ensure that water reaches all parts of the network, that a sufficient flow is available at every take-off point and to ensure that untreated water in the ground cannot enter the network. The water is typically pressurised by pumping the water into storage tanks constructed at the highest local point in the network. One network may have several such service reservoirs.
In small domestic systems, the water may be pressurised by a pressure vessel or even by an underground cistern (the latter however does need additional pressurizing). This eliminates the need of a water tower or any other heightened water reserve to supply the water pressure.
These systems are usually owned and maintained by local governments such as cities or other public entities, but are occasionally operated by a commercial enterprise (see water privatization). Water supply networks are part of the master planning of communities, counties, and municipalities. Their planning and design requires the expertise of city planners and civil engineers, who must consider many factors, such as location, current demand, future growth, leakage, pressure, pipe size, pressure loss, fire fighting flows, etc.—using pipe network analysis and other tools.
As water passes through the distribution system, the water quality can degrade by chemical reactions and biological processes. Corrosion of metal pipe materials in the distribution system can cause the release of metals into the water with undesirable aesthetic and health effects. Release of iron from unlined iron pipes can result in customer reports of "red water" at the tap. Release of copper from copper pipes can result in customer reports of "blue water" and/or a metallic taste. Release of lead can occur from the solder used to join copper pipe together or from brass fixtures. Copper and lead levels at the consumer's tap are regulated to protect consumer health.
Utilities will often adjust the chemistry of the water before distribution to minimize its corrosiveness. The simplest adjustment involves control of pH and alkalinity to produce a water that tends to passivate corrosion by depositing a layer of calcium carbonate. Corrosion inhibitors are often added to reduce release of metals into the water. Common corrosion inhibitors added to the water are phosphates and silicates.
Maintenance of a biologically safe drinking water is another goal in water distribution. Typically, a chlorine based disinfectant, such as sodium hypochlorite or monochloramine is added to the water as it leaves the treatment plant. Booster stations can be placed within the distribution system to ensure that all areas of the distribution system have adequate sustained levels of disinfection.
Topologies
Like electric power lines, roads, and microwave radio networks, water systems may have a loop or branch network topology, or a combination of both. In a loop topology the piping forms closed circuits, often roughly circular or rectangular in layout, so if any one section of water distribution main fails or needs repair, that section can be isolated without disrupting all users on the network.
Most systems are divided into zones. Factors determining the extent or size of a zone can include hydraulics, telemetry systems, history, and population density. Sometimes systems are designed for a specific area then are modified to accommodate development. Terrain affects hydraulics and some forms of telemetry. While each zone may operate as a stand-alone system, there is usually some arrangement to interconnect zones in order to manage equipment failures or system failures.
Water network maintenance
Water supply networks usually represent the majority of assets of a water utility. Systematic documentation of maintenance works using a computerized maintenance management system (CMMS) is a key to a successful operation of a water utility.
Sustainable urban water supply
A sustainable urban water supply network covers all the activities related to provision of potable water. Sustainable development is of increasing importance for the water supply to urban areas. Incorporating innovative water technologies into water supply systems improves water supply from sustainable perspectives. The development of innovative water technologies provides flexibility to the water supply system, generating a fundamental and effective means of sustainability based on an integrated real options approach.
Water is an essential natural resource for human existence. It is needed in every industrial and natural process, for example, it is used for oil refining, for liquid-liquid extraction in hydro-metallurgical processes, for cooling, for scrubbing in the iron and the steel industry, and for several operations in food processing facilities.
It is necessary to adopt a new approach to design urban water supply networks; water shortages are expected in the forthcoming decades and environmental regulations for water utilization and waste-water disposal are increasingly stringent.
To achieve a sustainable water supply network, new sources of water are needed to be developed, and to reduce environmental pollution.
The price of water is increasing, so less water must be wasted and actions must be taken to prevent pipeline leakage. Shutting down the supply service to fix leaks is less and less tolerated by consumers. A sustainable water supply network must monitor the freshwater consumption rate and the waste-water generation rate.
Many of the urban water supply networks in developing countries face problems related to population increase, water scarcity, and environmental pollution.
Population growth
In 1900, just 13% of the global population lived in cities. By 2005, 49% of the global population lived in urban areas, and by 2030 this figure is predicted to rise to 60%. Attempts by governments to expand water supply are costly and often insufficient. The building of new illegal settlements makes it hard to map the water supply and make connections to it, and leads to inadequate water management. In 2002, there were 158 million people with inadequate water supply. An increasing number of people live in slums, in inadequate sanitary conditions, and are therefore at risk of disease.
Water scarcity
Potable water is not evenly distributed around the world. According to the WHO, 1.8 million deaths are attributed to unsafe water supplies every year. Many people have no access to potable water at all, or lack water of adequate quality and quantity, even where water itself is abundant. Poor people in developing countries can live close to major rivers, or in high-rainfall areas, yet have no access to potable water; elsewhere, outright lack of water contributes to millions of deaths every year.
Where the water supply system cannot reach the slums, people manage to use hand pumps, to reach the pit wells, rivers, canals, swamps and any other source of water. In most cases the water quality is unfit for human consumption. The principal cause of water scarcity is the growth in demand. Water is taken from remote areas to satisfy the needs of urban areas. Another reason for water scarcity is climate change: precipitation patterns have changed; rivers have decreased their flow; lakes are drying up; and aquifers are being emptied.
Governmental issues
In developing countries, many governments are poor or corrupt, and they respond to these problems with frequently changing policies and unclear agreements. Water demand exceeds supply, and household and industrial water supplies are prioritised over other uses, which leads to water stress. Potable water has a price in the market; water often becomes a business for private companies, which earn a profit by charging a higher price for water, creating a barrier for lower-income people. The Millennium Development Goals propose the changes required.
Goal 6 of the United Nations' Sustainable Development Goals is to "Ensure availability and sustainable management of water and sanitation for all". This is in recognition of the human right to water and sanitation, which was formally acknowledged at the United Nations General Assembly in 2010, that "clean drinking water and sanitation are essential to the recognition of all human rights". Sustainable water supply includes ensuring availability, accessibility, affordability and quality of water for all individuals.
In advanced economies, the problems are about optimising existing supply networks. These economies have usually had continuing evolution, which allowed them to construct infrastructure to supply water to people. The European Union has developed a set of rules and policies to overcome expected future problems.
There are many international documents with interesting, but not very specific, ideas and therefore they are not put into practice. Recommendations have been made by the United Nations, such as the Dublin Statement on Water and Sustainable Development.
Optimizing the water supply network
The yield of a system can be measured by either its value or its net benefit. For a water supply system, the true value or net benefit is a reliable water supply service of adequate quantity and quality. For example, if the existing water supply of a city needs to be extended to supply a new municipality, the new branch of the system must be designed to supply the new needs while maintaining supply to the old system.
Single-objective optimization
The design of a system is governed by multiple criteria, one being cost. If the benefit is fixed, the least cost design results in maximum benefit. However, the least cost approach normally results in a minimum capacity for a water supply network. A minimum cost model usually searches for the least cost solution (in pipe sizes), while satisfying the hydraulic constraints such as: required output pressures, maximum pipe flow rate and pipe flow velocities. The cost is a function of pipe diameters; therefore the optimization problem consists of finding a minimum cost solution by optimising pipe sizes to provide the minimum acceptable capacity.
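A minimal sketch of this formulation, assuming a single series pipeline, an invented cost table and the Hazen–Williams head-loss formula, is shown below; real designs use full pipe-network analysis rather than this one-pipe enumeration.

```python
# Toy least-cost design of one pipeline: pick the cheapest candidate diameter
# that still delivers the required pressure head at the outlet.
# All numbers are invented illustration values.

L = 1000.0        # pipe length, m
Q = 0.05          # design flow, m^3/s
C = 130.0         # Hazen-Williams roughness coefficient
h_source = 60.0   # available head at the source, m
h_required = 30.0 # minimum head required at the outlet, m

# Candidate diameters (m) and hypothetical installed cost per metre.
candidates = [(0.15, 80), (0.20, 110), (0.25, 150), (0.30, 200), (0.40, 320)]

def head_loss(d):
    """Hazen-Williams head loss (m) over the whole pipe at flow Q."""
    return 10.67 * L * Q**1.852 / (C**1.852 * d**4.87)

feasible = [(cost_per_m * L, d) for d, cost_per_m in candidates
            if h_source - head_loss(d) >= h_required]

if feasible:
    best_cost, best_d = min(feasible)
    print(f"cheapest feasible diameter: {best_d} m, cost {best_cost:.0f}")
else:
    print("no candidate diameter satisfies the pressure constraint")
```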
Multi-objective optimization
However, according to the authors of the paper entitled, “Method for optimizing design and rehabilitation of water distribution systems”, “the least capacity is not a desirable solution to a sustainable water supply network in a long term, due to the uncertainty of the future demand”. It is preferable to provide extra pipe capacity to cope with unexpected demand growth and with water outages. The problem changes from a single objective optimization problem (minimizing cost), to a multi-objective optimization problem (minimizing cost and maximizing flow capacity).
Weighted sum method
To solve a multi-objective optimization problem, it is necessary to convert the problem into a single objective optimization problem, by using adjustments, such as a weighted sum of objectives, or an ε-constraint method. The weighted sum approach gives a certain weight to the different objectives, and then factors in all these weights to form a single objective function that can be solved by single factor optimization. This method is not entirely satisfactory, because the weights cannot be correctly chosen, so this approach cannot find the optimal solution for all the original objectives.
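The following toy Python sketch illustrates the weighted-sum idea on three invented designs with two conflicting objectives (cost to be minimized, spare capacity to be maximized); note how the "best" design changes with the chosen weights, which is exactly the sensitivity described above.

```python
# Toy weighted-sum scalarization of two conflicting objectives.
# Designs and numbers are invented; objectives are often normalized first.

designs = {
    "A": {"cost": 1.0, "capacity": 0.2},   # cheap, little spare capacity
    "B": {"cost": 2.0, "capacity": 0.9},
    "C": {"cost": 3.5, "capacity": 1.0},   # expensive, most spare capacity
}

def weighted_score(d, w_cost, w_cap):
    # Lower is better: cost is minimized, capacity enters with a negative sign.
    return w_cost * d["cost"] - w_cap * d["capacity"]

for w_cost, w_cap in [(0.8, 0.2), (0.5, 0.5), (0.2, 0.8)]:
    best = min(designs, key=lambda k: weighted_score(designs[k], w_cost, w_cap))
    print(f"weights (cost={w_cost}, capacity={w_cap}) -> design {best}")
```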
The constraint method
The second approach (the constraint method), chooses one of the objective functions as the single objective, and the other objective functions are treated as constraints with a limited value. However, the optimal solution depends on the pre-defined constraint limits.
Sensitivity analysis
Multiple-objective optimization problems involve computing the tradeoff between costs and benefits, resulting in a set of solutions that can be used for sensitivity analysis and tested in different scenarios. There is no single optimal solution that satisfies the global optimality of both objectives: because the objectives are to some extent contradictory, it is not possible to improve one without sacrificing the other. In some cases it is necessary to use a different approach (e.g., Pareto analysis) and choose the best combination.
Operational constraints
Returning to the cost objective function, it cannot violate any of the operational constraints. Generally this cost is dominated by the energy cost for pumping. “The operational constraints include the standards of customer service, such as: the minimum delivered pressure, in addition to the physical constraints such as the maximum and the minimum water levels in storage tanks to prevent overtopping and emptying respectively.”
In order to optimize the operational performance of the water supply network, at the same time as minimizing the energy costs, it is necessary to predict the consequences of different pump and valve settings on the behavior of the network.
Apart from Linear and Non-linear Programming, there are other methods and approaches to design, to manage and operate a water supply network to achieve sustainability—for instance, the adoption of appropriate technology coupled with effective strategies for operation and maintenance. These strategies must include effective management models, technical support to the householders and industries, sustainable financing mechanisms, and development of reliable supply chains. All these measures must ensure the following: system working lifespan; maintenance cycle; continuity of functioning; down time for repairs; water yield and water quality.
Sustainable development
In an unsustainable system there is insufficient maintenance of the water networks, especially in the major pipe lines in urban areas. The system deteriorates and then needs rehabilitation or renewal.
Householders and sewage treatment plants can both make the water supply networks more efficient and sustainable. Major improvements in eco-efficiency are gained through systematic separation of rainfall and wastewater. Membrane technology can be used for recycling wastewater.
The municipal government can develop a “Municipal Water Reuse System”, a current approach to managing rainwater. It applies a water reuse scheme for treated wastewater, on a municipal scale, to provide non-potable water for industrial, household and municipal uses. This technology consists of separating the urine fraction of sanitary wastewater and collecting it for recycling its nutrients. The feces and graywater fraction is collected, together with organic wastes from the households, using a gravity sewer system continuously flushed with non-potable water. The water is treated anaerobically and the biogas is used for energy production.
One effective way to achieve sustainable water management is to shift emphasis towards decentralized water projects, such as the diffusion of drip irrigation in India. Such projects cover large spatial areas while relying on individual technological adoption decisions, offering scalable solutions that can mitigate water scarcity and enhance agricultural productivity.
Another approach is to promote community engagement and resistance against unsustainable water infrastructure projects. Grassroots movements, as observed in anti-dam protests in various countries, play a crucial role in challenging dominant development narratives and advocating for more socially and ecologically just water management practices.
Municipalities and other forms of local government should also invest in innovative technologies, such as membrane technology for wastewater recycling, and develop policy frameworks that incentivize eco-efficient practices. Municipal water reuse systems, as demonstrated in existing implementations, offer promising avenues for integrating wastewater treatment and resource recovery into urban water networks.
The sustainable water supply system is an integrated system including water intake, water utilization, wastewater discharge and treatment and water environmental protection. It requires reducing freshwater and groundwater usage in all sectors of consumption. Developing sustainable water supply systems is a growing trend, because it serves people's long-term interests. There are several ways to reuse and recycle the water, in order to achieve long-term sustainability, such as:
Gray water re-use and treatment: gray water is wastewater coming from baths, showers, sinks and washbasins. If this water is treated it can be used as a source of water for uses other than drinking. Depending on the type of gray water and its level of treatment, it can be re-used for irrigation and toilet flushing. According to an investigation about the impacts of domestic grey water reuse on public health, carried out by the New South Wales Health Centre in Australia in the year 2000, grey water contains less nitrogen and fecal pathogenic organisms than sewage, and the organic content of grey water decomposes more rapidly.
Ecological treatment systems use little energy: there are many applications in gray water re-use, such as reed beds, soil treatment systems and plant filters. This process is ideal for gray water re-use, because of easier maintenance and higher removal rates of organic matter, ammonia, nitrogen and phosphorus.
Other possible approaches to scoping models for water supply, applicable to any urban area, include the following:
Sustainable drainage system
Borehole extraction
Intercluster groundwater flow
Canal and river extraction
Aquifer storage
A more user-friendly indoor water use
The Dublin Statement on Water and Sustainable Development is a good example of the new trend to overcome water supply problems. This statement, suggested by advanced economies, has come up with some principles that are of great significance to urban water supply. These are:
Fresh water is a finite and vulnerable resource, essential to sustain life, development and the environment.
Water development and management should be based on a participatory approach, involving users, planners and policy-makers at all levels.
Women play a central part in the provision, management and safeguarding of water. Institutional arrangements should reflect the role of women in water provision and protection.
Water has an economic value in all its competing uses and should be recognized as an economic good.
From these statements, developed in 1992, several policies have been created to give importance to water and to move urban water system management towards sustainable development. The Water Framework Directive by the European Commission is a good example of what has been created there out of former policies.
Future approaches
There is a great need for more sustainable water supply systems. To achieve sustainability, several factors must be tackled at the same time: climate change, rising energy costs, and rising populations. All of these factors provoke change and put pressure on the management of available water resources.
An obstacle to transforming conventional water supply systems, is the amount of time needed to achieve the transformation. More specifically, transformation must be implemented by municipal legislation bodies, which always need short-term solutions too. Another obstacle to achieving sustainability in water supply systems is the insufficient practical experience with the technologies required, and the missing know-how about the organization and the transition process.
Urban water infrastructure faces several challenges that undermine its sustainability and resilience. One critical issue highlighted in recent research is the vulnerability of water networks to climate variability and extreme weather events. Poor seasonal rains, as observed in the case of the Panama Canal's lock and dam infrastructure, exemplify how inadequate water supply can strain water-intensive infrastructure, raising questions about engineering legitimacy and the reliability of water systems.
Another key challenge is the unequal development associated with large-scale water infrastructure projects such as dams and canals. Such projects, while aimed at promoting economic growth, often reproduce social and economic inequalities by displacing rural communities and marginalizing indigenous populations. This phenomenon of "accumulation by dispossession" further emphasizes the need for more equitable and inclusive approaches to water infrastructure development.
Possible ways to improve this situation include simulating the network, implementing pilot projects, and learning from the costs involved and the benefits achieved.
See also
Aqueduct
Civil engineering
Conduit hydroelectricity
Domestic water system
Hardy Cross method
Hydrological optimization
Hydrology
Infrastructure
Plumbing
River
Tap water
Water
Water pipes
Water meter
Water well
Automatic meter reading
Backflow prevention device
Fire hydrant
Strainers
Valve
Water tower
Water quality
Water resources
Water supply
References
External links
DCMMS: A web-based GIS application to record maintenance activities for water and wastewater networks.
An open-source hydraulic toolbox for water distribution systems
Water supply network schematic
Supply network
Environmental engineering
Hydraulics
Supply network | Water supply network | [
"Physics",
"Chemistry",
"Engineering",
"Environmental_science"
] | 4,489 | [
"Hydrology",
"Chemical engineering",
"Physical systems",
"Hydraulics",
"Civil engineering",
"Water industry",
"Environmental engineering",
"Fluid dynamics"
] |
1,249,147 | https://en.wikipedia.org/wiki/Pressure%20tank | A pressure tank or pressurizer is a type of hydraulic accumulator used in a piping system to maintain a desired pressure. Applications include buffering water pressure in homes.
A simple well water control system
Referring to the figure on the left, a submersible water pump is installed in a well. The pressure switch turns the water pump on when it senses a pressure less than Plo and turns it off when it senses a pressure greater than Phi. While the pump is on, the pressure tank fills up; the tank is then depleted as it supplies water within the specified pressure range. This stored volume prevents "short-cycling", in which the pump rapidly switches on and off as it tries to hold the pressure between Plo and Phi.
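A minimal sketch of this switching logic, with hypothetical threshold values, is the following; it only illustrates the hysteresis between Plo and Phi described above.

```python
# Minimal sketch of the pressure-switch logic: the pump turns on below P_LO and
# off above P_HI, and holds its state in between, so it does not chatter.

P_LO = 30.0   # turn-on threshold, psig (assumed example value)
P_HI = 50.0   # turn-off threshold, psig (assumed example value)

def update_pump(pressure, pump_on):
    """Return the new pump state for the measured tank pressure."""
    if pressure < P_LO:
        return True
    if pressure > P_HI:
        return False
    return pump_on          # between the thresholds, keep the current state

state = False
for p in (55, 45, 29, 40, 51, 48):
    state = update_pump(p, state)
    print(f"pressure {p:>4} psig -> pump {'ON' if state else 'off'}")
```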
A simple pressure tank would be just a tank which held water with an air space above the water which would compress as more water entered the tank. Modern systems isolate the water from the pressurized air using a flexible rubber or plastic diaphragm or bladder, because otherwise the air will dissolve in the water and be removed from the tank by usage. Eventually there will be little or no air and the tank will become "waterlogged" causing short-cycling, and will need to be drained to restore operation. The diaphragm or bladder may itself exert a pressure on the water, but it is usually small and will be neglected in the following discussion.
Referring to the diagram on the right, a pressure tank is generally pressurized when empty with a "charging pressure" Pc, which is usually about 2 psi below the turn-on pressure Plo (Case 1). The total volume of the tank is Vt. When in use, the air in the tank will be compressed to pressure P and there will be a volume V of water in the tank (Case 2). In the following development, all pressures are gauge pressures, which are the pressures above atmospheric pressure (Pa, which is altitude dependent). The ideal gas law may be written for both cases, and the amount of air in each case is equal:

$(P_c + P_a)\,V_t = NkT \qquad \text{(Case 1)}$

$(P + P_a)\,(V_t - V) = NkT \qquad \text{(Case 2)}$

where N is the number of molecules of gas (equal in both cases), k is the Boltzmann constant and T is the temperature. Assuming that the temperature is equal for both cases, the above equations can be solved for the water pressure/volume relationship in the tank:

$V = V_t\left(1 - \frac{P_c + P_a}{P + P_a}\right)$
Tanks are generally specified by their total volume Vt and the "drawdown" (ΔV), which is the amount of water the tank will eject as the tank pressure goes from Phi to Plo, the limits established by the pressure switch:

$\Delta V = V(P_{hi}) - V(P_{lo}) = V_t\,(P_c + P_a)\left(\frac{1}{P_{lo} + P_a} - \frac{1}{P_{hi} + P_a}\right)$
The reason for the charging pressure can now be seen: the larger the charging pressure, the larger the drawdown. However, a charging pressure above Plo will not allow the pump to turn on when the water pressure is below Plo, so it is kept a bit below Plo. Another important parameter is the drawdown factor (fΔV), which is the ratio of the drawdown to the total tank volume:

$f_{\Delta V} = \frac{\Delta V}{V_t} = (P_c + P_a)\left(\frac{1}{P_{lo} + P_a} - \frac{1}{P_{hi} + P_a}\right)$
This factor is independent of the tank size so that the drawdown can be calculated for any tank, given its total volume, atmospheric pressure, charging pressure, and the limiting pressures established by the pressure switch.
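For example, the drawdown of a typical residential tank can be computed directly from the relationship above; the switch settings and tank size in this sketch are assumed illustrative values.

```python
# Drawdown of a diaphragm pressure tank from the ideal-gas relationships above.
# The switch settings and tank size are typical residential values, used here
# only as an illustration.

P_a  = 14.7   # atmospheric pressure, psi
P_c  = 28.0   # tank charging (pre-charge) pressure, psig
P_lo = 30.0   # pump turn-on pressure, psig
P_hi = 50.0   # pump turn-off pressure, psig
V_t  = 20.0   # total tank volume, US gallons

# Drawdown factor: fraction of the tank volume delivered between P_hi and P_lo.
f_dv = (P_c + P_a) * (1.0 / (P_lo + P_a) - 1.0 / (P_hi + P_a))
drawdown = f_dv * V_t

print(f"drawdown factor ~ {f_dv:.3f}")
print(f"drawdown ~ {drawdown:.2f} gallons out of {V_t} total")
```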
See also
Pressurizer (nuclear power)
References
Bibliography
External links
Plumbing
Pressure vessels | Pressure tank | [
"Physics",
"Chemistry",
"Engineering"
] | 668 | [
"Structural engineering",
"Chemical equipment",
"Plumbing",
"Physical systems",
"Construction",
"Hydraulics",
"Pressure vessels"
] |
15,685,517 | https://en.wikipedia.org/wiki/Homothetic%20center | In geometry, a homothetic center (also called a center of similarity or a center of similitude) is a point from which at least two geometrically similar figures can be seen as a dilation or contraction of one another. If the center is external, the two figures are directly similar to one another; their angles have the same rotational sense. If the center is internal, the two figures are scaled mirror images of one another; their angles have the opposite sense.
General polygons
If two geometric figures possess a homothetic center, they are similar to one another; in other words they must have the same angles at corresponding points and differ only in their relative scaling. The homothetic center and the two figures need not lie in the same plane; they can be related by a projection from the homothetic center.
Homothetic centers may be external or internal. If the center is internal, the two geometric figures are scaled mirror images of one another; in technical language, they have opposite chirality. A clockwise angle in one figure would correspond to a counterclockwise angle in the other. Conversely, if the center is external, the two figures are directly similar to one another; their angles have the same sense.
Circles
Circles are geometrically similar to one another and mirror symmetric. Hence, a pair of circles has both types of homothetic centers, internal and external, unless the centers are equal or the radii are equal; these exceptional cases are treated after general position. These two homothetic centers lie on the line joining the centers of the two given circles, which is called the line of centers (Figure 3). Circles with radius zero can also be included (see exceptional cases), and negative radius can also be used, switching external and internal.
Computing homothetic centers
For a given pair of circles, the internal and external homothetic centers may be found in various ways. In analytic geometry, the internal homothetic center is the weighted average of the centers of the circles, weighted by the opposite circle's radius – the distance from the center of a circle to the inner center is proportional to that radius, so the weighting is proportional to the opposite radius. Denoting the centers of the circles by $c_1, c_2$ and their radii by $r_1, r_2$, and denoting the internal center by $c_i$, this is:

$c_i = \frac{r_2\,c_1 + r_1\,c_2}{r_1 + r_2}$

The external center $c_e$ can be computed by the same equation, but considering one of the radii as negative; either one yields the same equation, which is:

$c_e = \frac{-r_2\,c_1 + r_1\,c_2}{r_1 - r_2}$

More generally, taking both radii with the same sign (both positive or both negative) yields the inner center, while taking the radii with opposite signs (one positive and the other negative) yields the outer center. Note that the equation for the inner center is valid for any values (unless both radii are zero or one is the negative of the other), but the equation for the external center requires that the radii be different, otherwise it involves division by zero.
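A small Python sketch of these formulas, using arbitrary example circles, is given below; the helper function name is invented for the illustration.

```python
# Internal and external homothetic centers of two circles, computed directly
# from the weighted-average formulas above. Example circles are arbitrary.

def homothetic_centers(c1, r1, c2, r2):
    (x1, y1), (x2, y2) = c1, c2
    internal = ((r2 * x1 + r1 * x2) / (r1 + r2),
                (r2 * y1 + r1 * y2) / (r1 + r2))
    if r1 == r2:
        external = None      # equal radii: the external center is at infinity
    else:
        external = ((-r2 * x1 + r1 * x2) / (r1 - r2),
                    (-r2 * y1 + r1 * y2) / (r1 - r2))
    return internal, external

# Example: a unit circle at the origin and a radius-3 circle centered at (4, 0).
inner, outer = homothetic_centers((0, 0), 1.0, (4, 0), 3.0)
print("internal:", inner)   # (1.0, 0.0) -- divides the segment in ratio r1:r2
print("external:", outer)   # (-2.0, 0.0) -- on the line of centers, outside it
```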
In synthetic geometry, two parallel diameters are drawn, one for each circle; these make the same angle with the line of centers. The lines drawn through corresponding endpoints of those radii, which are homologous points, intersect each other and the line of centers at the external homothetic center. Conversely, the lines drawn through one endpoint and the opposite endpoint of its counterpart intersects each other and the line of centers at the internal homothetic center.
As a limiting case of this construction, a line tangent to both circles (a bitangent line) passes through one of the homothetic centers, as it forms right angles with both the corresponding diameters, which are thus parallel; see tangent lines to two circles for details. If the circles fall on opposite sides of the line, it passes through the internal homothetic center, as in the figure above. Conversely, if the circles fall on the same side of the line, it passes through the external homothetic center (not pictured).
Special cases
If the circles have the same radius (but different centers), they have no external homothetic center in the affine plane: in analytic geometry this results in division by zero, while in synthetic geometry the lines are parallel to the line of centers (both for secant lines and the bitangent lines) and thus have no intersection. An external center can be defined in the projective plane to be the point at infinity corresponding to the slope of this line. This is also the limit of the external center if the centers of the circles are fixed and the radii are varied until they are equal.
If the circles have the same center but different radii, both the external and internal centers coincide with the common center of the circles. This can be seen from the analytic formula, and is also the limit of the two homothetic centers as the centers of the two circles are varied until they coincide, holding the radii fixed. There is no line of centers, however, and the synthetic construction fails as the two parallel lines coincide.
If one radius is zero but the other is non-zero (a point and a circle), both the external and internal center coincide with the point (center of the radius zero circle).
If the two circles are identical (same center, same radius), the internal center is their common center, but there is no well-defined external center – properly, the function from the parameter space of two circles in the plane to the external center has a non-removable discontinuity on the locus of identical circles. In the limit of two circles with the same radius but distinct centers moving to having the same center, the external center is the point at infinity corresponding to the slope of the line of centers, which can be anything, so no limit exists for all possible pairs of such circles.
Conversely, if both radii are zero (two points) but the points are distinct, the external center can be defined as the point at infinity corresponding to the slope of the line of centers, but there is no well-defined internal center.
Homologous and antihomologous points
In general, a line passing through a homothetic center intersects each of its circles in two places. Of these four points, two are said to be homologous if radii drawn to them make the same angle with the line connecting the centers; for example, the points in Figure 4. Points which are collinear with respect to the homothetic center but are not homologous are said to be antihomologous; for example, points in Figure 4.
Pairs of antihomologous points lie on a circle
When two rays from the same homothetic center intersect the circles, each set of antihomologous points lie on a circle.
Consider triangles (Figure 4).
They are similar because
since is the homothetic center.
From that similarity, it follows that
By the inscribed angle theorem,
Because is supplementary to ,
In the quadrilateral ,
which means it can be inscribed in a circle.
From the secant theorem, it follows that
In the same way, it can be shown that can be inscribed in a circle and
The proof is similar for the internal homothetic center :
Segment is seen in the same angle from and , which means lie on a circle.
Then from the intersecting chords theorem,
Similarly can be inscribed in a circle and
Relation with the radical axis
Two circles have a radical axis, which is the line of points from which tangents to both circles have equal length. More generally, every point on the radical axis has the property that its powers relative to the circles are equal. The radical axis is always perpendicular to the line of centers, and if two circles intersect, their radical axis is the line joining their points of intersection. For three circles, three radical axes can be defined, one for each pair of circles (); remarkably, these three radical axes intersect at a single point, the radical center. Tangents drawn from the radical center to the three circles would all have equal length.
Any two pairs of antihomologous points can be used to find a point on the radical axis. Consider the two rays emanating from the external homothetic center in Figure 4. These rays intersect the two given circles (green and blue in Figure 4) in two pairs of antihomologous points, for the first ray, and for the second ray. These four points lie on a single circle, that intersects both given circles. By definition, the line is the radical axis of the new circle with the green given circle, whereas the line is the radical axis of the new circle with the blue given circle. These two lines intersect at the point , which is the radical center of the new circle and the two given circles. Therefore, the point also lies on the radical axis of the two given circles.
Tangent circles and antihomologous points
For each pair of antihomologous points of two circles exists a third circle which is tangent to the given ones and touches them at the antihomologous points.
The opposite is also true — every circle which is tangent to two other circles touches them at a pair of antihomologous points.
Let our two circles have centers (Figure 5). is their external homothetic center.
We construct an arbitrary ray from which intersects the two circles in and .
Extend until they intersect in .
It is easily proven that triangles are similar because of the homothety. They are also isosceles because (radius), therefore
Thus is also isosceles and a circle can be constructed with center and radius
This circle is tangent to the two given circles in points .
The proof for the other pair of antihomologous points (), as well as in the case of the internal homothetic center is analogous.
If we construct the tangent circles for every possible pair of antihomologous points we get two families of circles - one for each homothetic center. The family of circles of the external homothetic center is such that every tangent circle either contains both given circles or none (Figure 6). On the other hand, the circles from the other family always contain only one of the given circles (Figure 7).
All circles from a tangent family have a common radical center and it coincides with the homothetic center.
To show this, consider two rays from the homothetic center, intersecting the given circles (Figure 8). Two tangent circles exist which touch the given circles at the antihomologous points. As we've already shown these points lie on a circle and thus the two rays are radical axes for . Then the intersecting point of the two radical axes must also belong to the radical axis of . This point of intersection is the homothetic center .
If the two tangent circle touch collinear pairs of antihomologous point — as in Figure 5 — then because of the homothety
Thus the powers of with respect to the two tangent circles are equal which means that belongs to the radical axis.
Homothetic centers of three circles
Any pair of circles has two centers of similarity, therefore, three circles would have six centers of similarity, two for each distinct pair of given circles. Remarkably, these six points lie on four lines, three points on each line. Here is one way to show this.
Consider the plane of the three circles (Figure 9). Offset each center point perpendicularly to the plane by a distance equal to the corresponding radius. The centers can be offset to either side of the plane. The three offset points define a single plane. In that plane we build three lines through each pair of points. The lines pierce the plane of circles in the points . Since the locus of points which are common to two distinct and non-parallel planes is a line then necessarily these three points lie on such line. From the similarity of triangles we see that
(where are the radii of the circles) and thus is in fact the homothetic center of the corresponding two circles. We can do the same for and .
Repeating the above procedure for different combinations of homothetic centers (in our method this is determined by the side to which we offset the centers of the circles) would yield a total of four lines — three homothetic centers on each line (Figure 10).
Here is yet another way to prove this.
Let be a conjugate pair of circles tangent to all three given circles (Figure 11). By conjugate we imply that both tangent circles belong to the same family with respect to any one of the given pairs of circles. As we've already seen, the radical axis of any two tangent circles from the same family passes through the homothetic center of the two given circles. Since the tangent circles are common for all three pairs of given circles then their homothetic centers all belong to the radical axis of e.g., they lie on a single line.
This property is exploited in Joseph Diaz Gergonne's general solution to Apollonius' problem. Given the three circles, the homothetic centers can be found and thus the radical axis of a pair of solution circles. Of course, there are infinitely many circles with the same radical axis, so additional work is done to find out exactly which two circles are the solution.
See also
Intercept theorem
Similarity (geometry)
Homothetic transformation
Radical axis, radical center
Apollonius' problem
References
Euclidean geometry
Circles
Geometric centers | Homothetic center | [
"Physics",
"Mathematics"
] | 2,746 | [
"Point (geometry)",
"Geometric centers",
"Circles",
"Pi",
"Symmetry"
] |
15,686,668 | https://en.wikipedia.org/wiki/Sokolov%E2%80%93Ternov%20effect | The Sokolov–Ternov effect is the effect of self-polarization of relativistic electrons or positrons moving at high energy in a magnetic field. The self-polarization occurs through the emission of spin-flip synchrotron radiation. The effect was predicted by Igor Ternov and the prediction rigorously justified by Arseny Sokolov using exact solutions to the Dirac equation.
Theory
An electron in a magnetic field can have its spin oriented in the same ("spin up") or in the opposite ("spin down") direction with respect to the direction of the magnetic field (which is assumed to be oriented "up"). The "spin down" state has a higher energy than "spin up" state. The polarization arises due to the fact that the rate of transition through emission of synchrotron radiation to the "spin down" state is slightly greater than the probability of transition to the "spin up" state. As a result, an initially unpolarized beam of high-energy electrons circulating in a storage ring after sufficiently long time will have spins oriented in the direction opposite to the magnetic field. Saturation is not complete and is explicitly described by the formula
$P(t) = P_{\max}\left(1 - e^{-t/\tau}\right)$

where $P_{\max} = \tfrac{8\sqrt{3}}{15} \approx 92.4\%$ is the limiting degree of polarization and $\tau$ is the relaxation time. The relaxation time depends on the mass $m$ and charge $e$ of the electron, the vacuum permittivity $\varepsilon_0$, the speed of light $c$, the Schwinger field $B_S$, the magnetic field $B$, and the electron energy $E$.
The limiting degree of polarization is less than one due to the existence of spin–orbital energy exchange, which allows transitions to the "spin up" state (with probability 25.25 times less than to the "spin down" state).
Typical relaxation time is on the order of minutes and hours. Thus producing a highly polarized beam requires a long enough time and the use of storage rings.
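As an illustration of these time scales, the following sketch evaluates the exponential build-up toward the limiting polarization for an assumed relaxation time of 30 minutes; the value of τ is hypothetical, since real relaxation times depend strongly on the beam energy and bending field.

```python
import math

# Build-up of radiative self-polarization toward the Sokolov-Ternov limit.
# The relaxation time below is an assumed illustrative value.

P_MAX = 8 * math.sqrt(3) / 15      # limiting polarization, ~0.924
tau_minutes = 30.0                  # assumed relaxation time

def polarization(t_minutes):
    """Polarization of an initially unpolarized beam after t minutes."""
    return P_MAX * (1.0 - math.exp(-t_minutes / tau_minutes))

for t in (10, 30, 60, 120):
    print(f"after {t:4d} min: P = {polarization(t):.3f}")

# Time needed to reach 90% of the limiting polarization:
t90 = -tau_minutes * math.log(1 - 0.9)
print(f"90% of the limit is reached after ~{t90:.0f} min")
```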
The self-polarization effect for positrons is similar, with the only difference that positrons will tend to have spins oriented in the direction parallel to the direction of the magnetic field.
Experimental observation
The Sokolov–Ternov effect was experimentally observed in the USSR, France, Germany, United States, Japan, and Switzerland in storage rings with electrons of energy 1–50 GeV.
1971 – Budker Institute of Nuclear Physics (first observation), with the use of 625 MeV storage ring VEPP-2.
1971 – Orsay (France), with the use of 536 MeV АСО storage ring.
1975 – Stanford (USA), with the use of 2.4 GeV SPEAR storage ring.
1980 – DESY, Hamburg (Germany), with the use of 15.2 GeV PETRA.
Applications and generalization
The effect of radiative polarization provides a unique capability for creating polarized beams of high-energy electrons and positrons that can be used for various experiments.
The effect has also been related to the Unruh effect, which, under experimentally achievable conditions, has so far been too small to be observed.
The equilibrium polarization given by the Sokolov and Ternov has corrections when the orbit is not perfectly planar. The formula has been generalized by Derbenev and Kondratenko and others.
Patent
Sokolov A. A. and Ternov I. M. (1973): Award N 131 of 7 August 1973 with priority of 26 June 1963, Byull. Otkr. i Izobr., vol. 47.
See also
Unruh effect
Hawking radiation
Froissart–Stora equation
Notes
Special relativity
Synchrotron radiation
Particle physics
Polarization (waves) | Sokolov–Ternov effect | [
"Physics"
] | 762 | [
"Astrophysics",
"Special relativity",
"Particle physics",
"Theory of relativity",
"Polarization (waves)"
] |
15,687,074 | https://en.wikipedia.org/wiki/Collision%20avoidance%20%28spacecraft%29 | Spacecraft collision avoidance is the implementation and study of processes minimizing the chance of orbiting spacecraft inadvertently colliding with other orbiting objects. The most common subject of spacecraft collision avoidance research and development is for human-made satellites in geocentric orbits. The subject includes procedures designed to prevent the accumulation of space debris in orbit, analytical methods for predicting likely collisions, and avoidance procedures to maneuver offending spacecraft away from danger.
Orbital speed around large bodies (like the Earth) is fast, resulting in significant kinetic energy being involved in on-orbit collisions. For example, at the Low Earth orbital velocity of ~7.8 km/s, two perpendicularly colliding spacecraft would meet at a relative speed of roughly 11 km/s. Almost no known structurally solid materials can withstand such an energetic impact. Most of the satellite would be instantly vaporized by the collision or broken up into myriad pieces ejected with force in all directions. Because of this, any spacecraft colliding with another object in orbit is likely to be critically damaged or completely destroyed.
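A back-of-the-envelope calculation illustrates the energies involved; the 500 kg satellite mass is an assumption chosen purely for illustration:

```python
import math

v_orbit = 7.8e3   # typical LEO orbital speed, m/s
m_sat = 500.0     # assumed satellite mass, kg (illustrative only)

# Two circular orbits crossing at right angles: relative speed is v * sqrt(2)
v_rel = v_orbit * math.sqrt(2.0)
energy = 0.5 * m_sat * v_rel**2   # kinetic energy of the closing velocity

print(f"relative speed: {v_rel / 1e3:.1f} km/s")            # ~11.0 km/s
print(f"impact energy : {energy / 4.184e9:.1f} t TNT eq.")   # roughly 7 tons of TNT
```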
Necessity
A cascading series of collisions between orbiting satellites and other objects could take place if a critical mass of space debris is allowed to accumulate in Earth orbit, dubbed the Kessler syndrome. More collisions would make new smaller fragments which make more collisions and so forth. The resulting positive feedback loop would create off-limits regions in orbit because of risk of collision, and eventually completely block access to space due to the risky ascent through debris-filled orbits during launch.
Not all of the satellites lofted by human-made launch vehicles that remain in Earth orbit today are still functional. As of September 2021, the ESA's Space Debris Office estimated that only slightly over half of the satellites in orbit were still operational.
While the number of satellites launched into orbit is relatively low in comparison to the amount of space available in orbit around the Earth, risky near-misses and occasional collisions happen. The 2009 satellite collision entirely obliterated both spacecraft and resulted in the creation of an estimated 1,000 new pieces of space debris larger than 10 cm (4 in) and many smaller ones.
There are other smaller bits of material in orbit around Earth that could also cause significant damage to satellites. These are relatively small objects such as micrometeoroids, remnants of satellite collisions, or small natural satellites.
These objects seem innocuous, but even tiny particles like stray paint flecks can damage spacecraft; paint flecks have necessitated window replacements after many Space Shuttle flights.
Many companies are launching large satellite constellations to provide high-speed communications and internet access from Low Earth orbit, notably SpaceX's Starlink and Amazon's planned Project Kuiper constellations. Together these systems are planned to use tens of thousands of satellites, which will massively increase the total number of satellites in orbit and exacerbate space debris issues.
Risk-mitigation methods
Several best practices are used to minimize the number of launched objects becoming uncontrollable space debris, varying in technique depending on the object's orbit. Most protective measures ensure that satellites and other artificial objects only remain in their operational orbits for as long as they are functional and controllable. These responsibilities fall on the satellite operator, who is bound by international agreements for how to dispose of orbiting objects.
Suborbital trajectories
Objects launched onto suborbital trajectories such as sounding rocket payloads and ballistic missile warheads do not achieve orbital velocities and fall back to earth at the end of the flight, so they do not require any intentional care on the part of the operator to ensure reentry and disposal.
The Space Shuttle external tank was designed to dispose of itself quickly after launch. The large external tank remained attached to the Space Shuttle orbiter from liftoff until the two were traveling at just below orbital velocity at an altitude of approximately 113 km (70 mi), at which point it detached and followed a ballistic trajectory to a quick reentry. Most of the external tank disintegrated from the heat of reentry, while the orbiter used its reaction control thrusters to complete orbital insertion.
Low Earth orbit
The vast majority of artificial satellites and space stations orbit in Low Earth orbits (LEO), with mean altitudes lower than 2000 km (1200 mi). LEO satellites are close to the thicker parts of the atmosphere where safe reentry is practical because the Delta-v required to decelerate from LEO is small. Most LEO satellites use the last of their remaining onboard station-keeping fuel (used to maintain the satellite's orbit against forces like atmospheric drag that gradually perturb the orbit) to execute de-orbit burns and dispose of themselves.
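As a rough illustration of why the de-orbit Delta-v from LEO is small, the following sketch estimates the burn needed to lower the perigee of a 400 km circular orbit to a 60 km reentry altitude. Both altitudes are illustrative assumptions, and the two-body, Hohmann-style estimate ignores drag:

```python
import math

MU = 3.986004418e14      # Earth's gravitational parameter, m^3/s^2
R_EARTH = 6.371e6        # mean Earth radius, m

def deorbit_dv(alt_circ_m, alt_perigee_m):
    """Delta-v (m/s) to lower the perigee of a circular orbit to a reentry altitude."""
    r1 = R_EARTH + alt_circ_m
    rp = R_EARTH + alt_perigee_m
    v_circ = math.sqrt(MU / r1)                     # speed on the circular orbit
    a_transfer = 0.5 * (r1 + rp)                    # semi-major axis after the burn
    v_after = math.sqrt(MU * (2.0 / r1 - 1.0 / a_transfer))
    return v_circ - v_after

# From a 400 km orbit down to a 60 km perigee: roughly 0.1 km/s,
# a small fraction of the ~7.7 km/s orbital speed.
print(f"{deorbit_dv(400e3, 60e3):.0f} m/s")
```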
The ease of access for de-orbiting LEO satellites at end of life makes it a successful method for controlling the space debris risk in LEO.
Medium Earth orbit and higher
Orbits with mean altitudes higher than LEO (such as Medium Earth orbits (MEO), Geosynchronous orbit/Geostationary orbit (GSO/GEO), and other species) are far from the denser parts of the atmosphere, making full de-orbit burns significantly more impractical. Few satellite designs have sufficient fuel margins to be able to afford such a maneuver at the end of their lives.
Satellites at altitudes towards the lower bound of MEO can follow the "25-year rule", decelerating with onboard propulsion so that they fall out of orbit within 25 years, but this provision is only allowed if satellite operators can prove by statistical analysis that there is less than a 1/10,000 chance that the atmospheric reentry will cause human injury or property damage. Satellites disposed of in this fashion reenter the atmosphere over an area of the South Pacific Ocean far from inhabited areas called the spacecraft cemetery.
Graveyard orbits
Spacecraft orbiting at higher altitudes between LEO and High Earth orbit (HEO), most commonly in the highly specific and crowded GSO/GEO, are too far from the atmosphere to make use of the "25-year rule". GSO and GEO require that the orbital plane be almost perfectly equatorial and the altitude be held as close as possible to a perfectly circular 35,786 km (22,236 mi), which means that space is limited and satellites cannot be allowed to stay past their useful life. Instead of decelerating for reentry, most satellites at these altitudes accelerate slightly into higher graveyard orbits, where they remain permanently out of the way of operational satellites.
Empty rocket stages remaining in orbit
Historically, many multi-stage launcher designs completely expended their fuel to achieve orbit and left their spent rocket stages in orbit, as in the former Soviet Zenit family of rockets. These upper stages are large artificial satellites, which depending on the orbit can take many years to reenter.
Most modern designs include sufficient fuel margins for de-orbit burns after injecting payload into orbit. SpaceX's Falcon 9 is a launch vehicle designed to minimize the effect of its upper stage on space debris. The rocket is composed of two stages, the first of which is suborbital. It reenters within minutes of launch, either intentionally using fuel reserved for stage recovery to land for reuse or is left to continue on its ballistic trajectory and disintegrate upon reentry into the atmosphere.
Falcon 9 second stages are dealt with using different techniques depending on the orbit. For Low Earth orbits, the second stage uses remaining fuel to perform a de-orbit burn and disintegrates in the atmosphere. Stages stranded in higher-energy orbits, like Geostationary transfer orbits (GTO) and Geostationary orbit (GEO), generally don't have sufficient fuel to de-orbit themselves. GTO trajectories are designed so that the second stage's orbit naturally decays and reenters the atmosphere after a few months, while stages from missions targeting direct insertion into GEO remain in orbit much longer.
Collision prediction methods
Most impact risk predictions are calculated using databases of orbiting objects with orbit parameters like position and velocity measured by ground-based observations. The United States Department of Defense Space Surveillance Network maintains a catalog of all known orbiting objects approximately equal to a softball in size or larger. Information on smaller articles of space debris is less accurate or unknown.
Once the exact orbit of an object is accurately known, the DoD's SSN publishes known parameters for public analysis on the DoD's space-track.org and NASA's Space Science Data Coordinated Archive. The object's orbit can then be projected into the future, estimating where it will be located and the chance it will have a close encounter with another orbiting object. Long-term orbit projections have large error bars due to complicated gravitational effects that gradually perturb the orbit (akin to those of the Three-body problem) and the measurement errors of ground tracking equipment. For these reasons, methods for more precise measurement and estimation are an active field of research.
NASA conducts orbital projections and assesses collision risk for known objects larger than 4 inches (10 cm). For critical assets like the International Space Station, evaluations are made for the risk that any object will traverse within a rectangular region half a mile (1.25 km) above/below and 15 miles (25 km) ahead/behind in orbit and to either side of the spacecraft. This high-risk zone is known as the "pizza box" because of the shape it resembles.
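A minimal sketch of the screening idea: flag any object whose predicted position relative to the station (radial, along-track, cross-track) enters the rectangular region described above. The function name and interface are hypothetical illustrations, not an actual NASA tool:

```python
def inside_pizza_box(rel_radial_km, rel_along_km, rel_cross_km,
                     half_height_km=1.25, half_length_km=25.0, half_width_km=25.0):
    """Check whether a relative position falls inside the rectangular screening volume."""
    return (abs(rel_radial_km) <= half_height_km
            and abs(rel_along_km) <= half_length_km
            and abs(rel_cross_km) <= half_width_km)

# A predicted debris pass 0.8 km below and 12 km behind the station, 3 km to one side
print(inside_pizza_box(-0.8, -12.0, 3.0))   # True -> flag for closer analysis
```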
Collision avoidance methods
Current avoidance techniques rely on slightly changing the orbit to minimize collision risk and then returning the spacecraft to its previous orbit after the risk event has passed. The exact method used to make orbital adjustments differs based on what controls are available on the spacecraft. Collision avoidance maneuvers are sometimes also called Debris Avoidance Maneuvers (DAMs) when the offending object is an article of space debris.
Spacecraft with onboard propulsion
NASA uses avoidance maneuvers if the collision risk is identified sufficiently in advance and the risk is high. NASA policy for crewed spacecraft, which all have onboard propulsion, like the Space Shuttle and the International Space Station (agreed upon by all international partners) requires planning for avoidance maneuvers if the probability of collision is
>1/100,000 and the maneuver wouldn't conflict with mission objectives
>1/10,000 and the maneuver wouldn't further endanger the crew
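The two thresholds above amount to a simple decision rule; a minimal sketch follows (hypothetical function, not NASA flight software):

```python
def plan_avoidance_maneuver(p_collision, conflicts_with_mission, endangers_crew):
    """Apply the two NASA planning thresholds for crewed spacecraft."""
    if p_collision > 1e-4 and not endangers_crew:
        return True   # higher threshold: maneuver unless it adds crew risk
    if p_collision > 1e-5 and not conflicts_with_mission:
        return True   # lower threshold: maneuver if mission objectives allow
    return False

print(plan_avoidance_maneuver(3e-5, conflicts_with_mission=False, endangers_crew=False))  # True
```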
As of August 2020, the ISS had conducted 27 collision avoidance maneuvers since its initial launch in 1999, and the rate is trending upwards with time. The class of debris most dangerous to the US Orbital Segment is debris between 1 and 10 cm in size. The population of debris in this size range is significant and difficult to track accurately with current methods, meriting further research.
These avoidance maneuvers are almost always conducted by the firing of onboard Reaction control thrusters, although some other satellite and spacecraft orientation systems like Magnetorquers, Reaction wheels, and Control moment gyroscopes may be involved. The ISS can also use the main engines of a docked cargo spacecraft – usually a Progress spacecraft or Automated Transfer Vehicle. The maneuvers slightly change the orbital trajectory and are usually conducted hours before the risk event to allow the effects of the orbital change to take effect.
When two satellite operators are notified of a potential collision, one or both operators may decide to maneuver their satellite, e.g., ESA and SpaceX in 2019.
Recent research has developed algorithms to aid collision avoidance efforts within large satellite constellations, although it is unknown whether such research has been implemented in any active constellation GNC.
Docking aborts
Another use of a collision avoidance maneuver is to abort an automated docking, and such a procedure is built into the software that controls the docking of Automated Transfer Vehicles to the ISS. This can be initiated by the crew aboard the space station, as an emergency override, in the event of a problem during the docking. This maneuver was demonstrated shortly after the launch of the first ATV, Jules Verne, and subsequently during demonstration approaches to the station which it conducted in late March 2008.
Spacecraft without onboard propulsion
Most human-launched satellites without onboard propulsion are small CubeSats which rely on alternative devices for orientation control. At the scale of small objects like CubeSats, forces related to the large relative surface area in proportion to mass become significant. CubeSats are often launched into Low Earth orbit, where the atmosphere still provides a small amount of aerodynamic drag.
The aerodynamic drag on small satellites in Low Earth orbit can be used to change orbits slightly to avoid debris collisions by changing the surface area exposed to atmospheric drag, alternating between low-drag and high-drag configurations to control deceleration.
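A rough sketch of why changing the exposed area matters: the drag deceleration scales linearly with frontal area. The atmospheric density, drag coefficient, and CubeSat dimensions below are illustrative assumptions, not mission values:

```python
def drag_acceleration(area_m2, mass_kg, velocity_ms=7700.0, rho=1e-12, cd=2.2):
    """Aerodynamic deceleration a = 0.5 * rho * Cd * A * v^2 / m."""
    return 0.5 * rho * cd * area_m2 * velocity_ms**2 / mass_kg

# A 3U CubeSat (~4 kg): minimum-drag face-on attitude vs. a deployed high-drag attitude
low  = drag_acceleration(area_m2=0.01, mass_kg=4.0)   # 10 cm x 10 cm face forward
high = drag_acceleration(area_m2=0.03, mass_kg=4.0)   # 10 cm x 30 cm face forward
print(f"low-drag : {low:.2e} m/s^2")
print(f"high-drag: {high:.2e} m/s^2  (~{high/low:.0f}x more deceleration)")
```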
Complicating factors
Attempts to alleviate potential collisions are complicated by factors including if
at least one of the offending objects lacks remote control capability due to being defunct
at least one of the offending objects is a natural satellite, like an asteroid
the risk event isn't predicted with sufficient time to act
All these occurrences limit strategic options for collision risk reduction in different ways. Very little can prevent the projected collision if neither object has control capability. If only one of the objects is an operational satellite, it must perform the entire avoidance maneuver alone, significantly cutting into or entirely using up its remaining fuel reserves. The satellite may also have insufficient fuel to complete the maneuver properly, reducing its effectiveness.
Collision avoidance maneuvers require significant planning and execution time, which can be an issue if the risk isn't predicted sufficiently in advance. Spacecraft propulsion is often weak, relying on long burns to change their orbits, and the velocity change often requires a meaningful fraction of a complete orbit to produce the required effect.
For example, maneuvers commonly conducted by the International Space Station to avoid collisions often require roughly 150-second burns and cause significant disturbances to crew operations because of the mandatory slow reconfiguration of the station's solar panels to avoid damage from the propulsion devices. Roughly speaking, the estimated quickest reaction time of the ISS from normal operation is about 5 hours and 20 minutes, accounting for the ~3-hour setup period for station reconfiguration and the ~2 hours of post-burn lead time needed for the velocity change to take effect.
Effects on launch windows
Collision avoidance is a concern during spaceflight launch windows. Typically, a Collision On Launch Assessment (COLA) needs to be performed and approved before launching a satellite. A launch window is said to have a COLA blackout period during intervals when the vehicle cannot lift off to ensure its trajectory does not take it too close to another object already in space.
References
External links
Interactive debris visualization by stuffin.space
See also
Space debris
Collision avoidance
Space traffic management
Orbital maneuvers | Collision avoidance (spacecraft) | [
"Technology"
] | 2,940 | [
"Satellite collisions",
"Space debris"
] |
15,687,473 | https://en.wikipedia.org/wiki/Pharmaceutical%20care | Pharmaceutical care is a pharmacy practice model developed in the 1990s that describes patient-centered medication management services performed by pharmacists.
Medication Management Service
There are many definitions for "medication management service." The following definition was proposed in the 2012 textbook Pharmaceutical Care Practice: The Patient-Centered Approach to Medication Management: "Medication management services are the professional activities needed to meet the standard of care which ensures each patient's medications (whether they are prescription, nonprescription, alternative, traditional, vitamins, or nutritional supplements) are individually assessed to determine that each medication is appropriate for the medical condition being treated, that the medication is being effective and achieving the goals established, that the medication is safe for the patient in the presence of the co-morbidities and other medications the patient may be taking, and the patient is able and willing to take the medications as intended."
History
Pharmaceutical care as a pharmacy practice model developed out of the need to re-professionalize pharmacy.
It is thought the first mention of pharmaceutical care came from Dr. Donald Brodie's 1973 lecture shared at The Ninth Annual Rho Chi Lecture in Boston, MA, USA. Dr. Brodie defined pharmaceutical care as "the care a given patient requires and receives which assures safe and rational drug usage." It was then popularized in 1990 after the American Journal of Health-System Pharmacy (AJHP) published an article by Drs. Charles Hepler and Linda Strand entitled 'Opportunities and responsibilities in pharmaceutical care'.
The concept was endorsed by the American Society of Health-System Pharmacists (ASHP) and the American Association of Colleges of Pharmacy (AACP) in 1991. In 1992, the American Pharmacists Association (APhA) followed suit.
In 1993, ASHP issued a statement in response to members seeking a standardized definition of pharmaceutical care. In this statement they defined pharmaceutical care as "the direct, responsible provision of medication-related care for the purpose of achieving definite outcomes that improve a patient’s quality of life."
In 1998, the textbook Pharmaceutical Care Practice: The Patient-Centered Approach to Medication Management was first published. This included a definition of pharmaceutical care informed by the research of Drs. Robert Cipolle, Linda Strand, and Peter Morley that spanned 5 years and involved 20 different community pharmacy practice sites and 54 practicing pharmacists.
The American Medical Association (AMA) approved relevant reimbursement codes in 2004.
In 2013, a European organization, the Pharmaceutical Care Network Europe (PCNE), created a new definition that could satisfy experts from a multitude of countries. After a review of existing definitions, a number of options were presented to the participants and in a one-day meeting consensus on a definition was reached: Pharmaceutical Care is the pharmacologist/pharmacist's contribution to the care of individuals in order to optimize medicines use and improve health outcomes.
Components
Philosophy of Practice
"The philosophy of pharmaceutical care practice consists of:
a description of the social need for the practice,
a clear statement of individual practitioner responsibilities to meet this social need,
the expectation to be patient-centered, and
the requirement to function within the caring paradigm.
A philosophy of practice is expected when working with medicine and nursing and is practiced by all health care professionals."
Patient Care Process
The patient care process is a cognitive process in which the drug-related needs of patients are approached systematically and comprehensively.
"The patient care process, which must be consistent with the patient care processes of the other health care providers, consists of:
an assessment of the patient's drug-related needs,
a care plan to meet the specific needs of the patient, and
a follow-up evaluation to determine the impact of the decisions made and actions taken."
A principle of the patient care process is patient-centeredness.
The patient care process was initially called the "Pharmacists Workup of Drug Therapy" and served as a means to document drug therapy decisions.
Practice Management System
"The practice management system includes all of the resources required to bring the service to the patient. Physical space, the appointment system, documentation, reporting, evaluation, payment for the service, and much more are included in the management of a service."
Goal
The ultimate goal of pharmaceutical care (optimize medicines use and improving health outcomes) exists in all practice settings and in all cultures where medicines are used. It involves two major functions: identifying potential and manifest problems in the pharmacotherapy (drug therapy problems, or DTPs), and then resolving the problems and preventing the potential problems from becoming real for the patient and his therapy outcomes. This should preferably be done together with other health care professionals and the patient through a review of the medication (and diseases) and subsequent counselling and discussions.
See also
ATC codes Anatomical Therapeutic Chemical Classification System
Classification of Pharmaco-Therapeutic Referrals
History of pharmacy
ICD-10 International Classification of Diseases
ICPC-2 PLUS
International Classification of Primary Care ICPC-2
Pharmacists
Pharmacotherapy
Referral (medicine)
Drug Therapy Problems
References
Bibliography
Robert J. Cipolle, Linda M. Strand, Peter C. Morley. Pharmaceutical Care Practice: The Patient-Centered Approach to Medication Management Services. McGraw-Hill 2012.
Álvarez de Toledo F, et al. Atención farmacéutica en personas que han sufrido episodios coronarios agudos (Estudio TOMCOR). Rev Esp Salud Pública. 2001; 75:375-88.
Pastor Sánchez R, Alberola Gómez-Escolar C, Álvarez de Toledo Saavedra F, Fernández de Cano Martín N, Solá Uthurry N. Classification of Pharmaco-Terapeutic Referrals (CPR). MEDAFAR. Madrid: IMC; 2008.
Álvarez de Toledo Saavedra F, Fernández de Cano Martín N, coordinadores. MEDAFAR Asma. Madrid: IMC; 2007.
Álvarez de Toledo Saavedra F, Fernández de Cano Martín N, coordinadores. MEDAFAR Hipertensión. Madrid: IMC; 2007.
External links
Fundación Pharmaceutical Care España
Pharmaceutical Care - Facebook Page
Pharmacy | Pharmaceutical care | [
"Chemistry"
] | 1,274 | [
"Pharmacology",
"Pharmacy"
] |
15,690,323 | https://en.wikipedia.org/wiki/Transition%20temperature | In crystallography, the transition temperature is the temperature at which a material changes from one crystal state (allotrope) to another. More formally, it is the temperature at which two crystalline forms of a substance can co-exist in equilibrium. For example, when rhombic sulfur is heated above 95.6 °C, it changes form into monoclinic sulfur; when cooled
below 95.6 °C, it reverts to rhombic sulfur. At 95.6 °C the two forms can co-exist. Another example is tin, which transitions from a cubic crystal below 13.2 °C to a tetragonal crystal above that temperature.
In the case of ferroelectric or ferromagnetic crystals, a transition temperature may be known as the Curie temperature.
See also
Crystal system
Crystallography
Threshold temperatures
References | Transition temperature | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 174 | [
"Thermodynamics stubs",
"Physical phenomena",
"Phase transitions",
"Materials science stubs",
"Threshold temperatures",
"Materials science",
"Crystallography stubs",
"Crystallography",
"Condensed matter physics",
"Thermodynamics",
"Physical chemistry stubs"
] |
23,797,849 | https://en.wikipedia.org/wiki/Laminar%20sublayer | The laminar sublayer, also called the viscous sublayer, is the region of a mainly-turbulent flow that is near a no-slip boundary and in which viscous shear stresses are important. As such, it is a type of boundary layer. The existence of the viscous sublayer can be understood in that the flow velocity decreases towards the no-slip boundary.
The laminar sublayer is important for river-bed ecology: below the laminar-turbulent interface, the flow is stratified, but above it, it rapidly becomes well-mixed. This threshold can be important in providing homes and feeding grounds for benthic organisms.
Whether the roughness due to the bed sediment or other factors are smaller or larger than this sublayer has an important bearing in hydraulics and sediment transport. Flow is defined as hydraulically rough if the roughness elements are larger than the laminar sublayer (thereby perturbing the flow), and as hydraulically smooth if they are smaller than the laminar sublayer (and therefore ignorable by the main body of the flow).
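A hedged sketch of the smooth/rough classification described above, expressed through the roughness Reynolds number. The thresholds of about 5 and 70 and the sublayer-thickness estimate of roughly 5ν/u* are commonly quoted engineering values assumed here, not figures stated in this article:

```python
def roughness_regime(u_star, k_s, nu=1.0e-6):
    """Classify flow using the roughness Reynolds number k+ = u* * k_s / nu.
    u_star: shear velocity (m/s), k_s: roughness height (m), nu: kinematic viscosity (m^2/s)."""
    k_plus = u_star * k_s / nu
    delta_nu = 5.0 * nu / u_star        # nominal viscous (laminar) sublayer thickness, m
    if k_plus < 5.0:
        regime = "hydraulically smooth"  # roughness buried in the sublayer
    elif k_plus > 70.0:
        regime = "hydraulically rough"   # roughness protrudes and perturbs the flow
    else:
        regime = "transitional"
    return k_plus, delta_nu, regime

# Sand-grain roughness of 0.3 mm in a river with shear velocity 0.05 m/s
print(roughness_regime(u_star=0.05, k_s=3e-4))
```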
References
Fluid mechanics | Laminar sublayer | [
"Chemistry",
"Engineering"
] | 231 | [
"Civil engineering",
"Fluid mechanics",
"Fluid dynamics stubs",
"Fluid dynamics"
] |
23,798,069 | https://en.wikipedia.org/wiki/Steam%20hammer | A steam hammer, also called a drop hammer, is an industrial power hammer driven by steam that is used for tasks such as shaping forgings and driving piles. Typically the hammer is attached to a piston that slides within a fixed cylinder, but in some designs the hammer is attached to a cylinder that slides along a fixed piston.
The concept of the steam hammer was described by James Watt in 1784, but it was not until 1840 that the first working steam hammer was built to meet the needs of forging increasingly large iron or steel components. In 1843 there was an acrimonious dispute between François Bourdon of France and James Nasmyth of Britain over who had invented the machine. Bourdon had built the first working machine, but Nasmyth claimed it was built from a copy of his design.
Steam hammers proved to be invaluable in many industrial processes. Technical improvements gave greater control over the force delivered, greater longevity, greater efficiency and greater power. A steam hammer built in 1891 by the Bethlehem Iron Company delivered a 125-ton blow. In the 20th century steam hammers were gradually displaced in forging by mechanical and hydraulic presses, but some are still in use. Compressed air power hammers, descendants of the early steam hammers, are still manufactured.
Mechanism
A single-acting steam hammer is raised by the pressure of steam injected into the lower part of a cylinder and drops under gravity when the pressure is released. With the more common double-acting steam hammer, steam is also used to push the ram down, giving a more powerful blow at the die.
The weight of the ram may range from .
The piece being worked is placed between a bottom die resting on an anvil block and a top die attached to the ram (hammer).
Hammers are subject to repeated concussion, which could cause fracturing of cast iron components. The early hammers were therefore made from a number of parts bolted together.
This made it cheaper to replace broken parts, and also gave it a degree of elasticity that made fractures less likely. A steam hammer may have one or two supporting frames. The single frame design lets the operator move around the dies more easily, while the double frame can support a more powerful hammer. The frame(s) and the anvil block are mounted on wooden beams that protect the concrete foundations by absorbing the shock.
Deep foundations are needed, but a large steam drop hammer will still shake the building that holds it.
This may be solved with a counterblow steam hammer, in which two converging rams drive the top and bottom dies together. The upper ram is driven down and the lower ram is pulled or driven up. These hammers produce a large impact and can make large forgings.
They can be installed with smaller foundations than anvil hammers of similar force.
Counterblow hammers are not often used in the United States, but are common in Europe.
With some early steam hammers an operator moved the valves by hand, controlling each blow. With others the valve action was automatic, allowing for rapid repetitive hammering. Automatic hammers could give an elastic blow, where steam cushioned the piston towards the end of the down stroke, or a dead blow with no cushioning. The elastic blow gave a quicker rate of hammering, but less force than the dead blow.
Machines were built that could run in either mode according to the job requirement.
The force of the blow could be controlled by varying the amount of steam introduced to cushion the blow.
A modern air/steam hammer can deliver up to 300 blows per minute.
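A rough illustration of the single- versus double-acting distinction described above: steam acting on the down-stroke adds work to the ram's gravitational potential energy, which is why a double-acting hammer of the same ram weight strikes harder. All numbers below are hypothetical:

```python
G = 9.81  # gravitational acceleration, m/s^2

def blow_energy(ram_mass_kg, stroke_m, steam_pressure_pa=0.0, piston_area_m2=0.0):
    """Approximate energy per blow: gravity plus (for a double-acting hammer)
    the work done by steam pushing the ram down over the stroke."""
    return (ram_mass_kg * G + steam_pressure_pa * piston_area_m2) * stroke_m

# Hypothetical 2-tonne ram with a 1 m stroke
single_acting = blow_energy(2000.0, 1.0)
double_acting = blow_energy(2000.0, 1.0, steam_pressure_pa=5e5, piston_area_m2=0.05)
print(f"single-acting: {single_acting / 1e3:.0f} kJ, double-acting: {double_acting / 1e3:.0f} kJ")
```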
History
Concept
The possibility of a steam hammer was noted by James Watt (1736–1819) in his 28 April 1784 patent for an improved steam engine.
Watt described "Heavy Hammers or Stampers, for forging or stamping iron, copper, or other metals, or other matters without the intervention of rotative motions or wheels, by fixing the Hammer or Stamper to be so worked, either directly to the piston or piston rod of the engine."
Watt's design had the cylinder at one end of a wooden beam and the hammer at the other.
The hammer did not move vertically, but in the arc of a circle.
On 6 June 1806 W. Deverell, engineer of Surrey, filed a patent for a steam-powered hammer or stamper.
The hammer would be welded to a piston rod contained in a cylinder. Steam from a boiler would be let in under the piston, raising it and compressing the air above it. The steam would then be released and the compressed air would force the piston down.
In August 1827 John Hague was awarded a patent for a method of working cranes and tilt-hammers driven by a piston in an oscillating cylinder where air power supplied the motive force. A partial vacuum was made in one end of a long cylinder by an air pump worked by a steam engine or some other power source, and atmospheric pressure drove the piston into that end of the cylinder. When a valve was reversed, the vacuum was formed in the other end and the piston forced in the opposite direction.
Hague made a hammer to this design for planishing frying pans. Many years later, when discussing the advantages of air over steam for delivering power, it was recalled that Hague's air hammer "worked with such an extraordinary rapidity that it was impossible to see where the hammer was in working, and the effect seemed more like giving one continuous pressure." However, it was not possible to regulate the force of the blows.
Invention
It seems probable that the Scottish Engineer James Nasmyth (1808–1890) and his French counterpart François Bourdon (1797–1865) reinvented the steam hammer independently in 1839, both trying to solve the same problem of forging shafts and cranks for the increasingly large steam engines used in locomotives and paddle boats.
In Nasmyth's 1883 "autobiography", written by Samuel Smiles, he described how the need arose for a paddle shaft for Isambard Kingdom Brunel's new transatlantic steamer SS Great Britain, with a diameter shaft, larger than any that had been previously forged. He came up with his steam hammer design, making a sketch dated 24 November 1839, but the immediate need disappeared when the practicality of screw propellers was demonstrated and the Great Britain was converted to that design.
Nasmyth showed his design to all visitors.
Bourdon came up with the idea of what he called a "Pilon" in 1839 and made detailed drawings of his design, which he also showed to all engineers who visited the works at Le Creusot owned by the brothers Adolphe and Eugène Schneider.
However, the Schneiders hesitated to build Bourdon's radical new machine.
Bourdon and Eugène Schneider visited the Nasmyth works in England in the middle of 1840, where they were shown Nasmyth's sketch.
This confirmed the feasibility of the concept to Schneider.
In 1840 Bourdon built the first steam hammer in the world at the Schneider & Cie works at Le Creusot.
It weighed and lifted to . The Schneiders patented the design in 1841.
Nasmyth visited Le Creusot in April 1842. By his account, Bourdon took him to the forge department so he might, as he said, "see his own child". Nasmyth said "there it was, in truth–a thumping child of my brain!"
After returning from France in 1842 Nasmyth built his first steam hammer in his Patricroft foundry in Manchester, England, adjacent to the (then new) Liverpool and Manchester Railway and the Bridgewater Canal.
In 1843 a dispute broke out between Nasmyth and Bourdon over priority of invention of the steam hammer. Nasmyth, an excellent publicist, managed to convince many people that he was the first.
Early improvements
Nasmyth's first steam hammer, described in his patent of 9 December 1842, was built for the Low Moor Works at Bradford.
They rejected the machine, but on 18 August 1843 accepted an improved version with a self-acting gear.
Robert Wilson (1803–1882), who had also invented the screw propeller and was manager of Nasmyth's Bridgewater works, invented the self-acting motion that made it possible to adjust the force of the blow delivered by the hammer – a critically important improvement.
An early writer said of Wilson's gear, "... I would be prouder to say that I was the inventor of that motion, than to say I had commanded a regiment at Waterloo..."
Nasmyth's steam hammers could now vary the force of the blow across a wide range.
Nasmyth was fond of breaking an egg placed in a wineglass without breaking the glass, followed by a blow that shook the building.
By 1868 engineers had introduced further improvements to the original design. John Condie's steam hammer, built for Fulton in Glasgow, had a stationary piston and a moving cylinder to which the hammer was attached. The piston was hollow, and was used to deliver steam to the cylinder and then remove it.
The hammer weighed 6.5 tons with a stroke of .
Condie steam hammers were used to forge the shafts of Isambard Kingdom Brunel's SS Great Eastern.
A high-speed compressed-air hammer was described in The Mechanics' Magazine in 1865,
a variant of the steam hammer for use where steam power was not available or a very dry environment was required.
The Bowling Ironworks steam hammers had the steam cylinder bolted to the back of the hammer, thus reducing the height of the machine.
These were designed by John Charles Pearce, who took out a patent for his steam hammer design several years before Nasmyth's patent expired.
Marie-Joseph Farcot of Paris proposed a number of improvements including an arrangement so the steam acted from above, increasing the striking force, improved valve arrangements and the use of springs and material to absorb the shock and prevent breakage.
John Ramsbottom invented a duplex hammer, with two rams moving horizontally towards a forging placed between them.
Using the same principles of operation, Nasmyth developed a steam-powered pile-driving machine. At its first use at Devonport, a dramatic contest was carried out. His engine drove a pile in four and half minutes compared with the twelve hours that the conventional method required.
It was soon found that a hammer with a relatively short fall height was more effective than a taller machine. The shorter machine could deliver many more blows in a given time, driving the pile faster even though each blow was smaller. It also caused less damage to the pile.
Riveting machines designed by Garforth and Cook were based on the steam hammer.
The catalog for the Great Exhibition held in London in 1851 said of Garforth's design, "With this machine, one man and three boys can rivet with perfect ease, and in the firmest manner, at the rate of six rivets per minute, or three hundred and sixty per hour."
Other variants included crushers to help extract iron ore from quartz and a hammer to drive holes in the rock of a quarry to hold gunpowder charges.
An 1883 book on modern steam practice said
Later development
Schneider & Co. built 110 steam hammers between 1843 and 1867 with different sizes and strike rates, but trending towards ever larger machines to handle the demands of large cannon, engine shafts and armor plate, with steel increasingly used in place of wrought iron.
In 1861 the "Fritz" steam hammer came into operation at the Krupp works in Essen, Germany.
With a 50-ton blow, for many years it was the most powerful in the world.
There is a story that the Fritz steam hammer took its name from a machinist named Fritz whom Alfred Krupp presented to the Emperor William when he visited the works in 1877. Krupp told the emperor that Fritz had such perfect control of the machine that he could let the hammer drop without harming an object placed on the center of the block. The Emperor immediately put his watch, which was studded with diamonds, on the block and motioned Fritz to start the hammer. When the machinist hesitated, Krupp told him "Fritz let fly!" He did as he was told, the watch was unharmed, and the emperor gave Fritz the watch as a gift. Krupp had the words "Fritz let fly!" engraved on the hammer.
The Schneiders eventually saw a need for a hammer of colossal proportions.
The Creusot steam hammer was a giant steam hammer built in 1877 by Schneider and Co. in the French industrial town of Le Creusot.
With the ability to deliver a blow of up to 100 tons, the Creusot hammer was the largest and most powerful in the world.
A wooden replica was built for the Exposition Universelle (1878) in Paris.
In 1891 the Bethlehem Iron Company of the United States purchased patent rights from Schneider and built a steam hammer of almost identical design but capable of delivering a 125-ton blow.
Eventually the great steam hammers became obsolete, displaced by hydraulic and mechanical presses. The presses applied force slowly and at a uniform rate, ensuring that the internal structure of the forging was uniform, without hidden internal flaws. They were also cheaper to operate, not requiring steam to be blown off, and much cheaper to build, not requiring huge strong foundations.
The 1877 Creusot steam hammer now stands as a monument in the Creusot town square. An original Nasmyth hammer stands facing his foundry buildings (now a "business park"). A larger Nasmyth & Wilson steam hammer stands in the campus of the University of Bolton.
Steam hammers continue to be used for driving piles into the ground.
Steam supplied by a circulating steam generator is more efficient than air.
However, today compressed air is often used rather than steam.
As of 2013 manufacturers continued to sell air/steam pile-driving hammers.
Forging services suppliers also continue to use steam hammers of varying sizes based on classical designs.
See also
Trip hammer
Power hammer
Frohnauer Hammer Mill
References
Citations
Sources
External links
Video of steam hammer in modern use at Scot Forge
Hammers
Metalworking tools
Industrial machinery
Scottish inventions
French inventions
Steam power | Steam hammer | [
"Physics",
"Engineering"
] | 2,900 | [
"Steam hammers",
"Physical quantities",
"Steam power",
"Power (physics)",
"Industrial machinery"
] |
23,800,341 | https://en.wikipedia.org/wiki/Compton%20telescope | A Compton telescope (also known as Compton camera or Compton imager) is a gamma-ray detector which utilizes Compton scattering to determine the origin of the observed gamma rays.
Compton cameras are usually applied to detect gamma rays in the energy range where Compton scattering is the dominating interaction process, from a few hundred keV to several MeV. They are applied in fields such as astrophysics, nuclear medicine, and nuclear threat detection.
In astrophysics, the most famous Compton telescope was COMPTEL aboard the Compton Gamma Ray Observatory, which pioneered the observation of the gamma-ray sky in the energy range between 0.75 and 30 MeV. A potential successor is NCT – the Nuclear Compton Telescope.
References
Telescopes
Astrophysics | Compton telescope | [
"Physics",
"Astronomy"
] | 146 | [
"Telescopes",
"Astronomy stubs",
"Astrophysics",
"Astrophysics stubs",
"Astronomical instruments",
"Astronomical sub-disciplines"
] |
23,801,862 | https://en.wikipedia.org/wiki/ISO%2019439 | ISO 19439:2006 Enterprise integration—Framework for enterprise modelling, is an international standard for enterprise modelling and enterprise integration developed by the International Organization for Standardization, based on CIMOSA and GERAM.
Overview
The ISO 19439 framework is intended to provide a "unified conceptual basis for model-based enterprise engineering that enables consistency, convergence and interoperability of the various modelling methodologies and supporting tools. The framework does not encompass methodological processes; it is neutral in this regard".
This standard specifies a framework that "serves as a common basis to identify and coordinate standards development for modelling of enterprises, including, but not restricted to, computer integrated manufacturing. It also serves as the basis for further standards for the development of models that will be computer-enactable and enable business process model-based decision support leading to model-based operation, monitoring and control".
Dimensions
According to David Shorter (2004), ISO 19439 is similar to CIMOSA and defines three dimensions for enterprise modelling, sketched in the example after the list:
Model phase: "The enterprise model phases are based on the idea that enterprise models have a life cycle that is related to the life cycle of the entity being modeled. The phases defined in the standard are: Domain Identification, Concept Definition, Requirements Definition, Design Specification, Implementation Description, Domain Operation, Decommission Definition."
View dimension: "This is based on the idea that both enterprise modelers and users filter their observations of the real world by particular views. The predefined views are: Function View, Information View, Resource View, Organization View/Decision View."
Genericity dimension: "This defines the dimension from general concepts to particular models. The standard defines three levels of genericity: Generic Level, Partial Level, Particular Level"
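A minimal sketch of the three dimensions as data structures; the enumeration names follow the lists above, and this is an illustration only, not a normative encoding from the standard:

```python
from enum import Enum

class ModelPhase(Enum):
    DOMAIN_IDENTIFICATION = 1
    CONCEPT_DEFINITION = 2
    REQUIREMENTS_DEFINITION = 3
    DESIGN_SPECIFICATION = 4
    IMPLEMENTATION_DESCRIPTION = 5
    DOMAIN_OPERATION = 6
    DECOMMISSION_DEFINITION = 7

class View(Enum):
    FUNCTION = "Function"
    INFORMATION = "Information"
    RESOURCE = "Resource"
    ORGANISATION_DECISION = "Organization/Decision"

class Genericity(Enum):
    GENERIC = "Generic"
    PARTIAL = "Partial"
    PARTICULAR = "Particular"

# A model element is positioned by one value from each of the three dimensions
element = (ModelPhase.REQUIREMENTS_DEFINITION, View.FUNCTION, Genericity.PARTIAL)
print(element)
```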
See also
Enterprise architecture framework
List of ISO standards
Generalised Enterprise Reference Architecture and Methodology
References
External links
ISO 19439 description at iso.org
19439
Enterprise modelling
Modeling languages | ISO 19439 | [
"Engineering"
] | 386 | [
"Systems engineering",
"Enterprise modelling"
] |
23,802,163 | https://en.wikipedia.org/wiki/Electrical%20impedance%20myography | Electrical impedance myography, or EIM, is a non-invasive technique for the assessment of muscle health that is based on the measurement of the electrical impedance characteristics of individual muscles or groups of muscles. The technique has been used for the purpose of evaluating neuromuscular diseases both for their diagnosis and for their ongoing assessment of progression or with therapeutic intervention. Muscle composition and microscopic structure change with disease, and EIM measures alterations in impedance that occur as a result of disease pathology. EIM has been specifically recognized for its potential as an ALS biomarker (also known as a biological correlate or surrogate endpoint) by Prize4Life, a 501(c)(3) nonprofit organization dedicated to accelerating the discovery of treatments and cures for ALS. The $1M ALS Biomarker Challenge focused on identifying a biomarker precise and reliable enough to cut Phase II drug trials in half. The prize was awarded to Dr. Seward Rutkove, chief, Division of Neuromuscular Disease, in the Department of Neurology at Beth Israel Deaconess Medical Center and Professor of Neurology at Harvard Medical School, for his work in developing the technique of EIM and its specific application to ALS. It is hoped that EIM as a biomarker will result in the more rapid and efficient identification of new treatments for ALS. EIM has shown sensitivity to disease status in a variety of neuromuscular conditions, including radiculopathy, inflammatory myopathy, Duchenne muscular dystrophy, and spinal muscular atrophy.
In addition to the assessment of neuromuscular disease, EIM also has the prospect of serving as a convenient and sensitive measure of muscle condition. Work in aging populations and individuals with orthopedic injuries indicates that EIM is very sensitive to muscle atrophy and disuse and is conversely likely sensitive to muscle conditioning and hypertrophy. Work on mouse and rats models, including a study of mice on board the final Space Shuttle mission (STS-135), has helped to confirm this potential value.
Underlying concepts
Interest in electrical impedance dates back to the turn of the 20th century, when physiologist Louis Lapicque postulated an elementary circuit to model membranes of nerve cells. Scientists experimented with variations on this model until 1940, when Kenneth Cole developed a circuit model that accounted for the impedance properties of both cell membranes and intracellular fluid.
Like all impedance-based methods, EIM hinges on a simplified model of muscle tissue as an RC circuit. This model attributes the resistive component of the circuit to the resistance of extracellular and intracellular fluids, and the reactive component to the capacitive effects of cell membranes. The integrity of individual cell membranes has a significant effect on the tissue's impedance; hence, a muscle's impedance can be used to measure the tissue's degradation in disease progression. In neuromuscular disease, a variety of factors can influence the compositional and micro structural aspects of muscle, including most notably muscle fiber atrophy and disorganization, the deposition of fat and connective tissues, as occurs in muscular dystrophy, and the presence of inflammation, among many other pathologies. EIM captures these changes in the tissue as a whole by measuring its impedance characteristics across multiple frequencies and at multiple angles relative to the major muscle fiber direction.
In EIM, impedance is separated into resistance and reactance, its real and imaginary components. From this, one can compute the muscle's phase, which represents the time-shift that a sinusoid undergoes when passing through the muscle. For a given resistance (R) and reactance (X), the phase (θ) can be calculated as θ = arctan(X/R). In current work, all three parameters appear to play important roles depending exactly on which diseases are being studied and how the technology is being applied.
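A minimal sketch of the calculation, using illustrative resistance and reactance values rather than measured data:

```python
import math

def impedance_parameters(resistance_ohm, reactance_ohm):
    """Impedance magnitude |Z| and phase angle theta (degrees) from R and X."""
    magnitude = math.hypot(resistance_ohm, reactance_ohm)
    theta_deg = math.degrees(math.atan2(reactance_ohm, resistance_ohm))
    return magnitude, theta_deg

# Illustrative values for a muscle measured at 50 kHz
print(impedance_parameters(30.0, 8.0))   # |Z| ~ 31 ohm, theta ~ 15 degrees
```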
EIM can also be impacted by the thickness of the skin and subcutaneous fat overlying a region of muscle. However, electrode designs can be created that can circumvent the effect to a large extent and thus still provide primary muscle data. Moreover, the use of multifrequency measurements can also assist with this process of disentangling the effects of fat from those of muscle. From this information, it also becomes possible to infer/calculate the approximate amount of fat overlying a muscle in a given region.
Multifrequency measurements
Both resistance and reactance depend on the input frequency of the signal. Because changes in frequency shift the relative contributions of resistance (fluid) and reactance (membrane) to impedance, multifrequency EIM may allow a more comprehensive assessment of disease. Resistance, reactance, or phase can be plotted as a function of frequency to demonstrate the differences in frequency dependence between healthy and diseased groups. Diseased muscle exhibits an increase in reactance and phase with increasing frequency, while reactance and phase values of healthy muscle increase with frequency until 50–100 kHz, at which point they begin to decrease as a function of frequency. Frequencies ranging from 500 Hz to 2 MHz are used to determine the frequency spectrum for a given muscle.
Muscle anisotropy
Electrical impedance of muscle tissue is anisotropic; current flowing parallel to muscle fibers flows differently from current flowing orthogonally across the fibers. Current flowing orthogonally across a muscle encounters more cell membranes, thus increasing resistance, reactance, and phase values. By taking measurements at different angles with respect to muscle fibers, EIM can be used to determine the anisotropy of a given muscle. Anisotropy tends to be shown either as a graph plotting resistance, reactance, or phase as a function of angle with respect to the direction of muscle fibers or as a ratio of transverse (perpendicular to fibers) measurement to longitudinal measurement (parallel to muscle fibers) of a given impedance factor.
Muscle anisotropy also changes with neuromuscular disease. EIM has shown a difference between anisotropy profiles of neuromuscular disease patients and healthy controls. In addition, EIM can use anisotropy to discriminate between myopathic and neurogenic disease. Different forms of neuromuscular disease have unique anisotropies. Myopathic disease is characterized by decreased anisotropy. Neurogenic disease produces a less predictable anisotropy. The angle of lowest phase may be shifted from the parallel position, and the anisotropy as a whole is often greater than that of a healthy control.
Measurement approaches
In general, to apply the technique, a minimum of four surface electrodes are placed over the muscle of interest. A minute alternating current is applied across the outer two electrodes, and voltage signals are recorded by the inner electrodes. The frequency of the applied current and the relationship of the electrode array to the major muscle fiber direction is varied so that a full multifrequency and multidirectional assessment of the muscle can be achieved.
EIM has been performed with a number of different impedance analysis devices. Commercially available systems used for bioimpedance analysis, can be calibrated to measure impedance of individual muscles. A suitable impedance analyzer can also be custom built using a lock-in amplifier to produce the signal and a low-capacitance probe, such as the Tektronix P6243, to record voltages from the surface electrodes.
Such methods, however, are slow and clumsy to apply given the need for careful electrode positioning over a muscle of interest and the potential for misalignment of electrodes and inaccuracy. Accordingly, an initial hand-held system was constructed using multiple components with an electrode head that could be placed directly on the patient. The device featured an array of electrode plates, which could be selectively activated to perform impedance measurements in arbitrary orientations. The oscilloscopes were programmed to produce a compound sinusoid signal, which could be used to measure the impedance at multiple frequencies simultaneously via a Fast Fourier transform.
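An idealized simulation of the compound-sinusoid approach described above: a drive signal containing several tones is "measured" through made-up impedances, and the ratio of the voltage and current FFTs at each tone recovers resistance and reactance. The sample rate, tone frequencies, and impedance values are assumptions for illustration, not device parameters:

```python
import numpy as np

fs = 10_000_000                     # sample rate, Hz (assumed)
t = np.arange(0, 0.001, 1 / fs)     # 1 ms record -> 1 kHz FFT bin spacing
freqs = [10e3, 50e3, 100e3, 500e3]  # excitation tones, all on exact FFT bins

# Compound-sinusoid current drive: one unit-amplitude tone per frequency
i_drive = sum(np.sin(2 * np.pi * f * t) for f in freqs)

# Simulated voltage response through made-up complex impedances Z = R + jX
z_true = {10e3: 40 + 5j, 50e3: 35 + 10j, 100e3: 32 + 12j, 500e3: 28 + 8j}
v_meas = sum(abs(z) * np.sin(2 * np.pi * f * t + np.angle(z))
             for f, z in z_true.items())

# The FFT ratio V/I at each tone gives the complex impedance at that frequency
I, V = np.fft.rfft(i_drive), np.fft.rfft(v_meas)
bins = np.fft.rfftfreq(len(t), 1 / fs)
for f in freqs:
    k = int(np.argmin(np.abs(bins - f)))
    z = V[k] / I[k]
    print(f"{f / 1e3:6.0f} kHz: R = {z.real:5.1f} ohm, X = {z.imag:5.1f} ohm")
```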
Since that initial system was created, other handheld commercial systems are being developed, such as Skulpt, for use in both neuromuscular disease assessment and for fitness monitoring, including the calculation of a muscle quality (or MQ) value. This latter value aims to provide an approximate assessment of the relative force-generating capacity of muscle for a given cross-sectional area of tissue. Muscle quality, for example, is a measure used in the assessment of sarcopenia.
Comparison with standard bioelectrical impedance analysis
Standard bioelectrical impedance analysis (BIA), like EIM, also employs a weak, high frequency electric current to measure characteristics of the human body. In standard BIA, unlike EIM, electric current is passed between electrodes placed on the hands and feet, and the impedance characteristics of the entire current path are measured. Thus, the measured impedance characteristics are relatively nonspecific since they encompass much of the body including the entire length of the extremities, the chest, abdomen and pelvis; accordingly, only summary whole-body measures of lean body mass and % fat can be offered. Moreover, in BIA, current travels the path of least resistance, and thus any factors that alter the current path will cause variability in the data. For example, the expansion of large vessels (e.g., veins) with increasing hydration will offer a low-resistance path, and thus distorting the resulting data. In addition, changes in abdominal contents will similarly alter the data. Body position can also have substantial effects, with joint position contributing to variations in the data. EIM, in contrast, measures only the superficial aspects of individual muscles and is relatively unaffected by body or limb position or hydration status. The differences between EIM and standard BIA were exemplified in one study in amyotrophic lateral sclerosis (ALS) which showed that EIM was effectively able to track progression in 60 ALS patients whereas BIA was not.
References
Electrophysiology
Impedance measurements | Electrical impedance myography | [
"Physics"
] | 2,069 | [
"Impedance measurements",
"Physical quantities",
"Electrical resistance and conductance"
] |
1,863,215 | https://en.wikipedia.org/wiki/Specific%20storage | In the field of hydrogeology, storage properties are physical properties that characterize the capacity of an aquifer to release groundwater. These properties are storativity (S), specific storage (Ss) and specific yield (Sy). According to Groundwater, by Freeze and Cherry (1979), specific storage, [m−1], of a saturated aquifer is defined as the volume of water that a unit volume of the aquifer releases from storage under a unit decline in hydraulic head.
They are often determined using some combination of field tests (e.g., aquifer tests) and laboratory tests on aquifer material samples. Recently, these properties have been also determined using remote sensing data derived from Interferometric synthetic-aperture radar.
Storativity
Definition
Storativity or the storage coefficient is the volume of water released from storage per unit decline in hydraulic head in the aquifer, per unit area of the aquifer. Storativity is a dimensionless quantity, and is always greater than 0.
$S = \frac{dV_w}{dh}\,\frac{1}{A} = S_s\,b + S_y$
where
$V_w$ is the volume of water released from storage ([L3]);
$h$ is the hydraulic head ([L]);
$S_s$ is the specific storage;
$S_y$ is the specific yield;
$b$ is the thickness of the aquifer; and
$A$ is the area ([L2]).
Confined
For a confined aquifer or aquitard, storativity is the vertically integrated specific storage value. Specific storage is the volume of water released from one unit volume of the aquifer under one unit decline in head. This is related to both the compressibility of the aquifer and the compressibility of the water itself. Assuming the aquifer or aquitard is homogeneous:
$S = S_s\,b$
Unconfined
For an unconfined aquifer, storativity is approximately equal to the specific yield ($S \approx S_y$), since the release from specific storage ($S_s\,b$) is typically orders of magnitude less ($S_s\,b \ll S_y$).
The specific storage is the amount of water that a portion of an aquifer releases from storage, per unit mass or volume of the aquifer, per unit change in hydraulic head, while remaining fully saturated.
Mass specific storage is the mass of water that an aquifer releases from storage, per mass of aquifer, per unit decline in hydraulic head:
$(S_s)_m = \frac{1}{m_a}\,\frac{dm_w}{dh}$
where
$(S_s)_m$ is the mass specific storage ([L−1]);
$m_a$ is the mass of that portion of the aquifer from which the water is released ([M]);
$dm_w$ is the mass of water released from storage ([M]); and
$dh$ is the decline in hydraulic head ([L]).
Volumetric specific storage (or volume-specific storage) is the volume of water that an aquifer releases from storage, per volume of the aquifer, per unit decline in hydraulic head (Freeze and Cherry, 1979):
$S_s = \frac{1}{V_T}\,\frac{dV_w}{dh} = \frac{\gamma_w}{V_T}\,\frac{dV_w}{dp}$
where
$S_s$ is the volumetric specific storage ([L−1]);
$V_T$ is the bulk volume of that portion of the aquifer from which the water is released ([L3]);
$dV_w$ is the volume of water released from storage ([L3]);
$dp$ is the decline in pressure (N•m−2 or [ML−1T−2]);
$dh$ is the decline in hydraulic head ([L]); and
$\gamma_w$ is the specific weight of water (N•m−3 or [ML−2T−2]).
In hydrogeology, volumetric specific storage is much more commonly encountered than mass specific storage. Consequently, the term specific storage generally refers to volumetric specific storage.
In terms of measurable physical properties, specific storage can be expressed as
$S_s = \gamma_w\,(\alpha + n\beta)$
where
$\gamma_w$ is the specific weight of water (N•m−3 or [ML−2T−2])
$n$ is the porosity of the material (dimensionless ratio between 0 and 1)
$\alpha$ is the compressibility of the bulk aquifer material (m2N−1 or [LM−1T2]), and
$\beta$ is the compressibility of water (m2N−1 or [LM−1T2])
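A minimal numerical sketch of this relation is given below; the porosity and compressibility values are assumed order-of-magnitude figures, not values stated in this article.

```python
# Sketch of Ss = gamma_w * (alpha + n * beta); values below are assumed.

def specific_storage(gamma_w, n, alpha, beta):
    """Volumetric specific storage [1/m] from compressibilities."""
    return gamma_w * (alpha + n * beta)

gamma_w = 9.81e3   # specific weight of water [N/m^3]
n = 0.30           # porosity [-] (assumed)
alpha = 1e-8       # bulk aquifer compressibility [m^2/N] (assumed, sand-like)
beta = 4.4e-10     # compressibility of water [m^2/N]

print(specific_storage(gamma_w, n, alpha, beta))  # ~1.0e-4 1/m
```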
The compressibility terms relate a given change in stress to a change in volume (a strain). These two terms can be defined as:
where
$\sigma_e$ is the effective stress (N/m2 or [MLT−2/L2])
These equations relate a change in total or water volume ($dV_t$ or $dV_w$) per change in applied stress (effective stress $d\sigma_e$ or pore pressure $dp$) per unit volume. The compressibilities (and therefore also Ss) can be estimated from laboratory consolidation tests (in an apparatus called a consolidometer), using the consolidation theory of soil mechanics (developed by Karl Terzaghi).
Determination of the storage coefficient of aquifer systems
Aquifer-test analysis
Aquifer-test analyses provide estimates of aquifer-system storage coefficients by examining the drawdown and recovery responses of water levels in wells to applied stresses, typically induced by pumping from nearby wells.
Stress-strain analysis
Elastic and inelastic skeletal storage coefficients can be estimated through a graphical method developed by Riley. This method involves plotting the applied stress (hydraulic head) on the y-axis against vertical strain or displacement (compaction) on the x-axis. The inverse slopes of the dominant linear trends in these compaction-head trajectories indicate the skeletal storage coefficients. The displacements used to build the stress-strain curve can be determined by extensometers, InSAR or levelling.
Laboratory consolidation tests
Laboratory consolidation tests yield measurements of the coefficient of consolidation within the inelastic range and provide estimates of vertical hydraulic conductivity. The inelastic skeletal specific storage of the sample can be determined by calculating the ratio of vertical hydraulic conductivity to the coefficient of consolidation.
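The ratio described above is straightforward to evaluate; the sketch below uses assumed laboratory values purely for illustration.

```python
# Inelastic skeletal specific storage as the ratio of vertical hydraulic
# conductivity to the coefficient of consolidation. Values are assumed.

Kv = 1e-9          # vertical hydraulic conductivity [m/s] (assumed, clayey)
cv = 1e-7          # coefficient of consolidation [m^2/s] (assumed)

Ssk_inelastic = Kv / cv   # units: (m/s) / (m^2/s) = 1/m
print(Ssk_inelastic)      # 0.01 1/m
```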
Model simulations and calibration
Simulations of land subsidence incorporate data on aquifer-system storage and hydraulic conductivity. Calibrating these models can lead to optimized estimates of storage coefficients and vertical hydraulic conductivity.
Specific yield
Specific yield, also known as the drainable porosity, is a ratio, less than or equal to the effective porosity, indicating the volumetric fraction of the bulk aquifer volume that a given aquifer will yield when all the water is allowed to drain out of it under the force of gravity:
$S_y = \dfrac{V_{wd}}{V_T}$
where
$V_{wd}$ is the volume of water drained, and
$V_T$ is the total rock or material volume
It is primarily used for unconfined aquifers, since the elastic storage component, $S_s b$, is relatively small and usually makes an insignificant contribution. Specific yield can be close to effective porosity, but several subtleties make this value more complicated than it seems. Some water always remains in the formation, even after drainage; it clings to the grains of sand and clay in the formation. Also, the value of specific yield may not be fully realized for a very long time, due to complications caused by unsaturated flow. Problems related to unsaturated flow are simulated using the numerical solution of the Richards equation, which requires estimation of the specific yield, or the numerical solution of the Soil Moisture Velocity Equation, which does not require estimation of the specific yield.
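For intuition, the specific yield lets one estimate the volume of water released by a water-table decline over a given area; the sketch below uses assumed numbers, not values from this article.

```python
# Volume of water an unconfined aquifer yields for a water-table decline:
# V = Sy * A * dh. All numbers are assumed for illustration.

Sy = 0.20          # specific yield [-] (assumed)
A = 2.0e6          # plan area of the aquifer [m^2] (assumed)
dh = 1.5           # decline in water table [m] (assumed)

V_drained = Sy * A * dh
print(V_drained)   # 600,000 m^3 of water released
```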
See also
Aquifer test
Soil mechanics
Groundwater flow equation describes how these terms are used in the context of solving groundwater flow problems
References
Freeze, R.A. and J.A. Cherry. 1979. Groundwater. Prentice-Hall, Inc. Englewood Cliffs, NJ. 604 p.
Morris, D.A. and A.I. Johnson. 1967. Summary of hydrologic and physical properties of rock and soil materials as analyzed by the Hydrologic Laboratory of the U.S. Geological Survey 1948-1960. U.S. Geological Survey Water Supply Paper 1839-D. 42 p.
De Wiest, R. J. (1966). On the storage coefficient and the equations of groundwater flow. Journal of Geophysical Research, 71(4), 1117–1122.
Specific
Hydrology
Aquifers
Water
Soil mechanics
Soil physics | Specific storage | [
"Physics",
"Chemistry",
"Engineering",
"Environmental_science"
] | 1,625 | [
"Hydrology",
"Applied and interdisciplinary physics",
"Soil mechanics",
"Soil physics",
"Aquifers",
"Environmental engineering",
"Water"
] |
1,863,666 | https://en.wikipedia.org/wiki/Groundwater%20flow%20equation | Used in hydrogeology, the groundwater flow equation is the mathematical relationship which is used to describe the flow of groundwater through an aquifer. The transient flow of groundwater is described by a form of the diffusion equation, similar to that used in heat transfer to describe the flow of heat in a solid (heat conduction). The steady-state flow of groundwater is described by a form of the Laplace equation, which is a form of potential flow and has analogs in numerous fields.
The groundwater flow equation is often derived for a small representative elemental volume (REV), where the properties of the medium are assumed to be effectively constant. A mass balance is done on the water flowing in and out of this small volume, the flux terms in the relationship being expressed in terms of head by using the constitutive equation called Darcy's law, which requires that the flow is laminar. Other approaches are based on agent-based models to incorporate the effect of complex aquifers such as karstic or fractured rocks (e.g., volcanic).
Mass balance
A mass balance must be performed, and used along with Darcy's law, to arrive at the transient groundwater flow equation. This balance is analogous to the energy balance used in heat transfer to arrive at the heat equation. It is simply a statement of accounting, that for a given control volume, aside from sources or sinks, mass cannot be created or destroyed. The conservation of mass states that, for a given increment of time (Δt), the difference between the mass flowing in across the boundaries, the mass flowing out across the boundaries, and the sources within the volume, is the change in storage.
Diffusion equation (transient flow)
Mass can be represented as density times volume, and under most conditions, water can be considered incompressible (density does not depend on pressure). The mass fluxes across the boundaries then become volume fluxes (as are found in Darcy's law). Using Taylor series to represent the in and out flux terms across the boundaries of the control volume, and using the divergence theorem to turn the flux across the boundary into a flux over the entire volume, the final form of the groundwater flow equation (in differential form) is:
$S_s \dfrac{\partial h}{\partial t} = -\nabla \cdot q + G$
This is known in other fields as the diffusion equation or heat equation; it is a parabolic partial differential equation (PDE). This mathematical statement indicates that the change in hydraulic head with time (left hand side) equals the negative divergence of the flux (q) plus the source terms (G). This equation has both head and flux as unknowns, but Darcy's law relates flux to hydraulic heads, so substituting it in for the flux (q) leads to
$S_s \dfrac{\partial h}{\partial t} = \nabla \cdot (K \nabla h) + G$
Now if the hydraulic conductivity (K) is spatially uniform and isotropic (rather than a tensor), it can be taken out of the spatial derivatives, simplifying them to the Laplacian; this makes the equation
$S_s \dfrac{\partial h}{\partial t} = K \nabla^2 h + G$
Dividing through by the specific storage (Ss) puts the hydraulic diffusivity (α = K/Ss or, equivalently, α = T/S) on the right hand side. The hydraulic diffusivity is proportional to the speed at which a finite pressure pulse will propagate through the system (large values of α lead to fast propagation of signals). The groundwater flow equation then becomes
$\dfrac{\partial h}{\partial t} = \alpha \nabla^2 h + G$
where the sink/source term, G, now has the same units but is divided by the appropriate storage term (as defined by the hydraulic diffusivity substitution).
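The diffusivity and its associated time scale are easy to compute; the sketch below uses assumed parameter values purely for illustration.

```python
# Hydraulic diffusivity alpha = K / Ss (or T / S); larger alpha means a head
# disturbance propagates faster. Example values are assumed.

K = 1e-4           # hydraulic conductivity [m/s] (assumed, clean sand)
Ss = 1e-5          # specific storage [1/m] (assumed)

alpha = K / Ss     # [m^2/s]
print(alpha)       # 10 m^2/s

# A crude diffusive time scale for a pressure pulse to travel a distance L:
L = 100.0          # [m]
t_char = L**2 / alpha
print(t_char)      # ~1000 s
```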
Rectangular Cartesian coordinates
Especially when using rectangular grid finite-difference models (e.g. MODFLOW, made by the USGS), we deal with Cartesian coordinates. In these coordinates the general Laplacian operator becomes (for three-dimensional flow) specifically
$\nabla^2 h = \dfrac{\partial^2 h}{\partial x^2} + \dfrac{\partial^2 h}{\partial y^2} + \dfrac{\partial^2 h}{\partial z^2}$
MODFLOW code discretizes and simulates an orthogonal 3-D form of the governing groundwater flow equation. However, it has an option to run in a "quasi-3D" mode if the user wishes to do so; in this case the model deals with the vertically averaged T and S, rather than k and Ss. In the quasi-3D mode, flow is calculated between 2D horizontal layers using the concept of leakage.
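MODFLOW itself uses an implicit, fully three-dimensional finite-difference formulation; the sketch below is only a minimal explicit 1D illustration of how the diffusion form of the flow equation is discretized on a grid, with assumed parameters and boundary heads.

```python
import numpy as np

# Minimal explicit finite-difference sketch of the 1D transient flow equation
#   Ss * dh/dt = K * d2h/dx2
# This is NOT how MODFLOW works internally; all parameter values and
# boundary heads below are assumed for illustration.

K, Ss = 1e-4, 1e-5          # [m/s], [1/m] (assumed)
alpha = K / Ss              # hydraulic diffusivity [m^2/s]
nx, dx = 51, 10.0           # number of nodes and grid spacing [m]
dt = 0.4 * dx**2 / alpha    # time step satisfying the explicit stability limit

h = np.full(nx, 20.0)       # initial head [m]
h[0], h[-1] = 25.0, 20.0    # fixed-head boundaries (assumed)

for _ in range(500):
    h[1:-1] += alpha * dt / dx**2 * (h[2:] - 2 * h[1:-1] + h[:-2])

print(h[::10])              # heads every 10th node after 500 steps
```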
Circular cylindrical coordinates
Another useful coordinate system is 3D cylindrical coordinates (typically where a pumping well is a line source located at the origin — parallel to the z axis — causing converging radial flow). Under these conditions the above equation becomes (r being radial distance and θ being angle),
Assumptions
This equation represents flow to a pumping well (a sink of strength G), located at the origin. Both this equation and the Cartesian version above are the fundamental equation in groundwater flow, but to arrive at this point requires considerable simplification. Some of the main assumptions which went into both these equations are:
the aquifer material is incompressible (no change in matrix due to changes in pressure — aka subsidence),
the water is of constant density (incompressible),
any external loads on the aquifer (e.g., overburden, atmospheric pressure) are constant,
for the 1D radial problem the pumping well is fully penetrating a non-leaky aquifer,
the groundwater is flowing slowly (Reynolds number less than unity), and
the hydraulic conductivity (K) is an isotropic scalar.
Despite these large assumptions, the groundwater flow equation does a good job of representing the distribution of heads in aquifers due to a transient distribution of sources and sinks.
Laplace equation (steady-state flow)
If the aquifer has recharging boundary conditions a steady-state may be reached (or it may be used as an approximation in many cases), and the diffusion equation (above) simplifies to the Laplace equation.
This equation states that hydraulic head is a harmonic function, and has many analogs in other fields. The Laplace equation can be solved using well-established analytical and numerical techniques, under assumptions similar to those stated above, but with the additional requirement of a steady-state flow field.
A common method for solution of this equation in civil engineering and soil mechanics is to use the graphical technique of drawing flownets, where contour lines of hydraulic head and the stream function make a curvilinear grid, allowing complex geometries to be solved approximately.
Steady-state flow to a pumping well (which never truly occurs, but is sometimes a useful approximation) is commonly called the Thiem solution.
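For reference, the standard form of the Thiem solution gives the head difference between two radial distances from a fully penetrating well in a confined aquifer as Δh = Q/(2πT)·ln(r2/r1); the sketch below evaluates it with assumed values.

```python
import math

# Thiem steady-state solution for a confined aquifer:
#   h(r2) - h(r1) = Q / (2 * pi * T) * ln(r2 / r1)
# Example values are assumed for illustration.

Q = 0.01               # pumping rate [m^3/s] (assumed)
T = 5e-3               # transmissivity T = K * b [m^2/s] (assumed)
r1, r2 = 10.0, 100.0   # radial distances of two observation wells [m]

dh = Q / (2 * math.pi * T) * math.log(r2 / r1)
print(dh)              # ~0.73 m more drawdown at r1 than at r2
```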
Two-dimensional groundwater flow
The above groundwater flow equations are valid for three dimensional flow. In unconfined aquifers, the solution to the 3D form of the equation is complicated by the presence of a free surface water table boundary condition: in addition to solving for the spatial distribution of heads, the location of this surface is also an unknown. This is a non-linear problem, even though the governing equation is linear.
An alternative formulation of the groundwater flow equation may be obtained by invoking the Dupuit–Forchheimer assumption, where it is assumed that heads do not vary in the vertical direction (i.e., $\partial h/\partial z = 0$). A horizontal water balance is applied to a long vertical column of small plan area extending from the aquifer base to the unsaturated surface. This distance is referred to as the saturated thickness, b. In a confined aquifer, the saturated thickness is determined by the height of the aquifer, H, and the pressure head is non-zero everywhere. In an unconfined aquifer, the saturated thickness is defined as the vertical distance between the water table surface and the aquifer base. If the aquifer base is at the zero datum, then the unconfined saturated thickness is equal to the head, i.e., b = h.
Assuming both the hydraulic conductivity and the horizontal components of flow are uniform along the entire saturated thickness of the aquifer, we can express Darcy's law in terms of vertically integrated groundwater discharges, Qx and Qy:
Inserting these into our mass balance expression, we obtain the general 2D governing equation for incompressible saturated groundwater flow:
Where n is the aquifer porosity. The source term, N (length per time), represents the addition of water in the vertical direction (e.g., recharge). By incorporating the correct definitions for saturated thickness, specific storage, and specific yield, we can transform this into two unique governing equations for confined and unconfined conditions:
(confined), where S=Ssb is the aquifer storativity and
(unconfined), where Sy is the specific yield of the aquifer.
Note that the partial differential equation in the unconfined case is non-linear, whereas it is linear in the confined case. For unconfined steady-state flow, this non-linearity may be removed by expressing the PDE in terms of the head squared:
$\nabla \cdot (K \nabla h^2) = -2N$
Or, for homogeneous aquifers,
$\nabla^2 h^2 = -\dfrac{2N}{K}$
This formulation allows us to apply standard methods for solving linear PDEs in the case of unconfined flow. For heterogeneous aquifers with no recharge, Potential flow methods may be applied for mixed confined/unconfined cases.
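One classical consequence of the head-squared formulation is the textbook Dupuit result for steady 1D unconfined flow between two fixed-head boundaries with uniform recharge, h(x)² = h1² + (h2² − h1²)·x/L + (N/K)·x·(L − x). The sketch below evaluates that profile with assumed numbers; none of the values come from this article.

```python
import numpy as np

# Steady 1D unconfined (Dupuit) flow between two fixed heads with uniform
# recharge N: h(x)^2 = h1^2 + (h2^2 - h1^2) * x/L + (N/K) * x * (L - x).
# All numbers are assumed for illustration.

K = 5e-5              # hydraulic conductivity [m/s] (assumed)
N = 1e-8              # recharge rate [m/s] (assumed, roughly 0.3 m/yr)
L = 1000.0            # distance between boundaries [m]
h1, h2 = 20.0, 15.0   # boundary heads above the aquifer base [m]

x = np.linspace(0.0, L, 11)
h_sq = h1**2 + (h2**2 - h1**2) * x / L + (N / K) * x * (L - x)
h = np.sqrt(h_sq)
print(h.round(2))     # water-table profile; note it is curved, not linear
```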
See also
Analytic element method
A numerical method used for the solution of partial differential equations
Dupuit–Forchheimer assumption
A simplification of the groundwater flow equation regarding vertical flow
Groundwater energy balance
Groundwater flow equations based on the energy balance
Richards equation
References
Further reading
H. F. Wang and M.P. Anderson Introduction to Groundwater Modeling: Finite Difference and Finite Element Methods
An excellent beginner's read for groundwater modeling. Covers all the basic concepts, with simple examples in FORTRAN 77.
Freeze, R. Allan; Cherry, John A. (1979). Groundwater. Prentice Hall.
External links
USGS groundwater software — free groundwater modeling software like MODFLOW
Groundwater Hydrology (MIT OpenCourseware)
Aquifers
Hydraulics
Hydraulic engineering
Hydrology
Partial differential equations
Transport phenomena | Groundwater flow equation | [
"Physics",
"Chemistry",
"Engineering",
"Environmental_science"
] | 2,059 | [
"Transport phenomena",
"Physical phenomena",
"Hydrology",
"Chemical engineering",
"Physical systems",
"Hydraulics",
"Civil engineering",
"Aquifers",
"Environmental engineering",
"Hydraulic engineering",
"Fluid dynamics"
] |
1,864,331 | https://en.wikipedia.org/wiki/Water%20content | Water content or moisture content is the quantity of water contained in a material, such as soil (called soil moisture), rock, ceramics, crops, or wood. Water content is used in a wide range of scientific and technical areas, and is expressed as a ratio, which can range from 0 (completely dry) to the value of the materials' porosity at saturation. It can be given on a volumetric or mass (gravimetric) basis.
Definitions
Volumetric water content, θ, is defined mathematically as:
$\theta = \dfrac{V_w}{V_T}$
where $V_w$ is the volume of water and $V_T = V_s + V_w + V_a$ is the total volume of the wet material, i.e. the sum of the volume of solid host material (e.g., soil particles, vegetation tissue) $V_s$, of water $V_w$, and of air $V_a$.
Gravimetric water content is expressed by mass (weight) as follows:
where $m_w$ is the mass of water and $m_s$ is the mass of the solids.
For materials that change in volume with water content, such as coal, the gravimetric water content, u, is expressed in terms of the mass of water per unit mass of the moist specimen (before drying):
$u = \dfrac{m_w}{m_w + m_s}$
However, woodworking, geotechnics and soil science require the gravimetric moisture content to be expressed with respect to the sample's dry weight:
$u'' = \dfrac{m_w}{m_s}$
In food science, both conventions are used and are called, respectively, moisture content on a wet basis and moisture content on a dry basis (both abbreviated MC).
Values are often expressed as a percentage, i.e. u×100%.
To convert gravimetric water content to volumetric water content, multiply the gravimetric water content by the bulk specific gravity of the material:
$\theta = u \cdot SG = u \cdot \dfrac{\rho_b}{\rho_w}$
where $\rho_b$ is the dry bulk density of the material and $\rho_w$ is the density of water.
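A minimal sketch of this conversion is given below; the bulk density and water content values are assumed for illustration only.

```python
# Converting dry-basis gravimetric water content to volumetric water content
# using the bulk specific gravity (dry bulk density / water density).
# Example values are assumed, not from the article.

rho_w = 1000.0      # density of water [kg/m^3]
rho_b = 1300.0      # dry bulk density of the soil [kg/m^3] (assumed)
u = 0.18            # gravimetric water content, dry basis [-] (assumed)

theta = u * (rho_b / rho_w)   # volumetric water content [-]
print(theta)                  # 0.234
```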
Derived quantities
In soil mechanics and petroleum engineering the water saturation or degree of saturation, $S_w$, is defined as
$S_w = \dfrac{V_w}{V_v} = \dfrac{\theta}{\phi}$
where $\phi = V_v / V_T$ is the porosity, in terms of the volume of void or pore space $V_v$ and the total volume of the substance $V_T$. Values of Sw can range from 0 (dry) to 1 (saturated). In reality, Sw never reaches 0 or 1; these are idealizations for engineering use.
The normalized water content, $\Theta$ (also called effective saturation or $S_e$), is a dimensionless value defined by van Genuchten as:
$\Theta = \dfrac{\theta - \theta_r}{\theta_s - \theta_r}$
where $\theta$ is the volumetric water content; $\theta_r$ is the residual water content, defined as the water content for which the gradient becomes zero; and $\theta_s$ is the saturated water content, which is equivalent to porosity, $\phi$.
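The sketch below evaluates this normalization with assumed soil values, purely as an illustration of the arithmetic.

```python
# Normalized (effective) water content as defined by van Genuchten:
#   Theta = (theta - theta_r) / (theta_s - theta_r)
# Example values are assumed.

theta = 0.25        # current volumetric water content [-] (assumed)
theta_r = 0.05      # residual water content [-] (assumed)
theta_s = 0.40      # saturated water content (porosity) [-] (assumed)

Theta = (theta - theta_r) / (theta_s - theta_r)
print(Theta)        # ~0.571, dimensionless, between 0 and 1
```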
Measurement
Direct methods
Water content can be directly measured using a drying oven.
The oven-dry method requires drying a sample (of soil, wood, etc.) in a special oven or kiln and checking the sample weight at regular time intervals. When the drying process is complete, the sample's weight is compared to its weight before drying, and the difference is used to calculate the sample's original moisture content.
Gravimetric water content, u, is calculated via the mass of water $m_w$:
$m_w = m_{wet} - m_{dry}$
where $m_{wet}$ and $m_{dry}$ are the masses of the sample before and after drying in the oven.
This gives the numerator of u; the denominator is either $m_{wet}$ or $m_{dry}$ (resulting in u or u″, respectively), depending on the discipline.
On the other hand, volumetric water content, θ, is calculated via the volume of water $V_w$:
$V_w = \dfrac{m_w}{\rho_w}$
where $\rho_w$ is the density of water.
This gives the numerator of θ; the denominator, $V_T$, is the total volume of the wet material, which is fixed by simply filling up a container of known volume (e.g., a tin can) when taking a sample.
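As a worked illustration of the oven-dry method, the sketch below computes the wet-basis, dry-basis, and volumetric water contents from assumed sample masses and an assumed container volume.

```python
# Oven-dry method: water mass is the mass lost on drying. From it, both
# gravimetric (wet- or dry-basis) and volumetric water contents follow.
# Sample masses and volume are assumed for illustration.

m_wet = 0.152       # sample mass before drying [kg] (assumed)
m_dry = 0.124       # sample mass after drying [kg] (assumed)
V_total = 8.0e-5    # total (wet) sample volume [m^3] (assumed, small tin)
rho_w = 1000.0      # density of water [kg/m^3]

m_water = m_wet - m_dry
u_wet_basis = m_water / m_wet         # moisture content, wet basis
u_dry_basis = m_water / m_dry         # moisture content, dry basis
theta = (m_water / rho_w) / V_total   # volumetric water content

print(u_wet_basis, u_dry_basis, theta)
```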
For wood, the convention is to report moisture content on an oven-dry basis (i.e., generally drying the sample in an oven set at 105 °C for 24 hours, or until it stops losing weight). In wood drying, this is an important concept.
Laboratory methods
Other methods that determine water content of a sample include chemical titrations (for example the Karl Fischer titration), determining mass loss on heating (perhaps in the presence of an inert gas), or after freeze drying. In the food industry the Dean-Stark method is also commonly used.
From the Annual Book of ASTM (American Society for Testing and Materials) Standards, the total evaporable moisture content in Aggregate (C 566) can be calculated with the formula:
$p = \dfrac{W - D}{D}$
where $p$ is the fraction of total evaporable moisture content of the sample, $W$ is the mass of the original sample, and $D$ is the mass of the dried sample.
Soil moisture measurement
In addition to the direct and laboratory methods above, the following options are available.
Geophysical methods
There are several geophysical methods available that can approximate in situ soil water content. These methods include: time-domain reflectometry (TDR), neutron probe, frequency domain sensor, capacitance probe, amplitude domain reflectometry, electrical resistivity tomography, ground penetrating radar (GPR), and others that are sensitive to the physical properties of water. Geophysical sensors are often used to monitor soil moisture continuously in agricultural and scientific applications.
Satellite remote sensing method
Satellite microwave remote sensing is used to estimate soil moisture based on the large contrast between the dielectric properties of wet and dry soil. The microwave radiation is not sensitive to atmospheric variables and can penetrate through clouds. Also, the microwave signal can penetrate, to a certain extent, the vegetation canopy and retrieve information from the ground surface. The data from microwave remote sensing satellites such as WindSat, AMSR-E, RADARSAT, ERS-1-2, Metop/ASCAT, and SMAP are used to estimate surface soil moisture.
Wood moisture measurement
In addition to the primary methods above, another method exists to measure the moisture content of wood: an electronic moisture meter.
Pin and pinless meters are the two main types of moisture meters.
Pin meters require driving two pins into the surface of the wood while making sure that the pins are aligned with the grain and not perpendicular to it. Pin meters provide moisture content readings by measuring the resistance in the electrical current between the two pins. The drier the wood, the more resistance to the electrical current, when measuring below the fiber saturation point of wood. Pin meters are generally preferred when there is no flat surface of the wood available to measure.
Pinless meters emit an electromagnetic signal into the wood to provide readings of the wood's moisture content and are generally preferred when damage to the wood's surface is unacceptable or when a high volume of readings or greater ease of use is required.
Classification and uses
Moisture may be present as adsorbed moisture at internal surfaces and as capillary condensed water in small pores. At low relative humidities, moisture consists mainly of adsorbed water. At higher relative humidities, liquid water becomes more and more important, depending on the pore size. In wood-based materials, however, almost all water is adsorbed at humidities below 98% RH.
In biological applications there can also be a distinction between physisorbed water and "free" water — the physisorbed water being that closely associated with and relatively difficult to remove from a biological material. The method used to determine water content may affect whether water present in this form is accounted for. For a better indication of "free" and "bound" water, the water activity of a material should be considered.
Water molecules may also be present in materials closely associated with individual molecules, as "water of crystallization", or as water molecules which are static components of protein structure.
Earth and agricultural sciences
In soil science, hydrology and agricultural sciences, water content has an important role for groundwater recharge, agriculture, and soil chemistry. Many recent scientific research efforts have aimed toward a predictive understanding of water content over space and time. Observations have revealed generally that spatial variance in water content tends to increase as overall wetness increases in semiarid regions, to decrease as overall wetness increases in humid regions, and to peak under intermediate wetness conditions in temperate regions.
There are four standard water contents that are routinely measured and used, which are described in the following table:
Lastly, the available water content, θa, is equivalent to:
θa ≡ θfc − θpwp
which can range between 0.1 in gravel and 0.3 in peat.
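The available water content also translates directly into a depth of plant-available water stored in a root zone; the sketch below uses assumed field capacity, wilting point, and root depth values for illustration.

```python
# Available water content theta_a = theta_fc - theta_pwp, and the equivalent
# depth of plant-available water stored in a root zone of thickness z.
# Values are assumed for illustration.

theta_fc = 0.30     # field capacity [-] (assumed)
theta_pwp = 0.12    # permanent wilting point [-] (assumed)
z_root = 0.6        # root-zone depth [m] (assumed)

theta_a = theta_fc - theta_pwp
available_depth = theta_a * z_root             # [m] of water
print(theta_a, available_depth * 1000, "mm")   # ~0.18, ~108 mm
```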
Agriculture
When a soil becomes too dry, plant transpiration drops because the water is increasingly bound to the soil particles by suction. Below the wilting point plants are no longer able to extract water. At this point they wilt and cease transpiring altogether. Conditions where soil is too dry to maintain reliable plant growth is referred to as agricultural drought, and is a particular focus of irrigation management. Such conditions are common in arid and semi-arid environments.
Some agriculture professionals are beginning to use environmental measurements such as soil moisture to schedule irrigation. This method is referred to as smart irrigation or soil cultivation.
Groundwater
In saturated groundwater aquifers, all available pore spaces are filled with water (volumetric water content = porosity). Above a capillary fringe, pore spaces have air in them too.
Most soils have a water content less than porosity, which is the definition of unsaturated conditions, and they make up the subject of vadose zone hydrogeology. The capillary fringe of the water table is the dividing line between saturated and unsaturated conditions. Water content in the capillary fringe decreases with increasing distance above the phreatic surface. The flow of water through an unsaturated zone in soils often involves a process of fingering, resulting from Saffman–Taylor instability. This results mostly through drainage processes and produces an unstable interface between saturated and unsaturated regions.
One of the main complications which arises in studying the vadose zone, is the fact that the unsaturated hydraulic conductivity is a function of the water content of the material. As a material dries out, the connected wet pathways through the media become smaller, the hydraulic conductivity decreasing with lower water content in a very non-linear fashion.
A water retention curve is the relationship between volumetric water content and the water potential of the porous medium. It is characteristic for different types of porous medium. Due to hysteresis, different wetting and drying curves may be distinguished.
In aggregates
Generally, an aggregate has four different moisture conditions. They are Oven-dry (OD), Air-dry (AD), Saturated surface dry (SSD) and damp (or wet). Oven-dry and Saturated surface dry can be achieved by experiments in laboratories, while Air-dry and damp (or wet) are aggregates' common conditions in nature.
Four Conditions
Oven-dry (OD) is defined as the condition of an aggregate where there is no moisture within any part of the aggregate. This condition can be achieved in a laboratory by heating the aggregate to 220 °F (105 °C) for a period of time.
Air-dry (AD) is defined as the condition of an aggregate in which there are some water or moisture in the pores of the aggregate, while the outer surfaces of it is dry. This is a natural condition of aggregates in summer or in dry regions. In this condition, an aggregate will absorb water from other materials added to the surface of it, which would possibly have some impact on some characters of the aggregate.
Saturated surface dry (SSD) is defined as the condition of an aggregate in which the surfaces of the particles are "dry" (i.e., they will neither absorb any of the mixing water added, nor will they contribute any of their contained water to the mix), but the inter-particle voids are saturated with water. In this condition aggregates will not affect the free water content of a composite material (see ftp://ftp.dot.state.tx.us/pub/txdot-info/cst/TMS/400-A_series/pdfs/cnn403.pdf).
The water absorption by mass (Am) is defined in terms of the mass of the saturated-surface-dry sample (Mssd) and the mass of the oven-dried test sample (Mdry) by the formula:
$A_m = \dfrac{M_{ssd} - M_{dry}}{M_{dry}}$
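The sketch below evaluates this ratio with assumed aggregate masses; the numbers are illustrative only.

```python
# Water absorption by mass for an aggregate, from saturated-surface-dry and
# oven-dry masses: Am = (M_ssd - M_dry) / M_dry. Masses are assumed values.

M_ssd = 1.025       # saturated-surface-dry mass [kg] (assumed)
M_dry = 1.000       # oven-dry mass [kg] (assumed)

Am = (M_ssd - M_dry) / M_dry
print(f"{Am:.1%}")  # 2.5% absorption
```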
Damp (or wet) is defined as the condition of an aggregate in which water has fully permeated the aggregate through its pores, and there is free water in excess of the SSD condition on its surfaces which will become part of the mixing water.
Application
Among these four moisture conditions of aggregates, saturated surface dry is the condition that has the most applications in laboratory experiments, research, and studies, especially those related to water absorption, composition ratio, or shrinkage tests in materials like concrete. For many related experiments, a saturated surface dry condition is a premise that must be realized before the experiment. In saturated surface dry conditions, the aggregate's water content is in a relatively stable and static situation where its environment would not affect it. Therefore, in experiments and tests where aggregates are in saturated surface dry condition, there would be fewer disrupting factors than in the other three conditions.
See also
Humidity, "water content" in air
Moisture
Viscous fingering
Moisture analysis
Soil moisture sensors
Water activity
Water retention curve
References
Further reading
Wessel-Bothe, Weihermüller (2020): Field Measurement Methods in Soil Science. New practical guide to soil measurements explains the principles of operation of different moisture sensor types (independent of manufacturer), their accuracy, fields of application and how such sensors are installed, as well as subtleties of the data so obtained. Also deals with other crop-related soil parameters.
Analytical chemistry
Hydrology
Physical chemistry
Soil mechanics
Soil physics
Water
Woodworking | Water content | [
"Physics",
"Chemistry",
"Engineering",
"Environmental_science"
] | 2,796 | [
"Hydrology",
"Applied and interdisciplinary physics",
"Soil mechanics",
"Soil physics",
"nan",
"Environmental engineering",
"Water",
"Physical chemistry"
] |
1,864,889 | https://en.wikipedia.org/wiki/Cosmology | Cosmology () is a branch of physics and metaphysics dealing with the nature of the universe, the cosmos. The term cosmology was first used in English in 1656 in Thomas Blount's Glossographia, and in 1731 taken up in Latin by German philosopher Christian Wolff in Cosmologia Generalis. Religious or mythological cosmology is a body of beliefs based on mythological, religious, and esoteric literature and traditions of creation myths and eschatology. In the science of astronomy, cosmology is concerned with the study of the chronology of the universe.
Physical cosmology is the study of the observable universe's origin, its large-scale structures and dynamics, and the ultimate fate of the universe, including the laws of science that govern these areas. It is investigated by scientists, including astronomers and physicists, as well as philosophers, such as metaphysicians, philosophers of physics, and philosophers of space and time. Because of this shared scope with philosophy, theories in physical cosmology may include both scientific and non-scientific propositions and may depend upon assumptions that cannot be tested. Physical cosmology is a sub-branch of astronomy that is concerned with the universe as a whole. Modern physical cosmology is dominated by the Big Bang Theory which attempts to bring together observational astronomy and particle physics; more specifically, a standard parameterization of the Big Bang with dark matter and dark energy, known as the Lambda-CDM model.
Theoretical astrophysicist David N. Spergel has described cosmology as a "historical science" because "when we look out in space, we look back in time" due to the finite nature of the speed of light.
Disciplines
Physics and astrophysics have played central roles in shaping our understanding of the universe through scientific observation and experiment. Physical cosmology was shaped through both mathematics and observation in an analysis of the whole universe. The universe is generally understood to have begun with the Big Bang, followed almost instantaneously by cosmic inflation, an expansion of space from which the universe is thought to have emerged 13.799 ± 0.021 billion years ago. Cosmogony studies the origin of the universe, and cosmography maps the features of the universe.
In Diderot's Encyclopédie, cosmology is broken down into uranology (the science of the heavens), aerology (the science of the air), geology (the science of the continents), and hydrology (the science of waters).
Metaphysical cosmology has also been described as the placing of humans in the universe in relationship to all other entities. This is exemplified by Marcus Aurelius's observation that a man's place in that relationship: "He who does not know what the world is does not know where he is, and he who does not know for what purpose the world exists, does not know who he is, nor what the world is."
Discoveries
Physical cosmology
Physical cosmology is the branch of physics and astrophysics that deals with the study of the physical origins and evolution of the universe. It also includes the study of the nature of the universe on a large scale. In its earliest form, it was what is now known as "celestial mechanics," the study of the heavens. Greek philosophers Aristarchus of Samos, Aristotle, and Ptolemy proposed different cosmological theories. The geocentric Ptolemaic system was the prevailing theory until the 16th century when Nicolaus Copernicus, and subsequently Johannes Kepler and Galileo Galilei, proposed a heliocentric system. This is one of the most famous examples of epistemological rupture in physical cosmology.
Isaac Newton's Principia Mathematica, published in 1687, was the first description of the law of universal gravitation. It provided a physical mechanism for Kepler's laws and also allowed the anomalies in previous systems, caused by gravitational interaction between the planets, to be resolved. A fundamental difference between Newton's cosmology and those preceding it was the Copernican principle—that the bodies on Earth obey the same physical laws as all celestial bodies. This was a crucial philosophical advance in physical cosmology.
Modern scientific cosmology is widely considered to have begun in 1917 with Albert Einstein's publication of his final modification of general relativity in the paper "Cosmological Considerations of the General Theory of Relativity" (although this paper was not widely available outside of Germany until the end of World War I). General relativity prompted cosmogonists such as Willem de Sitter, Karl Schwarzschild, and Arthur Eddington to explore its astronomical ramifications, which enhanced the ability of astronomers to study very distant objects. Physicists began changing the assumption that the universe was static and unchanging. In 1922, Alexander Friedmann introduced the idea of an expanding universe that contained moving matter.
In parallel to this dynamic approach to cosmology, one long-standing debate about the structure of the cosmos was coming to a climax – the Great Debate (1917 to 1922) – with early cosmologists such as Heber Curtis and Ernst Öpik determining that some nebulae seen in telescopes were separate galaxies far distant from our own. While Heber Curtis argued for the idea that spiral nebulae were star systems in their own right as island universes, Mount Wilson astronomer Harlow Shapley championed the model of a cosmos made up of the Milky Way star system only. This difference of ideas came to a climax with the organization of the Great Debate on 26 April 1920 at the meeting of the U.S. National Academy of Sciences in Washington, D.C. The debate was resolved when Edwin Hubble detected Cepheid Variables in the Andromeda Galaxy in 1923 and 1924. Their distance established spiral nebulae well beyond the edge of the Milky Way.
Subsequent modelling of the universe explored the possibility that the cosmological constant, introduced by Einstein in his 1917 paper, may result in an expanding universe, depending on its value. Thus the Big Bang model was proposed by the Belgian priest Georges Lemaître in 1927; it was subsequently corroborated by Edwin Hubble's discovery in 1929 of the relationship between galaxies' redshifts and their distances, and later by the discovery of the cosmic microwave background radiation by Arno Penzias and Robert Woodrow Wilson in 1964. These findings were a first step toward ruling out some of the many alternative cosmologies.
Since around 1990, several dramatic advances in observational cosmology have transformed cosmology from a largely speculative science into a predictive science with precise agreement between theory and observation. These advances include observations of the microwave background from the COBE, WMAP and Planck satellites, large new galaxy redshift surveys including 2dfGRS and SDSS, and observations of distant supernovae and gravitational lensing. These observations matched the predictions of the cosmic inflation theory, a modified Big Bang theory, and the specific version known as the Lambda-CDM model. This has led many to refer to modern times as the "golden age of cosmology".
In 2014, the BICEP2 collaboration claimed that they had detected the imprint of gravitational waves in the cosmic microwave background. However, this result was later found to be spurious: the supposed evidence of gravitational waves was in fact due to interstellar dust.
On 1 December 2014, at the Planck 2014 meeting in Ferrara, Italy, astronomers reported that the universe is 13.8 billion years old and composed of 4.9% atomic matter, 26.6% dark matter and 68.5% dark energy.
Religious or mythological cosmology
Religious or mythological cosmology is a body of beliefs based on mythological, religious, and esoteric literature and traditions of creation and eschatology. Creation myths are found in most religions, and are typically split into five different classifications, based on a system created by Mircea Eliade and his colleague Charles Long.
Types of Creation Myths based on similar motifs:
Creation ex nihilo in which the creation is through the thought, word, dream or bodily secretions of a divine being.
Earth diver creation in which a diver, usually a bird or amphibian sent by a creator, plunges to the seabed through a primordial ocean to bring up sand or mud which develops into a terrestrial world.
Emergence myths in which progenitors pass through a series of worlds and metamorphoses until reaching the present world.
Creation by the dismemberment of a primordial being.
Creation by the splitting or ordering of a primordial unity such as the cracking of a cosmic egg or a bringing order from chaos.
Philosophy
Cosmology deals with the world as the totality of space, time and all phenomena. Historically, it has had quite a broad scope, and in many cases was found in religion. Charles Kahn, an important historian of philosophy, attributed the origins of ancient Greek cosmology to Anaximander. Some questions about the Universe are beyond the scope of scientific inquiry but may still be interrogated through appeals to other philosophical approaches like dialectics. Some questions that are included in extra-scientific endeavors may include:
What is the origin of the universe? What is its first cause (if any)? Is its existence necessary? (see monism, pantheism, emanationism and creationism)
What are the ultimate material components of the universe? (see mechanism, dynamism, hylomorphism, atomism)
What is the ultimate reason (if any) for the existence of the universe? Does the cosmos have a purpose? (see teleology)
Does the existence of consciousness have a role in the existence of reality? How do we know what we know about the totality of the cosmos? Does cosmological reasoning reveal metaphysical truths? (see epistemology)
Historical cosmologies
Table notes: the term "static" simply means not expanding and not contracting. Symbol G represents Newton's gravitational constant; Λ (Lambda) is the cosmological constant.
See also
Absolute time and space
Big History
Earth science
Galaxy formation and evolution
Illustris project
Jainism and non-creationism
Lambda-CDM model
List of astrophysicists
Non-standard cosmology
Taiji (philosophy)
Timeline of cosmological theories
Universal rotation curve
Warm inflation
Big Ring
References
Sources
Charles Kahn. 1994. Anaximander and the Origins of Greek Cosmology. Indianapolis: Hackett.
Lectures given at the Summer School in High Energy Physics and Cosmology, ICTP (Trieste), 1993. 60 pages, plus 5 figures.
Sophia Centre. The Sophia Centre for the Study of Cosmology in Culture, University of Wales Trinity Saint David. | Cosmology | [
"Physics",
"Astronomy"
] | 2,203 | [
"Astronomical sub-disciplines",
"Theoretical physics",
"Physical cosmology",
"Astrophysics"
] |
18,579,411 | https://en.wikipedia.org/wiki/Potassium%20aluminium%20fluoride | Potassium aluminium fluoride (PAF, chemical formula KAlF4) is an inorganic compound.
This compound is used as flux in the smelting of secondary aluminium, to reduce or remove the magnesium content of the melt. The main environmental issue that arises from using PAF is the production of fluoride gases. Calcium hydroxide is widely used to suppress the fluorides produced but in most cases fails to remove it sufficiently.
PAF is also present in a wide range of products for the metals industry as a fluxing agent within additives to help its dispersion within a charge.
It is also used as an insecticide.
A single natural occurrence has been reported at a burning coal bank at Forestville, Pennsylvania, as an unnamed mineral.
References
Fluorides
Metal halides
Potassium compounds
Aluminium compounds
Alkali metal fluorides | Potassium aluminium fluoride | [
"Chemistry"
] | 172 | [
"Inorganic compounds",
"Fluorides",
"Metal halides",
"Salts"
] |
18,579,625 | https://en.wikipedia.org/wiki/Nucleareurope | nucleareurope (formerly FORATOM, European Atomic Forum - Forum Atomique Européen), is the Brussels-based trade association for the nuclear energy industry in Europe. Its main purpose is to promote the use of nuclear power in Europe.
The current Director General of FORATOM is Yves Desbazeille.
FORATOM estimated that in 2016 it spent between €300,000 and €399,999 on lobbying EU institutions.
References
External links
International nuclear energy organizations
International organisations based in Belgium
International scientific organizations based in Europe | Nucleareurope | [
"Engineering"
] | 110 | [
"International nuclear energy organizations",
"Nuclear organizations"
] |
18,580,879 | https://en.wikipedia.org/wiki/Transport | Transport (in British English) or transportation (in American English) is the intentional movement of humans, animals, and goods from one location to another. Modes of transport include air, land (rail and road), water, cable, pipelines, and space. The field can be divided into infrastructure, vehicles, and operations. Transport enables human trade, which is essential for the development of civilizations.
Transport infrastructure consists of both fixed installations, including roads, railways, airways, waterways, canals, and pipelines, and terminals such as airports, railway stations, bus stations, warehouses, trucking terminals, refueling depots (including fuel docks and fuel stations), and seaports. Terminals may be used both for the interchange of passengers and cargo and for maintenance.
Means of transport are any of the different kinds of transport facilities used to carry people or cargo. They may include vehicles, riding animals, and pack animals. Vehicles may include wagons, automobiles, bicycles, buses, trains, trucks, helicopters, watercraft, spacecraft, and aircraft.
Modes
A mode of transport is a solution that makes use of a certain type of vehicle, infrastructure, and operation. The transport of a person or of cargo may involve one mode or several of the modes, with the latter case being called inter-modal or multi-modal transport. Each mode has its own advantages and disadvantages, and will be chosen on the basis of cost, capability, and route.
Governments deal with the way the vehicles are operated, and the procedures set for this purpose, including financing, legalities, and policies. In the transport industry, operations and ownership of infrastructure can be either public or private, depending on the country and mode.
Passenger transport may be public, where operators provide scheduled services, or private. Freight transport has become focused on containerization, although bulk transport is used for large volumes of durable items. Transport plays an important part in economic growth and globalization, but most types cause air pollution and use large amounts of land. While it is heavily subsidized by governments, good planning of transport is essential to make traffic flow and restrain urban sprawl.
Human-powered
Human-powered transport, a form of sustainable transport, is the transport of people or goods using human muscle-power, in the form of walking, running, and swimming. Modern technology has allowed machines to enhance human power. Human-powered transport remains popular for reasons of cost-saving, leisure, physical exercise, and environmentalism; it is sometimes the only type available, especially in underdeveloped or inaccessible regions.
Although humans are able to walk without infrastructure, the transport can be enhanced through the use of roads, especially when using the human power with vehicles, such as bicycles and inline skates. Human-powered vehicles have also been developed for difficult environments, such as snow and water, by watercraft rowing and skiing; even the air can be entered with human-powered aircraft.
Animal-powered
Animal-powered transport is the use of working animals for the movement of people and commodities. Humans may ride some of the animals directly, use them as pack animals for carrying goods, or harness them, alone or in teams, to pull sleds or wheeled vehicles.
Air
A fixed-wing aircraft, commonly called an airplane, is a heavier-than-air craft where movement of the air in relation to the wings is used to generate lift. The term is used to distinguish this from rotary-wing aircraft, where the movement of the lift surfaces relative to the air generates lift. A gyroplane is both fixed-wing and rotary wing. Fixed-wing aircraft range from small trainers and recreational aircraft to large airliners and military cargo aircraft.
Two things necessary for aircraft are air flow over the wings for lift and an area for landing. The majority of aircraft also need an airport with the infrastructure for maintenance, restocking, and refueling and for the loading and unloading of crew, cargo, and passengers. While the vast majority of aircraft land and take off on land, some are capable of take-off and landing on ice, snow, and calm water.
The aircraft is the second fastest method of transport, after the rocket. Commercial jets can reach up to , single-engine aircraft . Aviation is able to quickly transport people and limited amounts of cargo over longer distances, but incurs high costs and energy use; for short distances or in inaccessible places, helicopters can be used. An April 2009 article in The Guardian noted that "the WHO estimates that up to 500,000 people are on planes at any time."
Land
Land transport covers all land-based transport systems that provide for the movement of people, goods, and services. Land transport plays a vital role in linking communities to each other. Land transport is a key factor in urban planning. It consists of two kinds, rail and road.
Rail
Rail transport is where a train runs along a set of two parallel steel rails, known as a railway or railroad. The rails are anchored perpendicular to ties (or sleepers) of timber, concrete, or steel, to maintain a consistent distance apart, or gauge. The rails and perpendicular beams are placed on a foundation made of concrete or compressed earth and gravel in a bed of ballast. Alternative methods include monorail and maglev.
A train consists of one or more connected vehicles that operate on the rails. Propulsion is commonly provided by a locomotive, that hauls a series of unpowered cars, that can carry passengers or freight. The locomotive can be powered by steam, by diesel, or by electricity supplied by trackside systems. Alternatively, some or all the cars can be powered, known as a multiple unit. Also, a train can be powered by horses, cables, gravity, pneumatics, and gas turbines. Railed vehicles move with much less friction than rubber tires on paved roads, making trains more energy efficient, though not as efficient as ships.
Intercity trains are long-haul services connecting cities; modern high-speed rail is capable of speeds up to , but this requires specially built track. Regional and commuter trains feed cities from suburbs and surrounding areas, while intra-urban transport is performed by high-capacity tramways and rapid transits, often making up the backbone of a city's public transport. Freight trains traditionally used box cars, requiring manual loading and unloading of the cargo. Since the 1960s, container trains have become the dominant solution for general freight, while large quantities of bulk are transported by dedicated trains.
Road
A road is an identifiable route, way, or path between two or more places. Roads are typically smoothed, paved, or otherwise prepared to allow easy travel; though they need not be, and historically many roads were simply recognizable routes without any formal construction or maintenance. In urban areas, roads may pass through a city or village and be named as streets, serving a dual function as urban space easement and route.
The most common road vehicle is the automobile; a wheeled passenger vehicle that carries its own motor. Other users of roads include buses, trucks, motorcycles, bicycles, and pedestrians. As of 2010, there were 1.015 billion automobiles worldwide.
Road transport offers complete freedom to road users to transfer the vehicle from one lane to the other and from one road to another according to the need and convenience. This flexibility of changes in location, direction, speed, and timings of travel is not available to other modes of transport. It is possible to provide door-to-door service only by road transport.
Automobiles provide high flexibility with low capacity, but require high energy and area use, and are the main source of harmful noise and air pollution in cities; buses allow for more efficient travel at the cost of reduced flexibility. Road transport by truck is often the initial and final stage of freight transport.
Water
Water transport is movement by means of a watercraft—such as a barge, boat, ship, or sailboat—over a body of water, such as a sea, ocean, lake, canal, or river. The need for buoyancy is common to watercraft, making the hull a dominant aspect of its construction, maintenance, and appearance.
In the 19th century, the first steam ships were developed, using a steam engine to drive a paddle wheel or propeller to move the ship. The steam was produced in a boiler using wood or coal and fed through a steam external combustion engine. Now most ships have an internal combustion engine using a slightly refined type of petroleum called bunker fuel. Some ships, such as submarines, use nuclear power to produce the steam. Recreational or educational craft still use wind power, while some smaller craft use internal combustion engines to drive one or more propellers or, in the case of jet boats, an inboard water jet. In shallow draft areas, hovercraft are propelled by large pusher-prop fans. (See Marine propulsion.)
Although it is slow compared to other transport, modern sea transport is a highly efficient method of transporting large quantities of goods. Commercial vessels, nearly 35,000 in number, carried 7.4 billion tons of cargo in 2007. Transport by water is significantly less costly than air transport for transcontinental shipping; short sea shipping and ferries remain viable in coastal areas.
Other modes
Pipeline transport sends goods through a pipe; most commonly liquid and gases are sent, but pneumatic tubes can also send solid capsules using compressed air. For liquids/gases, any chemically stable liquid or gas can be sent through a pipeline. Short-distance systems exist for sewage, slurry, water, and beer, while long-distance networks are used for petroleum and natural gas.
Cable transport is a broad mode where vehicles are pulled by cables instead of an internal power source. It is most commonly used at steep gradient. Typical solutions include aerial tramways, elevators, and ski lifts; some of these are also categorized as conveyor transport.
Spaceflight is transport outside Earth's atmosphere by means of a spacecraft. It is most frequently used for satellites placed in Earth orbit. However, human spaceflight missions have landed on the Moon and are occasionally used to rotate crew members to space stations. Uncrewed spacecraft have also been sent to all the planets of the Solar System.
Suborbital spaceflight is the fastest of the existing and planned transport systems from a place on Earth to a distant "other place" on Earth. Faster transport could be achieved through part of a low Earth orbit or by following that trajectory even faster, using the propulsion of the rocket to steer it.
Elements
Infrastructure
Infrastructure is the fixed installations that allow a vehicle to operate. It consists of a roadway, a terminal, and facilities for parking and maintenance. For rail, pipeline, road, and cable transport, the entire way the vehicle travels must be constructed. Air and watercraft are able to avoid this, since the airway and seaway do not need to be constructed. However, they require fixed infrastructure at terminals.
Terminals such as airports, ports, and stations, are locations where passengers and freight can be transferred from one vehicle or mode to another. For passenger transport, terminals are integrating different modes to allow riders, who are interchanging between modes, to take advantage of each mode's benefits. For instance, airport rail links connect airports to the city centres and suburbs. The terminals for automobiles are parking lots, while buses and coaches can operate from simple stops. For freight, terminals act as transshipment points, though some cargo is transported directly from the point of production to the point of use.
The financing of infrastructure can either be public or private. Transport is often a natural monopoly and a necessity for the public; roads, and in some countries railways and airports, are funded through taxation. New infrastructure projects can have high costs and are often financed through debt. Many infrastructure owners, therefore, impose usage fees, such as landing fees at airports or toll plazas on roads. Independent of this, authorities may impose taxes on the purchase or use of vehicles. Because of poor forecasting and overestimation of passenger numbers by planners, there is frequently a benefits shortfall for transport infrastructure projects.
Means of transport
Animals
Animals used in transportation include pack animals and riding animals.
Vehicles
A vehicle is a non-living device that is used to move people and goods. Unlike the infrastructure, the vehicle moves along with the cargo and riders. Unless being pulled/pushed by a cable or muscle-power, the vehicle must provide its own propulsion; this is most commonly done through a steam engine, combustion engine, electric motor, jet engine, or rocket, though other means of propulsion also exist. Vehicles also need a system of converting the energy into movement; this is most commonly done through wheels, propellers, and pressure.
Vehicles are most commonly staffed by a driver. However, some systems, such as people movers and some rapid transits, are fully automated. For passenger transport, the vehicle must have a compartment, seat, or platform for the passengers. Simple vehicles, such as automobiles, bicycles, or simple aircraft, may have one of the passengers as a driver. Recently, the progress related to the Fourth Industrial Revolution has brought a lot of new emerging technologies for transportation and automotive fields such as Connected Vehicles and Autonomous Driving. These innovations are said to form future mobility, but concerns remain on safety and cybersecurity, particularly concerning connected and autonomous mobility.
Operation
Private transport is only subject to the owner of the vehicle, who operates the vehicle themselves. For public transport and freight transport, operations are done through private enterprise or by governments. The infrastructure and vehicles may be owned and operated by the same company, or they may be operated by different entities. Traditionally, many countries have had a national airline and national railway. Since the 1980s, many of these have been privatized. International shipping remains a highly competitive industry with little regulation, but ports can be public-owned.
Policy
As the population of the world increases, cities grow in size and population—according to the United Nations, 55% of the world's population live in cities, and by 2050 this number is expected to rise to 68%. Public transport policy must evolve to meet the changing priorities of the urban world. The institution of policy enforces order in transport, which is by nature chaotic as people attempt to travel from one place to another as fast as possible. This policy helps to reduce accidents and save lives.
Functions
Relocation of travelers and cargo are the most common uses of transport. However, other uses exist, such as the strategic and tactical relocation of armed forces during warfare, or the civilian mobility of construction or emergency equipment.
Passenger
Passenger transport, or travel, is divided into public and private transport. Public transport is scheduled services on fixed routes, while private is vehicles that provide ad hoc services at the riders desire. The latter offers better flexibility, but has lower capacity and a higher environmental impact. Travel may be as part of daily commuting or for business, leisure, or migration.
Short-haul transport is dominated by the automobile and mass transit. The latter consists of buses in rural and small cities, supplemented with commuter rail, trams, and rapid transit in larger cities. Long-haul transport involves the use of the automobile, trains, coaches, and aircraft, the last of which have become predominantly used for the longest, including intercontinental, travel. Intermodal passenger transport is where a journey is performed through the use of several modes of transport; since all human transport normally starts and ends with walking, all passenger transport can be considered intermodal. Public transport may also involve the intermediate change of vehicle, within or across modes, at a transport hub, such as a bus or railway station.
Taxis and buses can be found on both ends of the public transport spectrum. Buses are the cheapest mode of transport but are not necessarily flexible, and taxis are very flexible but more expensive. In the middle is demand-responsive transport, offering flexibility whilst remaining affordable.
International travel may be restricted for some individuals due to legislation and visa requirements.
Medical
An ambulance is a vehicle used to transport people from or between places of treatment, and in some instances will also provide out-of-hospital medical care to the patient. The word is often associated with road-going "emergency ambulances", which form part of emergency medical services, administering emergency care to those with acute medical problems.
Air medical services is a comprehensive term covering the use of air transport to move patients to and from healthcare facilities and accident scenes. Personnel provide comprehensive prehospital and emergency and critical care to all types of patients during aeromedical evacuation or rescue operations, aboard helicopters, propeller aircraft, or jet aircraft.
Freight
Freight transport, or shipping, is a key link in the value chain of manufacturing. With increased specialization and globalization, production is being located further away from consumption, rapidly increasing the demand for transport. Transport creates place utility by moving goods from the place of production to the place of consumption. While all modes of transport are used for cargo transport, the nature of the cargo strongly influences which mode is chosen. Logistics refers to the entire process of transferring products from producer to consumer, including storage, transport, transshipment, warehousing, material-handling, and packaging, with the associated exchange of information. Incoterms deal with the handling of payment and the allocation of risk during transport.
Containerization, with the standardization of ISO containers on all vehicles and at all ports, has revolutionized international and domestic trade, offering a huge reduction in transshipment costs. Traditionally, all cargo had to be manually loaded and unloaded into the hold of any ship or car; containerization allows for automated handling and transfer between modes, and the standardized sizes allow for gains in economy of scale in vehicle operation. This has been one of the key driving factors in international trade and globalization since the 1950s.
Bulk transport is common with cargo that can be handled roughly without deterioration; typical examples are ore, coal, cereals, and petroleum. Because of the uniformity of the product, mechanical handling can allow enormous quantities to be handled quickly and efficiently. The low value of the cargo combined with high volume also means that economies of scale become essential in transport, and gigantic ships and whole trains are commonly used to transport bulk. Liquid products with sufficient volume may also be transported by pipeline.
Air freight has become more common for products of high value; while less than one percent of world transport by volume is by airline, it amounts to forty percent of the value. Time has become especially important with regard to principles such as postponement and just-in-time within the value chain, resulting in a high willingness to pay for quick delivery of key components or items of high value-to-weight ratio. In addition to mail, common items sent by air include electronics and fashion clothing.
Industry
Impact
Economic
Transport is a key necessity for specialization, allowing production and consumption of products to occur at different locations. Throughout history, transport has been a spur to expansion; better transport allows more trade and a greater spread of people. Economic growth has always been dependent on increasing the capacity and rationality of transport. But the infrastructure and operation of transport have a great impact on the land, and transport is the largest consumer of energy, making transport sustainability a major issue.
Due to the way modern cities and communities are planned and operated, a physical distinction between home and work is usually created, forcing people to transport themselves to places of work, study, or leisure, as well as to temporarily relocate for other daily activities. Passenger transport is also the essence of tourism, a major part of recreational transport. Commerce requires the transport of people to conduct business, either to allow face-to-face communication for important decisions or to move specialists from their regular place of work to sites where they are needed.
In lean thinking, transporting materials or work in process from one location to another is seen as one of the seven wastes (Japanese term: muda) which do not add value to a product.
Planning
Transport planning allows for high utilization and reduced impact from new infrastructure. Using models of transport forecasting, planners are able to predict future transport patterns. On the operative level, logistics allows owners of cargo to plan transport as part of the supply chain. Transport as a field is also studied through transport economics, a component for the creation of regulatory policy by authorities. Transport engineering, a sub-discipline of civil engineering, must take into account trip generation, trip distribution, mode choice, and route assignment, while the operative level is handled through traffic engineering.
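To make the four steps above more concrete, the following is a minimal, hypothetical sketch of the trip-distribution step using a doubly-constrained gravity model; the zone counts, travel times, and decay parameter are invented for illustration and do not come from any cited study.

```python
import numpy as np

def gravity_model(productions, attractions, cost, beta=0.1, iterations=50):
    """Simple doubly-constrained gravity model for trip distribution.

    productions[i]: trips produced by zone i
    attractions[j]: trips attracted to zone j
    cost[i, j]:     travel cost (e.g. minutes) from zone i to zone j
    beta:           assumed cost-decay parameter
    """
    deterrence = np.exp(-beta * cost)            # f(c_ij) = exp(-beta * c_ij)
    trips = np.outer(productions, attractions) * deterrence
    for _ in range(iterations):                  # Furness/IPF balancing of rows and columns
        trips *= (productions / trips.sum(axis=1))[:, None]
        trips *= (attractions / trips.sum(axis=0))[None, :]
    return trips

# Three hypothetical zones (totals of productions and attractions match):
P = np.array([100.0, 200.0, 50.0])
A = np.array([150.0, 120.0, 80.0])
C = np.array([[5.0, 10.0, 15.0],
              [10.0, 5.0, 10.0],
              [15.0, 10.0, 5.0]])   # travel times in minutes
print(gravity_model(P, A, C).round(1))
```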
Because of the negative impacts incurred, transport often becomes the subject of controversy related to choice of mode, as well as increased capacity. Automotive transport can be seen as a tragedy of the commons, where the flexibility and comfort for the individual deteriorate the natural and urban environment for all. Density of development depends on mode of transport, with public transport allowing for better spatial use. Good land use keeps common activities close to people's homes and places higher-density development closer to transport lines and hubs, to minimize the need for transport. There are economies of agglomeration. Beyond transport, some land uses are more efficient when clustered. Transport facilities consume land, and in cities pavement (devoted to streets and parking) can easily exceed 20 percent of the total land use. An efficient transport system can reduce land waste.
Too much infrastructure and too much smoothing for maximum vehicle throughput mean that in many cities there is too much traffic and many—if not all—of the negative impacts that come with it. It is only in recent years that traditional practices have started to be questioned in many places; as a result of new types of analysis which bring in a much broader range of skills than those traditionally relied on—spanning such areas as environmental impact analysis, public health, sociology, and economics—the viability of the old mobility solutions is increasingly being questioned.
Environment
Transport is a major use of energy and burns most of the world's petroleum. This creates air pollution, including nitrogen oxides and particulates, and is a significant contributor to global warming through emission of carbon dioxide, for which transport is the fastest-growing emission sector. By sub-sector, road transport is the largest contributor to global warming. Environmental regulations in developed countries have reduced individual vehicles' emissions; however, this has been offset by increases in the number of vehicles and in the use of each vehicle. Some pathways to considerably reduce the carbon emissions of road vehicles have been studied. Energy use and emissions vary largely between modes, causing environmentalists to call for a transition from air and road to rail and human-powered transport, as well as increased transport electrification and energy efficiency.
Other environmental impacts of transport systems include traffic congestion and automobile-oriented urban sprawl, which can consume natural habitat and agricultural lands. By reducing transport emissions globally, it is predicted that there will be significant positive effects on Earth's air quality, acid rain, smog, and climate change.
While electric cars are being built to cut CO2 emissions at the point of use, an approach that is becoming popular among cities worldwide is to prioritize public transport, bicycles, and pedestrian movement. Redirecting vehicle movement to create 20-minute neighbourhoods promotes exercise while greatly reducing vehicle dependency and pollution. Some policies levy a congestion charge on cars travelling within congested areas during peak times.
Airplane emissions vary with flight distance. Take-off and landing are energy-intensive, so longer flights are more efficient per mile traveled, although they naturally use more fuel in total: short flights produce the most CO2 per passenger mile, while long flights produce slightly less. Emissions released at cruising altitude trap considerably more heat than those released at ground level, owing not only to CO2 but to the mix of other greenhouse gases in the exhaust. City buses produce about 0.3 kg of CO2 for every mile traveled per passenger; for long-distance bus trips (over 20 miles), this drops to about 0.08 kg of CO2 per passenger mile. On average, commuter trains produce around 0.17 kg of CO2 for each mile traveled per passenger, and long-distance trains are slightly higher at about 0.19 kg of CO2 per passenger mile. The fleet emission average for delivery vans, trucks and big rigs is roughly 10 kg of CO2 per gallon of diesel consumed: delivery vans and trucks average about 7.8 mpg (about 1.3 kg of CO2 per mile), while big rigs average about 5.3 mpg (about 1.92 kg of CO2 per mile).
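The per-passenger figures above follow from three quantities: the CO2 released per unit of fuel, the vehicle's fuel economy, and the number of passengers sharing the vehicle. The sketch below illustrates the arithmetic; the occupancy and fuel-economy numbers are assumptions chosen only to show how the calculation works, not data from this paragraph's sources.

```python
# Hypothetical illustration of per-passenger CO2 arithmetic.
CO2_PER_GALLON_DIESEL_KG = 10.2  # approximate CO2 from burning one US gallon of diesel

def co2_per_passenger_mile(miles_per_gallon: float, passengers: float) -> float:
    """CO2 (kg) attributed to each passenger for each mile travelled."""
    co2_per_vehicle_mile = CO2_PER_GALLON_DIESEL_KG / miles_per_gallon
    return co2_per_vehicle_mile / passengers

# An urban bus averaging ~3.5 mpg with ~10 passengers on board:
print(co2_per_passenger_mile(3.5, 10))   # ≈ 0.29 kg per passenger-mile
# A long-distance coach averaging ~6 mpg with ~20 passengers:
print(co2_per_passenger_mile(6.0, 20))   # ≈ 0.085 kg per passenger-mile
```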
Sustainable development
The United Nations first formally recognized the role of transport in sustainable development in the 1992 United Nations Earth summit. In the 2012 United Nations World Conference, global leaders unanimously recognized that transport and mobility are central to achieving the sustainability targets. In recent years, data has been collected to show that the transport sector contributes to a quarter of the global greenhouse gas emissions, and therefore sustainable transport has been mainstreamed across several of the 2030 Sustainable Development Goals, especially those related to food security, health, energy, economic growth, infrastructure, and cities and human settlements. Meeting sustainable transport targets is said to be particularly important to achieving the Paris Agreement.
There are various Sustainable Development Goals (SDGs) that are promoting sustainable transport to meet the defined goals. These include SDG 3 on health (increased road safety), SDG 7 on energy, SDG 8 on decent work and economic growth, SDG 9 on resilient infrastructure, SDG 11 on sustainable cities (access to transport and expanded public transport), SDG 12 on sustainable consumption and production (ending fossil fuel subsidies), and SDG 14 on oceans, seas, and marine resources.
History
Natural
Humans' first ways to move included walking, running, and swimming. The domestication of animals introduced a new way to lay the burden of transport on more powerful creatures, allowing the hauling of heavier loads, or humans riding animals for greater speed and duration. Inventions such as the wheel and the sled (U.K. sledge) helped make animal transport more efficient through the introduction of vehicles.
The first forms of road transport involved animals, such as horses (domesticated in the 4th or the 3rd millennium BCE), oxen (from about 8000 BCE), or humans carrying goods over dirt tracks that often followed game trails.
Water transport
Water transport, including rowed and sailed vessels, dates back to time immemorial and was the only efficient way to transport large quantities or over large distances prior to the Industrial Revolution. The first watercraft were canoes cut out from tree trunks. Early water transport was accomplished with ships that were either rowed or used the wind for propulsion, or a combination of the two. The importance of water has led to most cities that grew up as sites for trading being located on rivers or on the sea-shore, often at the intersection of two bodies of water.
Mechanical
Until the Industrial Revolution, transport remained slow and costly, and production and consumption gravitated as close to each other as feasible. The Industrial Revolution in the 19th century saw several inventions fundamentally change transport. With telegraphy, communication became instant and independent of the transport of physical objects. The invention of the steam engine, closely followed by its application in rail transport, made land transport independent of human or animal muscles. Both speed and capacity increased, allowing specialization through manufacturing being located independently of natural resources. The 19th century also saw the development of the steam ship, which sped up global transport.
With the development of the combustion engine and the automobile around 1900, road transport became more competitive again, and mechanical private transport originated. The first "modern" highways were constructed during the 19th century with macadam. Later, tarmac and concrete became the dominant paving materials.
In 1903 the Wright brothers demonstrated the first successful controllable airplane, and after World War I (1914–1918) aircraft became a fast way to transport people and express goods over long distances.
After World War II (1939–1945) the automobile and airlines took higher shares of transport, reducing rail and water to freight and short-haul passenger services. Scientific spaceflight began in the 1950s, with rapid growth until the 1970s, when interest dwindled. In the 1950s the introduction of containerization gave massive efficiency gains in freight transport, fostering globalization. International air travel became much more accessible in the 1960s with the commercialization of the jet engine. Along with the growth in automobiles and motorways, rail and water transport declined in relative importance. After the introduction of the Shinkansen in Japan in 1964, high-speed rail in Asia and Europe started attracting passengers on long-haul routes away from the airlines.
Early in U.S. history, private joint-stock corporations owned most aqueducts, bridges, canals, railroads, roads, and tunnels. Most such transport infrastructure came under government control in the late 19th and early 20th centuries, culminating in the nationalization of inter-city passenger rail-service with the establishment of Amtrak. Recently, however, a movement to privatize roads and other infrastructure has gained some ground and adherents.
See also
Car-free movement
Energy efficiency in transport
Environmental impact of aviation
Free public transport
Green transport hierarchy
Health and environmental impact of transport
Health impact of light rail systems
IEEE Intelligent Transportation Systems Society
Journal of Transport and Land Use
List of emerging transportation technologies
Outline of transport
Personal rapid transit
Public transport
Public transport accessibility level
Rail transport by country
Speed record
Taxicabs by country
Transport divide
Transportation engineering
References
Bibliography
Further reading
McKibben, Bill, "Toward a Land of Buses and Bikes" (review of Ben Goldfarb, Crossings: How Road Ecology Is Shaping the Future of Our Planet, Norton, 2023, 370 pp.; and Henry Grabar, Paved Paradise: How Parking Explains the World, Penguin Press, 2023, 346 pp.), The New York Review of Books, vol. LXX, no. 15 (5 October 2023), pp. 30–32. "Someday in the not impossibly distant future, if we manage to prevent a global warming catastrophe, you could imagine a post-auto world where bikes and buses and trains are ever more important, as seems to be happening in Europe at the moment." (p. 32.)
External links
Transportation from UCB Libraries GovPubs
America On the Move An online transportation exhibition from the National Museum of American History, Smithsonian Institution
Economics of transport and utility industries
Logistics | Transport | [
"Physics"
] | 6,168 | [
"Physical systems",
"Transport"
] |
18,581,463 | https://en.wikipedia.org/wiki/Water%20resources | Water resources are natural resources of water that are potentially useful for humans, for example as a source of drinking water supply or irrigation water. These resources can be either freshwater from natural sources, or water produced artificially from other sources, such as from reclaimed water (wastewater) or desalinated water (seawater). 97% of the water on Earth is salt water and only three percent is fresh water; slightly over two-thirds of this is frozen in glaciers and polar ice caps. The remaining unfrozen freshwater is found mainly as groundwater, with only a small fraction present above ground or in the air. Natural sources of fresh water include surface water, under river flow, groundwater and frozen water. People use water resources for agricultural, industrial and household activities.
Water resources are under threat from multiple issues. There is water scarcity, water pollution, water conflict and climate change. Fresh water is in principle a renewable resource. However, the world's supply of groundwater is steadily decreasing. Groundwater depletion (or overdrafting) is occurring for example in Asia, South America and North America.
Natural sources of fresh water
Natural sources of fresh water include surface water, under river flow, groundwater and frozen water.
Surface water
Surface water is water in a river, lake or fresh water wetland. Surface water is naturally replenished by precipitation and naturally lost through discharge to the oceans, evaporation, evapotranspiration and groundwater recharge. The only natural input to any surface water system is precipitation within its watershed. The total quantity of water in that system at any given time is also dependent on many other factors. These factors include storage capacity in lakes, wetlands and artificial reservoirs, the permeability of the soil beneath these storage bodies, the runoff characteristics of the land in the watershed, the timing of the precipitation and local evaporation rates. All of these factors also affect the proportions of water loss.
Humans often increase storage capacity by constructing reservoirs and decrease it by draining wetlands. Humans often increase runoff quantities and velocities by paving areas and channelizing the stream flow.
Natural surface water can be augmented by importing surface water from another watershed through a canal or pipeline.
Brazil is estimated to have the largest supply of fresh water in the world, followed by Russia and Canada.
Water from glaciers
Glacier runoff is considered to be surface water. The Himalayas, which are often called "The Roof of the World", contain some of the most extensive and rugged high-altitude areas on Earth, as well as the greatest area of glaciers and permafrost outside the poles. Ten of Asia's largest rivers flow from there, and more than a billion people's livelihoods depend on them. To complicate matters, temperatures there are rising more rapidly than the global average. In Nepal, the temperature has risen by 0.6 degrees Celsius over the last decade, whereas globally, the Earth has warmed approximately 0.7 degrees Celsius over the last hundred years.
Groundwater
Under river flow
Throughout the course of a river, the total volume of water transported downstream will often be a combination of the visible free water flow together with a substantial contribution flowing through rocks and sediments that underlie the river and its floodplain called the hyporheic zone. For many rivers in large valleys, this unseen component of flow may greatly exceed the visible flow. The hyporheic zone often forms a dynamic interface between surface water and groundwater from aquifers, exchanging flow between rivers and aquifers that may be fully charged or depleted. This is especially significant in karst areas where pot-holes and underground rivers are common.
Artificial sources of usable water
There are several artificial sources of fresh water. One is treated wastewater (reclaimed water). Another is atmospheric water generators. Desalinated seawater is another important source. It is important to consider the economic and environmental side effects of these technologies.
Wastewater reuse
Desalinated water
Research into other options
Researchers have proposed capturing humid air over the oceans, which could significantly increase the supply of fresh water, to address present and, especially, future water scarcity and insecurity.
A 2021 study proposed hypothetical portable solar-powered atmospheric water harvesting devices. However, such off-the-grid generation may sometimes "undermine efforts to develop permanent piped infrastructure" among other problems.
Water uses
The total quantity of water available at any given time is an important consideration. Some human water users have an intermittent need for water. For example, many farms require large quantities of water in the spring, and no water at all in the winter. Other users have a continuous need for water, such as a power plant that requires water for cooling. Over the long term the average rate of precipitation within a watershed is the upper bound for average consumption of natural surface water from that watershed.
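One way to see this bound is through a simple long-term water balance for the watershed (an illustrative textbook identity, not a figure from the sources above): P = ET + Q + ΔS, where P is precipitation, ET is evapotranspiration, Q is runoff available as surface water, and ΔS is the change in storage. Averaged over many years, ΔS tends toward zero, so sustainable consumption of natural surface water cannot exceed P − ET.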
Agriculture and other irrigation
Industries
It is estimated that 22% of worldwide water is used in industry. Major industrial users include hydroelectric dams, thermoelectric power plants, which use water for cooling, ore and oil refineries, which use water in chemical processes, and manufacturing plants, which use water as a solvent. Water withdrawal can be very high for certain industries, but consumption is generally much lower than that of agriculture.
Water is used in renewable power generation. Hydroelectric power derives energy from the force of water flowing downhill, driving a turbine connected to a generator. This hydroelectricity is a low-cost, non-polluting, renewable energy source. Significantly, hydroelectric power can also be used for load following unlike most renewable energy sources which are intermittent. Ultimately, the energy in a hydroelectric power plant is supplied by the sun. Heat from the sun evaporates water, which condenses as rain in higher altitudes and flows downhill. Pumped-storage hydroelectric plants also exist, which use grid electricity to pump water uphill when demand is low, and use the stored water to produce electricity when demand is high.
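As a rough, textbook-style illustration (not a figure from this article's sources), the electrical output of a hydroelectric plant can be estimated as P = η ρ g Q H, where η is the combined turbine and generator efficiency, ρ is the density of water (about 1000 kg/m³), g is gravitational acceleration (about 9.81 m/s²), Q is the volumetric flow rate, and H is the head (the height the water falls). For example, a flow of 100 m³/s over a 50 m head at 90% efficiency yields roughly 0.9 × 1000 × 9.81 × 100 × 50 ≈ 44 MW.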
Thermoelectric power plants using cooling towers have high consumption, nearly equal to their withdrawal, as most of the withdrawn water is evaporated as part of the cooling process. The withdrawal, however, is lower than in once-through cooling systems.
Water is also used in many large scale industrial processes, such as thermoelectric power production, oil refining, fertilizer production and other chemical plant use, and natural gas extraction from shale rock. Discharge of untreated water from industrial uses is pollution. Pollution includes discharged solutes and increased water temperature (thermal pollution).
Drinking water and domestic use (households)
It is estimated that 8% of worldwide water use is for domestic purposes. These include drinking water, bathing, cooking, toilet flushing, cleaning, laundry and gardening. Basic domestic water requirements have been estimated by Peter Gleick at around 50 liters per person per day, excluding water for gardens.
Drinking water is water that is of sufficiently high quality so that it can be consumed or used without risk of immediate or long-term harm. Such water is commonly called potable water. In most developed countries, the water supplied to domestic, commercial, and industrial users is all of drinking water standard, even though only a very small proportion is actually consumed or used in food preparation.
844 million people still lacked even a basic drinking water service in 2017. Of those, 159 million people worldwide drink water directly from surface water sources, such as lakes and streams. One in eight people in the world do not have access to safe water.
Challenges and threats
Water scarcity
Water pollution
Water conflict
Climate change
Groundwater overdrafting
The world's supply of groundwater is steadily decreasing. Groundwater depletion (or overdrafting) is occurring for example in Asia, South America and North America. It is still unclear how much natural renewal balances this usage, and whether ecosystems are threatened.
Water resource management
Water resource management is the activity of planning, developing, distributing and managing the optimum use of water resources. It is an aspect of water cycle management. The field of water resources management will have to continue to adapt to the current and future issues facing the allocation of water. With the growing uncertainties of global climate change and the long-term impacts of past management actions, this decision-making will be even more difficult. It is likely that ongoing climate change will lead to situations that have not been encountered. As a result, alternative management strategies, including participatory approaches and adaptive capacity are increasingly being used to strengthen water decision-making.
Ideally, water resource management planning has regard to all the competing demands for water and seeks to allocate water on an equitable basis to satisfy all uses and demands. As with other resource management, this is rarely possible in practice so decision-makers must prioritise issues of sustainability, equity and factor optimisation (in that order!) to achieve acceptable outcomes. One of the biggest concerns for water-based resources in the future is the sustainability of the current and future water resource allocation.
Sustainable Development Goal 6 has a target related to water resources management: "Target 6.5: By 2030, implement integrated water resources management at all levels, including through transboundary cooperation as appropriate."
Sustainable water management
At present, only about 0.08 percent of all the world's fresh water is accessible. And there is ever-increasing demand for drinking, manufacturing, leisure and agriculture. Due to the small percentage of water available, optimizing the fresh water we have left from natural resources has been a growing challenge around the world.
Much effort in water resource management is directed at optimizing the use of water and in minimizing the environmental impact of water use on the natural environment. The observation of water as an integral part of the ecosystem is based on integrated water resources management, based on the 1992 Dublin Principles (see below).
Sustainable water management requires a holistic approach based on the principles of Integrated Water Resource Management, originally articulated in 1992 at the Dublin (January) and Rio (July) conferences. The four Dublin Principles, promulgated in the Dublin Statement are:
Fresh water is a finite and vulnerable resource, essential to sustain life, development and the environment;
Water development and management should be based on a participatory approach, involving users, planners and policy-makers at all levels;
Women play a central part in the provision, management and safeguarding of water;
Water has an economic value in all its competing uses and should be recognized as an economic good.
Implementation of these principles has guided reform of national water management law around the world since 1992.
Further challenges to sustainable and equitable water resources management include the fact that many water bodies are shared across boundaries which may be international (see water conflict) or intra-national (see Murray-Darling basin).
Integrated water resources management
Integrated water resources management (IWRM) has been defined by the Global Water Partnership (GWP) as "a process which promotes the coordinated development and management of water, land and related resources, in order to maximize the resultant economic and social welfare in an equitable manner without compromising the sustainability of vital ecosystems".
Some scholars say that IWRM is complementary to water security because water security is a goal or destination, whilst IWRM is the process necessary to achieve that goal.
IWRM is a paradigm that emerged at international conferences in the late 1900s and early 2000s, although participatory water management institutions have existed for centuries. Discussions on a holistic way of managing water resources began as early as the 1950s, leading up to the 1977 United Nations Water Conference. The development of IWRM was particularly recommended in the final statement of the ministers at the International Conference on Water and the Environment in 1992, known as the Dublin Statement. This concept aims to promote changes in practices which are considered fundamental to improved water resource management. IWRM was a topic of the second World Water Forum, which was attended by a more varied group of stakeholders than the preceding conferences and contributed to the creation of the GWP.
In the International Water Association definition, IWRM rests upon three principles that together act as the overall framework:
Social equity: ensuring equal access for all users (particularly marginalized and poorer user groups) to an adequate quantity and quality of water necessary to sustain human well-being.
Economic efficiency: bringing the greatest benefit to the greatest number of users possible with the available financial and water resources.
Ecological sustainability: requiring that aquatic ecosystems are acknowledged as users and that adequate allocation is made to sustain their natural functioning.
In 2002, the development of IWRM was discussed at the World Summit on Sustainable Development held in Johannesburg, which aimed to encourage the implementation of IWRM at a global level. The third World Water Forum recommended IWRM and discussed information sharing, stakeholder participation, and gender and class dynamics.
Operationally, IWRM approaches involve applying knowledge from various disciplines as well as the insights from diverse stakeholders to devise and implement efficient, equitable and sustainable solutions to water and development problems. As such, IWRM is a comprehensive, participatory planning and implementation tool for managing and developing water resources in a way that balances social and economic needs, and that ensures the protection of ecosystems for future generations. In addition, in light of contributing to the achievement of the Sustainable Development Goals (SDGs), IWRM has been evolving into a more sustainable approach as it considers the Nexus approach, a cross-sectoral form of water resource management. The Nexus approach is based on the recognition that "water, energy and food are closely linked through global and local water, carbon and energy cycles or chains."
An IWRM approach aims to avoid a fragmented approach to water resources management by considering the following aspects: the enabling environment, the roles of institutions, and management instruments. Some of the cross-cutting conditions that are also important to consider when implementing IWRM are: political will and commitment, capacity development, adequate investment, financial stability and sustainable cost recovery, and monitoring and evaluation. There is not one correct administrative model. The art of IWRM lies in selecting, adjusting and applying the right mix of these tools for a given situation. IWRM practices depend on context; at the operational level, the challenge is to translate the agreed principles into concrete action.
Managing water in urban settings
By country
Water resource management and governance is handled differently by different countries. For example, in the United States, the United States Geological Survey (USGS) and its partners monitor water resources, conduct research and inform the public about groundwater quality. Water resources in specific countries are described below:
See also
List of sovereign states by freshwater withdrawal
List of countries by total renewable water resources
References
External links
Renewable water resources in the world by country
Portal to international hydrology and water resources
Sustainable Sanitation and Water Management Toolbox
Aquatic ecology
Hydrology
Irrigation
Natural resources
Water and the environment
Water management
Resources
Water resources management
Water industry
Sanitation | Water resources | [
"Chemistry",
"Engineering",
"Biology",
"Environmental_science"
] | 3,013 | [
"Hydrology",
"Ecosystems",
"Water industry",
"Environmental engineering",
"Aquatic ecology",
"Water supply"
] |
18,582,186 | https://en.wikipedia.org/wiki/Benzene | Benzene is an organic chemical compound with the molecular formula C6H6. The benzene molecule is composed of six carbon atoms joined in a planar hexagonal ring with one hydrogen atom attached to each. Because it contains only carbon and hydrogen atoms, benzene is classed as a hydrocarbon.
Benzene is a natural constituent of petroleum and is one of the elementary petrochemicals. Due to the cyclic continuous pi bonds between the carbon atoms, benzene is classed as an aromatic hydrocarbon. Benzene is a colorless and highly flammable liquid with a sweet smell, and is partially responsible for the aroma of gasoline. It is used primarily as a precursor to the manufacture of chemicals with more complex structures, such as ethylbenzene and cumene, of which billions of kilograms are produced annually. Although benzene is a major industrial chemical, it finds limited use in consumer items because of its toxicity. Benzene is a volatile organic compound.
Benzene is classified as a carcinogen. Its particular effects on human health, such as the long-term results of accidental exposure, have been reported on by news organizations such as The New York Times. For instance, a 2022 article stated that benzene contamination in the Boston metropolitan area caused hazardous conditions in multiple places, with the publication noting that the compound may eventually cause leukemia in some individuals.
History
Discovery
The word "benzene" derives from "gum benzoin" (benzoin resin), an aromatic resin known since ancient times in Southeast Asia, and later to European pharmacists and perfumers in the 16th century via trade routes. An acidic material was derived from benzoin by sublimation, and named "flowers of benzoin", or benzoic acid. The hydrocarbon derived from benzoic acid thus acquired the name benzin, benzol, or benzene. Michael Faraday first isolated and identified benzene in 1825 from the oily residue derived from the production of illuminating gas, giving it the name bicarburet of hydrogen. In 1833, Eilhard Mitscherlich produced it by distilling benzoic acid (from gum benzoin) and lime. He gave the compound the name benzin. In 1836, the French chemist Auguste Laurent named the substance "phène"; this word has become the root of the English word "phenol", which is hydroxylated benzene, and "phenyl", the radical formed by abstraction of a hydrogen atom from benzene.
In 1845, Charles Blachford Mansfield, working under August Wilhelm von Hofmann, isolated benzene from coal tar. Four years later, Mansfield began the first industrial-scale production of benzene, based on the coal-tar method. Gradually, the sense developed among chemists that a number of substances were chemically related to benzene, comprising a diverse chemical family. In 1855, Hofmann was the first to apply the word "aromatic" to designate this family relationship, after a characteristic property of many of its members. In 1997, benzene was detected in deep space.
Ring formula
The empirical formula for benzene was long known, but its highly polyunsaturated structure, with just one hydrogen atom for each carbon atom, was challenging to determine. Archibald Scott Couper in 1858 and Johann Josef Loschmidt in 1861 suggested possible structures that contained multiple double bonds or multiple rings, but in these years very little was known about aromatic chemistry, and so chemists were unable to adduce appropriate evidence to favor any particular formula.
But many chemists had begun to work on aromatic substances, especially in Germany, and relevant data was coming fast. In 1865, the German chemist Friedrich August Kekulé published a paper in French (for he was then teaching in Francophone Belgium) suggesting that the structure contained a ring of six carbon atoms with alternating single and double bonds. The next year he published a much longer paper in German on the same subject. Kekulé used evidence that had accumulated in the intervening years—namely, that there always appeared to be only one isomer of any monoderivative of benzene, and that there always appeared to be exactly three isomers of every disubstituted derivative—now understood to correspond to the ortho, meta, and para patterns of arene substitution—to argue in support of his proposed structure. Kekulé's symmetrical ring could explain these curious facts, as well as benzene's 1:1 carbon-hydrogen ratio.
The new understanding of benzene, and hence of all aromatic compounds, proved to be so important for both pure and applied chemistry that in 1890 the German Chemical Society organized an elaborate appreciation in Kekulé's honor, celebrating the twenty-fifth anniversary of his first benzene paper. Here Kekulé spoke of the creation of the theory. He said that he had discovered the ring shape of the benzene molecule after having a reverie or day-dream of a snake biting its own tail (a symbol in ancient cultures known as the ouroboros). This vision, he said, came to him after years of studying the nature of carbon-carbon bonds. This was seven years after he had solved the problem of how carbon atoms could bond to up to four other atoms at the same time. Curiously, a similar, humorous depiction of benzene had appeared in 1886 in a pamphlet entitled Berichte der Durstigen Chemischen Gesellschaft (Journal of the Thirsty Chemical Society), a parody of the Berichte der Deutschen Chemischen Gesellschaft, only the parody had monkeys seizing each other in a circle, rather than snakes as in Kekulé's anecdote. Some historians have suggested that the parody was a lampoon of the snake anecdote, possibly already well known through oral transmission even if it had not yet appeared in print. Kekulé's 1890 speech in which this anecdote appeared has been translated into English. If the anecdote is the memory of a real event, circumstances mentioned in the story suggest that it must have happened early in 1862.
In 1929, the cyclic nature of benzene was finally confirmed by the crystallographer Kathleen Lonsdale using X-ray diffraction methods. Using large crystals of hexamethylbenzene, a benzene derivative with the same core of six carbon atoms, Lonsdale obtained diffraction patterns. Through calculating more than thirty parameters, Lonsdale demonstrated that the benzene ring could not be anything but a flat hexagon, and provided accurate distances for all carbon-carbon bonds in the molecule.
Nomenclature
The German chemist Wilhelm Körner suggested the prefixes ortho-, meta-, para- to distinguish di-substituted benzene derivatives in 1867; however, he did not use the prefixes to distinguish the relative positions of the substituents on a benzene ring. It was the German chemist Carl Gräbe who, in 1869, first used the prefixes ortho-, meta-, para- to denote specific relative locations of the substituents on a di-substituted aromatic ring (viz, naphthalene). In 1870, the German chemist Viktor Meyer first applied Gräbe's nomenclature to benzene.
Early applications
In 1903, Ludwig Roselius popularized the use of benzene to decaffeinate coffee. This discovery led to the production of Sanka. This process was later discontinued. Benzene was historically used as a significant component in many consumer products such as liquid wrench, several paint strippers, rubber cements, spot removers, and other products. Manufacture of some of these benzene-containing formulations ceased in about 1950, although Liquid Wrench continued to contain significant amounts of benzene until the late 1970s.
Occurrence
Trace amounts of benzene are found in petroleum and coal. It is a byproduct of the incomplete combustion of many materials. For commercial use, until World War II, much of benzene was obtained as a by-product of coke production (or "coke-oven light oil") for the steel industry. However, in the 1950s, increased demand for benzene, especially from the growing polymers industry, necessitated the production of benzene from petroleum. Today, most benzene comes from the petrochemical industry, with only a small fraction being produced from coal. Benzene has been detected on Mars.
Structure
X-ray diffraction shows that all six carbon-carbon bonds in benzene are of the same length, at 140 picometres (pm). The C–C bond lengths are greater than a double bond (135 pm) but shorter than a single bond (147 pm). This intermediate distance is caused by electron delocalization: the electrons for C=C bonding are distributed equally between each of the six carbon atoms. Benzene has 6 hydrogen atoms, fewer than the corresponding parent alkane, hexane, which has 14. Benzene and cyclohexane have a similar structure; only the ring of delocalized electrons and the loss of one hydrogen per carbon distinguish benzene from cyclohexane. The molecule is planar. The molecular orbital description involves the formation of three delocalized π orbitals spanning all six carbon atoms, while the valence bond description involves a superposition of resonance structures. It is likely that this stability contributes to the peculiar molecular and chemical properties known as aromaticity. To reflect the delocalized nature of the bonding, benzene is often depicted with a circle inside a hexagonal arrangement of carbon atoms.
Derivatives of benzene occur sufficiently often as a component of organic molecules, so much so that the Unicode Consortium has allocated a symbol in the Miscellaneous Technical block with the code U+232C (⌬) to represent it with three double bonds, and U+23E3 (⏣) for a delocalized version.
Benzene derivatives
Many important chemical compounds are derived from benzene by replacing one or more of its hydrogen atoms with another functional group. Examples of simple benzene derivatives are phenol, toluene, and aniline, abbreviated PhOH, PhMe, and PhNH2, respectively. Linking benzene rings gives biphenyl, C6H5–C6H5. Further loss of hydrogen gives "fused" aromatic hydrocarbons, such as naphthalene, anthracene, phenanthrene, and pyrene. The limit of the fusion process is the hydrogen-free allotrope of carbon, graphite.
In heterocycles, carbon atoms in the benzene ring are replaced with other elements. The most important variations contain nitrogen. Replacing one CH with N gives the compound pyridine, C5H5N. Although benzene and pyridine are structurally related, benzene cannot be converted into pyridine. Replacement of a second CH bond with N gives, depending on the location of the second N, pyridazine, pyrimidine, or pyrazine.
Production
Four chemical processes contribute to industrial benzene production: catalytic reforming, toluene hydrodealkylation, toluene disproportionation, and steam cracking. According to the ATSDR Toxicological Profile for benzene, between 1978 and 1981, catalytic reformates accounted for approximately 44–50% of the total U.S. benzene production.
Catalytic reforming
In catalytic reforming, a mixture of hydrocarbons with boiling points between 60 and 200 °C is blended with hydrogen gas and then exposed to a bifunctional platinum chloride or rhenium chloride catalyst at 500–525 °C and pressures ranging from 8–50 atm. Under these conditions, aliphatic hydrocarbons form rings and lose hydrogen to become aromatic hydrocarbons. The aromatic products of the reaction are then separated from the reaction mixture (or reformate) by extraction with any one of a number of solvents, including diethylene glycol or sulfolane, and benzene is then separated from the other aromatics by distillation. The extraction step is designed to produce aromatics with the lowest content of non-aromatic components. Recovery of the aromatics, commonly referred to as BTX (benzene, toluene and xylene isomers), involves such extraction and distillation steps.
In a similar fashion to catalytic reforming, UOP and BP commercialized a method for converting LPG (mainly propane and butane) to aromatics.
Toluene hydrodealkylation
Toluene hydrodealkylation converts toluene to benzene. In this hydrogen-intensive process, toluene is mixed with hydrogen, then passed over a chromium, molybdenum, or platinum oxide catalyst at 500–650 °C and 20–60 atm pressure. Sometimes, higher temperatures are used instead of a catalyst (at the similar reaction condition). Under these conditions, toluene undergoes dealkylation to benzene and methane:
C6H5CH3 + H2 → C6H6 + CH4
This irreversible reaction is accompanied by an equilibrium side reaction that produces biphenyl (diphenyl) at higher temperature:
2 C6H6 ⇌ C6H5–C6H5 + H2
If the raw material stream contains a significant amount of non-aromatic components (paraffins or naphthenes), these are likely to be decomposed to lower hydrocarbons such as methane, which increases the consumption of hydrogen.
A typical reaction yield exceeds 95%. Sometimes, xylenes and heavier aromatics are used in place of toluene, with similar efficiency.
This is often called "on-purpose" methodology to produce benzene, compared to conventional BTX (benzene-toluene-xylene) extraction processes.
Toluene disproportionation
Toluene disproportionation (TDP) is the conversion of toluene to benzene and xylene.
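A simplified overall equation for the disproportionation, written here for illustration, is:
2 C6H5CH3 → C6H6 + C6H4(CH3)2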
Given that demand for para-xylene (p-xylene) substantially exceeds demand for other xylene isomers, a refinement of the TDP process called Selective TDP (STDP) may be used. In this process, the xylene stream exiting the TDP unit is approximately 90% p-xylene. In some systems, even the benzene-to-xylenes ratio is modified to favor xylenes.
Steam cracking
Steam cracking is the process for producing ethylene and other alkenes from aliphatic hydrocarbons. Depending on the feedstock used to produce the olefins, steam cracking can produce a benzene-rich liquid by-product called pyrolysis gasoline. Pyrolysis gasoline can be blended with other hydrocarbons as a gasoline additive, or routed through an extraction process to recover BTX aromatics (benzene, toluene and xylenes).
Other methods
Although of no commercial significance, many other routes to benzene exist. Phenol and halobenzenes can be reduced with metals. Benzoic acid and its salts undergo decarboxylation to benzene. The reaction of the diazonium compound derived from aniline with hypophosphorous acid gives benzene. Alkyne trimerisation of acetylene gives benzene. Complete decarboxylation of mellitic acid gives benzene.
Uses
Benzene is used mainly as an intermediate to make other chemicals, above all ethylbenzene (and other alkylbenzenes), cumene, cyclohexane, and nitrobenzene. In 1988 it was reported that two-thirds of all chemicals on the American Chemical Society's lists contained at least one benzene ring. More than half of the entire benzene production is processed into ethylbenzene, a precursor to styrene, which is used to make polymers and plastics like polystyrene. Some 20% of the benzene production is used to manufacture cumene, which is needed to produce phenol and acetone for resins and adhesives. Cyclohexane consumes around 10% of the world's benzene production; it is primarily used in the manufacture of nylon fibers, which are processed into textiles and engineering plastics. Smaller amounts of benzene are used to make some types of rubbers, lubricants, dyes, detergents, drugs, explosives, and pesticides. In 2013, the biggest consumer country of benzene was China, followed by the USA. Benzene production is currently expanding in the Middle East and in Africa, whereas production capacities in Western Europe and North America are stagnating.
Toluene is now often used as a substitute for benzene, for instance as a fuel additive. The solvent-properties of the two are similar, but toluene is less toxic and has a wider liquid range. Toluene is also processed into benzene.
Component of gasoline
As a gasoline (petrol) additive, benzene increases the octane rating and reduces knocking. As a consequence, gasoline often contained several percent benzene before the 1950s, when tetraethyl lead replaced it as the most widely used antiknock additive. With the global phaseout of leaded gasoline, benzene has made a comeback as a gasoline additive in some nations. In the United States, concern over its negative health effects and the possibility of benzene entering the groundwater has led to stringent regulation of gasoline's benzene content, with limits typically around 1%. European petrol specifications now contain the same 1% limit on benzene content. The United States Environmental Protection Agency introduced new regulations in 2011 that lowered the benzene content in gasoline to 0.62%.
In some European languages, the word for petroleum or gasoline is an exact cognate of "benzene". For instance in Catalan the word 'benzina' can be used for gasoline, though now it is relatively rare.
Reactions
The most common reactions of benzene involve substitution of a proton by other groups. Electrophilic aromatic substitution is a general method of derivatizing benzene. Benzene is sufficiently nucleophilic that it undergoes substitution by acylium ions and alkyl carbocations to give substituted derivatives.
The most widely practiced example of this reaction is the ethylation of benzene.
Approximately 24,700,000 tons were produced in 1999. Highly instructive but of far less industrial significance is the Friedel-Crafts alkylation of benzene (and many other aromatic rings) using an alkyl halide in the presence of a strong Lewis acid catalyst. Similarly, the Friedel-Crafts acylation is a related example of electrophilic aromatic substitution. The reaction involves the acylation of benzene (or many other aromatic rings) with an acyl chloride using a strong Lewis acid catalyst such as aluminium chloride or iron(III) chloride.
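The ethylation mentioned above can be summarized schematically (overall equation only, omitting the acid catalyst and mechanism) as:
C6H6 + C2H4 → C6H5C2H5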
Sulfonation, chlorination, nitration
Using electrophilic aromatic substitution, many functional groups are introduced onto the benzene framework. Sulfonation of benzene involves the use of oleum, a mixture of sulfuric acid with sulfur trioxide. Sulfonated benzene derivatives are useful detergents. In nitration, benzene reacts with nitronium ions (NO2+), a strong electrophile produced by combining sulfuric and nitric acids. Nitrobenzene is the precursor to aniline. Chlorination is achieved with chlorine to give chlorobenzene in the presence of a Lewis acid catalyst such as aluminium trichloride.
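As an illustration, the overall nitration (omitting the mechanism) can be written as:
C6H6 + HNO3 → C6H5NO2 + H2O (in the presence of H2SO4)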
Hydrogenation
Via hydrogenation, benzene and its derivatives convert to cyclohexane and derivatives. This reaction is achieved by the use of high pressures of hydrogen in the presence of heterogeneous catalysts, such as finely divided nickel. Whereas alkenes can be hydrogenated near room temperature, benzene and related compounds are more reluctant substrates, requiring temperatures >100 °C. This reaction is practiced on a large scale industrially. In the absence of the catalyst, benzene is impervious to hydrogen. Hydrogenation cannot be stopped to give cyclohexene or cyclohexadienes, as these are superior substrates. The Birch reduction, a non-catalytic process, however, selectively hydrogenates benzene to the diene.
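The overall hydrogenation can be summarized schematically as:
C6H6 + 3 H2 → C6H12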
Metal complexes
Benzene is an excellent ligand in the organometallic chemistry of low-valent metals. Important examples include the sandwich and half-sandwich complexes, respectively, Cr(C6H6)2 and [RuCl2(C6H6)]2.
Health effects
Benzene is classified as a carcinogen, which increases the risk of cancer and other illnesses, and is also a notorious cause of bone marrow failure. Substantial quantities of epidemiologic, clinical, and laboratory data link benzene to aplastic anemia, acute leukemia, bone marrow abnormalities and cardiovascular disease. The specific hematologic malignancies that benzene is associated with include: acute myeloid leukemia (AML), aplastic anemia, myelodysplastic syndrome (MDS), acute lymphoblastic leukemia (ALL), and chronic myeloid leukemia (CML).
Carcinogenic activity of benzene was discovered by a Swedish pharmacologist in 1897 in female workers of a tire-making factory. The American Petroleum Institute (API) stated in 1948 that "it is generally considered that the only absolutely safe concentration for benzene is zero". There is no safe exposure level; even tiny amounts can cause harm. The US Department of Health and Human Services (DHHS) classifies benzene as a human carcinogen. Long-term exposure to excessive levels of benzene in the air causes leukemia, a potentially fatal cancer of the blood-forming organs. In particular, acute myeloid leukemia or acute nonlymphocytic leukemia (AML & ANLL) is caused by benzene. IARC rated benzene as "known to be carcinogenic to humans" (Group 1).
As benzene is ubiquitous in gasoline and hydrocarbon fuels that are in use everywhere, human exposure to benzene is a global health problem. Benzene targets the liver, kidney, lung, heart and brain and can cause DNA strand breaks and chromosomal damage, hence is teratogenic and mutagenic. Benzene causes cancer in animals including humans. Benzene has been shown to cause cancer in both sexes of multiple species of laboratory animals exposed via various routes.
Exposure to benzene
According to the Agency for Toxic Substances and Disease Registry (ATSDR) (2007), benzene is both a synthetically made and naturally occurring chemical from processes that include: volcanic eruptions, wild fires, synthesis of chemicals such as phenol, production of synthetic fibers, and fabrication of rubbers, lubricants, pesticides, medications, and dyes. The major sources of benzene exposure are tobacco smoke, automobile service stations, exhaust from motor vehicles, and industrial emissions; however, ingestion and dermal absorption of benzene can also occur through contact with contaminated water. Benzene is hepatically metabolized and excreted in the urine. Measurement of air and water levels of benzene is accomplished through collection via activated charcoal tubes, which are then analyzed with a gas chromatograph. The measurement of benzene in humans can be accomplished via urine, blood, and breath tests; however, all of these have their limitations because benzene is rapidly metabolized in the human body.
Exposure to benzene may lead progressively to aplastic anemia, leukaemia, and multiple myeloma.
OSHA regulates levels of benzene in the workplace. The maximum allowable amount of benzene in workroom air during an 8-hour workday, 40-hour workweek is 1 ppm. As benzene can cause cancer, NIOSH recommends that all workers wear special breathing equipment when they are likely to be exposed to benzene at levels exceeding the recommended (8-hour) exposure limit of 0.1 ppm.
Benzene exposure limits
The United States Environmental Protection Agency has set a maximum contaminant level for benzene in drinking water at 0.005 mg/L (5 ppb), as promulgated via the U.S. National Primary Drinking Water Regulations. This regulation is based on preventing benzene leukemogenesis. The maximum contaminant level goal (MCLG), a nonenforceable health goal that would allow an adequate margin of safety for the prevention of adverse effects, is zero benzene concentration in drinking water. The EPA requires that spills or accidental releases into the environment of 10 pounds (4.5 kg) or more of benzene be reported.
The U.S. Occupational Safety and Health Administration (OSHA) has set a permissible exposure limit of 1 part of benzene per million parts of air (1 ppm) in the workplace during an 8-hour workday, 40-hour workweek. The short term exposure limit for airborne benzene is 5 ppm for 15 minutes. These legal limits were based on studies demonstrating compelling evidence of health risk to workers exposed to benzene. The risk from exposure to 1 ppm for a working lifetime has been estimated as 5 excess leukemia deaths per 1,000 employees exposed. (This estimate assumes no threshold for benzene's carcinogenic effects.) OSHA has also established an action level of 0.5 ppm to encourage even lower exposures in the workplace.
The U.S. National Institute for Occupational Safety and Health (NIOSH) revised the Immediately Dangerous to Life and Health (IDLH) concentration for benzene to 500 ppm. The current NIOSH definition for an IDLH condition, as given in the NIOSH Respirator Selection Logic, is one that poses a threat of exposure to airborne contaminants when that exposure is likely to cause death or immediate or delayed permanent adverse health effects or prevent escape from such an environment. The purpose of establishing an IDLH value is (1) to ensure that the worker can escape from a given contaminated environment in the event of failure of the respiratory protection equipment and (2) is considered a maximum level above which only a highly reliable breathing apparatus providing maximum worker protection is permitted. In September 1995, NIOSH issued a new policy for developing recommended exposure limits (RELs) for substances, including carcinogens. As benzene can cause cancer, NIOSH recommends that all workers wear special breathing equipment when they are likely to be exposed to benzene at levels exceeding the REL (10-hour) of 0.1 ppm. The NIOSH short-term exposure limit (STEL – 15 min) is 1 ppm.
The American Conference of Governmental Industrial Hygienists (ACGIH) has adopted Threshold Limit Values (TLVs) for benzene of 0.5 ppm TWA and 2.5 ppm STEL.
Toxicology
Biomarkers of exposure
Several tests can determine exposure to benzene. Benzene itself can be measured in breath, blood or urine, but such testing is usually limited to the first 24 hours post-exposure due to the relatively rapid removal of the chemical by exhalation or biotransformation. Most people in developed countries have measurable baseline levels of benzene and other aromatic petroleum hydrocarbons in their blood. In the body, benzene is enzymatically converted to a series of oxidation products including muconic acid, phenylmercapturic acid, phenol, catechol, hydroquinone and 1,2,4-trihydroxybenzene. Most of these metabolites have some value as biomarkers of human exposure, since they accumulate in the urine in proportion to the extent and duration of exposure, and they may still be present for some days after exposure has ceased. The current ACGIH biological exposure limits for occupational exposure are 500 μg/g creatinine for muconic acid and 25 μg/g creatinine for phenylmercapturic acid in an end-of-shift urine specimen.
Biotransformations
Although it is not a common substrate for metabolism, benzene can be oxidized by both bacteria and eukaryotes. In bacteria, a dioxygenase enzyme adds oxygen to the ring, and the unstable product is immediately reduced (by NADH) to a cyclic diol with two double bonds, breaking the aromaticity. The diol is then converted by a dehydrogenase to catechol. The catechol is then metabolized to acetyl-CoA and succinyl-CoA, used by organisms mainly in the citric acid cycle for energy production.
The pathway for the metabolism of benzene is complex and begins in the liver. Several enzymes are involved. These include cytochrome P450 2E1 (CYP2E1), quinone oxidoreductase (NQO1, also known as DT-diaphorase or NAD(P)H dehydrogenase (quinone 1)), GSH, and myeloperoxidase (MPO). CYP2E1 is involved at multiple steps: converting benzene to oxepin (benzene oxide), phenol to hydroquinone, and hydroquinone to both benzenetriol and catechol. Hydroquinone, benzenetriol and catechol are converted to polyphenols. In the bone marrow, MPO converts these polyphenols to benzoquinones. These intermediates and metabolites induce genotoxicity by multiple mechanisms including inhibition of topoisomerase II (which maintains chromosome structure), disruption of microtubules (which maintain cellular structure and organization), generation of oxygen free radicals (unstable species) that may lead to point mutations, increasing oxidative stress, inducing DNA strand breaks, and altering DNA methylation (which can affect gene expression). NQO1 and GSH shift metabolism away from toxicity. NQO1 metabolizes benzoquinone toward polyphenols (counteracting the effect of MPO). GSH is involved with the formation of phenylmercapturic acid.
Genetic polymorphisms in these enzymes may induce loss of function or gain of function. For example, mutations in CYP2E1 increase activity and result in increased generation of toxic metabolites. NQO1 mutations result in loss of function and may result in decreased detoxification. Myeloperoxidase mutations result in loss of function and may result in decreased generation of toxic metabolites. GSH mutations or deletions result in loss of function and result in decreased detoxification. These genes may be targets for genetic screening for susceptibility to benzene toxicity.
Molecular toxicology
The paradigm of toxicological assessment of benzene is shifting towards the domain of molecular toxicology, which allows the underlying biological mechanisms to be understood more clearly. Glutathione seems to play an important role by protecting against benzene-induced DNA breaks and is being identified as a new biomarker for exposure and effect. Benzene causes chromosomal aberrations in the peripheral blood leukocytes and bone marrow, explaining the higher incidence of leukemia and multiple myeloma caused by chronic exposure. These aberrations can be monitored using fluorescent in situ hybridization (FISH) with DNA probes to assess the effects of benzene, along with hematological tests as markers of hematotoxicity. Benzene metabolism involves enzymes coded for by polymorphic genes. Studies have shown that genotype at these loci may influence susceptibility to the toxic effects of benzene exposure. Individuals carrying variants of NAD(P)H:quinone oxidoreductase 1 (NQO1) or microsomal epoxide hydrolase (EPHX), or a deletion of glutathione S-transferase T1 (GSTT1), showed a greater frequency of DNA single-strand breaks.
Biological oxidation and carcinogenic activity
One way of understanding the carcinogenic effects of benzene is by examining the products of biological oxidation. Pure benzene, for example, oxidizes in the body to produce an epoxide, benzene oxide, which is not excreted readily and can interact with DNA to produce harmful mutations.
Routes of exposure
Inhalation
Outdoor air may contain low levels of benzene from automobile service stations, wood smoke, tobacco smoke, the transfer of gasoline, exhaust from motor vehicles, and industrial emissions. About 50% of the entire nationwide (United States) exposure to benzene results from smoking tobacco or from exposure to tobacco smoke. A person smoking 32 cigarettes per day would take in about 1.8 milligrams (mg) of benzene, roughly 10 times the average daily intake of a nonsmoker.
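The per-cigarette dose implied by these figures follows directly from the numbers quoted above. The short sketch below only restates that arithmetic and assumes nothing beyond the quoted values; it is illustrative, not a dosimetry model.

```python
# Back-of-the-envelope arithmetic from the figures above: 1.8 mg/day of benzene
# for a 32-cigarette-per-day smoker, which is about 10 times a nonsmoker's intake.
benzene_per_day_mg = 1.8
cigarettes_per_day = 32

per_cigarette_ug = benzene_per_day_mg / cigarettes_per_day * 1000  # mg -> µg
nonsmoker_daily_mg = benzene_per_day_mg / 10

print(f"~{per_cigarette_ug:.0f} µg of benzene per cigarette")   # ≈ 56 µg
print(f"~{nonsmoker_daily_mg:.2f} mg/day for a nonsmoker")      # ≈ 0.18 mg
```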
Inhaled benzene is primarily expelled unchanged through exhalation. In a human study 16.4 to 41.6% of retained benzene was eliminated through the lungs within five to seven hours after a two- to three-hour exposure to 47 to 110 ppm and only 0.07 to 0.2% of the remaining benzene was excreted unchanged in the urine. After exposure to 63 to 405 mg/m3 of benzene for 1 to 5 hours, 51 to 87% was excreted in the urine as phenol over a period of 23 to 50 hours. In another human study, 30% of absorbed dermally applied benzene, which is primarily metabolized in the liver, was excreted as phenol in the urine.
Exposure from soft drinks
Under specific conditions and in the presence of other chemicals, benzoic acid (a preservative) and ascorbic acid (vitamin C) may interact to produce benzene. In March 2006, the Food Standards Agency in the United Kingdom conducted a survey of 150 brands of soft drinks. It found that four contained benzene levels above World Health Organization limits. The affected batches were removed from sale. Similar problems were reported by the FDA in the United States.
Contamination of water supply
In 2005, the water supply to the city of Harbin in China, with a population of almost nine million people, was cut off because of major benzene contamination. Benzene leaked into the Songhua River, which supplies drinking water to the city, after an explosion at a China National Petroleum Corporation (CNPC) factory in the city of Jilin on 13 November 2005.
When plastic water pipes are subject to high heat, the water may be contaminated with benzene.
Genocide
The Nazi German government used benzene administered via injection as one of its many methods of killing.
See also
BTEX
1,2,3-Cyclohexatriene
Industrial Union Department v. American Petroleum Institute
Six-membered aromatic rings with one carbon replaced by another element: borabenzene, silabenzene, germabenzene, stannabenzene, pyridine, phosphorine, arsabenzene, bismabenzene, pyrylium, thiopyrylium, selenopyrylium, telluropyrylium
Explanatory notes
References
External links
Benzene at The Periodic Table of Videos (University of Nottingham)
International Chemical Safety Card 0015
USEPA Summary of Benzene Toxicity
NIOSH Pocket Guide to Chemical Hazards
Dept. of Health and Human Services: TR-289: Toxicology and Carcinogenesis Studies of Benzene
Video Recording of Sir John Cadogan giving a lecture on Benzene at the Royal Institution, 22nd September 1991
Substance profile
NLM Hazardous Substances Databank – Benzene
Annulenes
Aromatic hydrocarbons
Aromatic solvents
Carcinogens
Commodity chemicals
GABAA receptor positive allosteric modulators
Hazardous air pollutants
Hydrocarbon solvents
IARC Group 1 carcinogens
Immunotoxins
Mutagens
Chemical hazards
Petrochemicals
Simple aromatic rings
Six-membered rings
Soil contamination
Sweet-smelling chemicals
Teratogens | Benzene | [
"Chemistry",
"Environmental_science"
] | 7,423 | [
"Toxicology",
"Commodity chemicals",
"Products of chemical industry",
"Environmental chemistry",
"Chemical hazards",
"Soil contamination",
"Carcinogens",
"Teratogens",
"Petrochemicals"
] |
18,582,230 | https://en.wikipedia.org/wiki/Methane | Methane ( , ) is a chemical compound with the chemical formula (one carbon atom bonded to four hydrogen atoms). It is a group-14 hydride, the simplest alkane, and the main constituent of natural gas. The abundance of methane on Earth makes it an economically attractive fuel, although capturing and storing it is difficult because it is a gas at standard temperature and pressure. In the Earth's atmosphere methane is transparent to visible light but absorbs infrared radiation, acting as a greenhouse gas. Methane is an organic compound, and among the simplest of organic compounds. Methane is also a hydrocarbon.
Naturally occurring methane is found both below ground and under the seafloor and is formed by both geological and biological processes. The largest reservoir of methane is under the seafloor in the form of methane clathrates. When methane reaches the surface and the atmosphere, it is known as atmospheric methane.
The Earth's atmospheric methane concentration has increased by about 160% since 1750, with the overwhelming percentage caused by human activity. It accounted for 20% of the total radiative forcing from all of the long-lived and globally mixed greenhouse gases, according to the 2021 Intergovernmental Panel on Climate Change report. Strong, rapid and sustained reductions in methane emissions could limit near-term warming and improve air quality by reducing global surface ozone.
Methane has also been detected on other planets, including Mars, which has implications for astrobiology research.
Properties and bonding
Methane is a tetrahedral molecule with four equivalent C–H bonds. Its electronic structure is described by four bonding molecular orbitals (MOs) resulting from the overlap of the valence orbitals on C and H. The lowest-energy MO is the result of the overlap of the 2s orbital on carbon with the in-phase combination of the 1s orbitals on the four hydrogen atoms. Above this energy level is a triply degenerate set of MOs that involve overlap of the 2p orbitals on carbon with various linear combinations of the 1s orbitals on hydrogen. The resulting "three-over-one" bonding scheme is consistent with photoelectron spectroscopic measurements.
Methane is an odorless, colourless and transparent gas at standard temperature and pressure. It does absorb visible light, especially at the red end of the spectrum, due to overtone bands, but the effect is only noticeable if the light path is very long. This is what gives Uranus and Neptune their blue or bluish-green colors, as light passes through their atmospheres containing methane and is then scattered back out.
The familiar smell of natural gas as used in homes is achieved by the addition of an odorant, usually blends containing tert-butylthiol, as a safety measure. Methane has a boiling point of −161.5 °C at a pressure of one atmosphere. As a gas, it is flammable over a range of concentrations (5.4%–17%) in air at standard pressure.
Solid methane exists in several modifications, of which nine are known. Cooling methane at normal pressure results in the formation of methane I. This substance crystallizes in the cubic system (space group Fm3m). The positions of the hydrogen atoms are not fixed in methane I, i.e. methane molecules may rotate freely. Therefore, it is a plastic crystal.
Chemical reactions
The primary chemical reactions of methane are combustion, steam reforming to syngas, and halogenation. In general, methane reactions are difficult to control.
Selective oxidation
Partial oxidation of methane to methanol (CH3OH), a more convenient, liquid fuel, is challenging because the reaction typically progresses all the way to carbon dioxide and water even with an insufficient supply of oxygen. The enzyme methane monooxygenase produces methanol from methane, but cannot be used for industrial-scale reactions. Some homogeneously catalyzed systems and heterogeneous systems have been developed, but all have significant drawbacks. These generally operate by generating protected products which are shielded from overoxidation. Examples include the Catalytica system, copper zeolites, and iron zeolites stabilizing the alpha-oxygen active site.
One group of bacteria catalyze methane oxidation with nitrite as the oxidant in the absence of oxygen, giving rise to the so-called anaerobic oxidation of methane.
Acid–base reactions
Like other hydrocarbons, methane is an extremely weak acid. Its pKa in DMSO is estimated to be 56. It cannot be deprotonated in solution, but the conjugate base is known in forms such as methyllithium.
A variety of positive ions derived from methane have been observed, mostly as unstable species in low-pressure gas mixtures. These include methenium or the methyl cation (CH3+), the methane cation (CH4+), and methanium or protonated methane (CH5+). Some of these have been detected in outer space. Methanium can also be produced as diluted solutions from methane with superacids. Cations with higher charge, such as CH62+ and CH73+, have been studied theoretically and conjectured to be stable.
Despite the strength of its C–H bonds, there is intense interest in catalysts that facilitate C–H bond activation in methane (and other lower alkanes).
Combustion
Methane's heat of combustion is 55.5 MJ/kg. Combustion of methane is a multiple step reaction summarized as follows:
CH4 + 2 O2 → CO2 + 2 H2O (ΔH = −891 kJ/mol, at standard conditions)
Peters four-step chemistry is a systematically reduced four-step chemistry that explains the burning of methane.
Methane radical reactions
Given appropriate conditions, methane reacts with halogen radicals as follows:
X• + CH4 → HX + CH3•
CH3• + X2 → CH3X + X•
where X is a halogen: fluorine (F), chlorine (Cl), bromine (Br), or iodine (I). The mechanism for this process is called free radical halogenation. It is initiated when UV light or some other radical initiator (like peroxides) produces a halogen atom. A two-step chain reaction ensues in which the halogen atom abstracts a hydrogen atom from a methane molecule, resulting in the formation of a hydrogen halide molecule and a methyl radical (CH3•). The methyl radical then reacts with a molecule of the halogen to form a molecule of the halomethane, with a new halogen atom as byproduct. Similar reactions can occur on the halogenated product, leading to replacement of additional hydrogen atoms by halogen atoms with dihalomethane, trihalomethane, and ultimately, tetrahalomethane structures, depending upon reaction conditions and the halogen-to-methane ratio.
This reaction is commonly used with chlorine to produce dichloromethane and chloroform via chloromethane. Carbon tetrachloride can be made with excess chlorine.
Uses
Methane may be transported as a refrigerated liquid (liquefied natural gas, or LNG). While leaks from a refrigerated liquid container are initially heavier than air due to the increased density of the cold gas, the gas at ambient temperature is lighter than air. Gas pipelines distribute large amounts of natural gas, of which methane is the principal component.
Fuel
Methane is used as a fuel for ovens, homes, water heaters, kilns, automobiles, turbines, etc.
As the major constituent of natural gas, methane is important for electricity generation by burning it as a fuel in a gas turbine or steam generator. Compared to other hydrocarbon fuels, methane produces less carbon dioxide for each unit of heat released. At about 891 kJ/mol, methane's heat of combustion is lower than that of any other hydrocarbon, but the ratio of the heat of combustion (891 kJ/mol) to the molecular mass (16.0 g/mol, of which 12.0 g/mol is carbon) shows that methane, being the simplest hydrocarbon, produces more heat per mass unit (55.7 kJ/g) than other complex hydrocarbons. In many areas with a dense enough population, methane is piped into homes and businesses for heating, cooking, and industrial uses. In this context it is usually known as natural gas, which is considered to have an energy content of 39 megajoules per cubic meter, or 1,000 BTU per standard cubic foot. Liquefied natural gas (LNG) is predominantly methane () converted into liquid form for ease of storage or transport.
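The per-mass figure quoted above follows directly from the molar quantities in the same sentence:

```latex
\frac{-\Delta H_{\mathrm{c}}}{M(\mathrm{CH_4})}
  = \frac{891\ \mathrm{kJ/mol}}{16.0\ \mathrm{g/mol}}
  \approx 55.7\ \mathrm{kJ/g}
  = 55.7\ \mathrm{MJ/kg}
```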
Rocket propellant
Refined liquid methane as well as LNG is used as a rocket fuel, when combined with liquid oxygen, as in the TQ-12, BE-4, Raptor, and YF-215 engines. Due to the similarities between methane and LNG such engines are commonly grouped together under the term methalox.
As a liquid rocket propellant, a methane/liquid oxygen combination offers the advantage over kerosene/liquid oxygen combination, or kerolox, of producing small exhaust molecules, reducing coking or deposition of soot on engine components. Methane is easier to store than hydrogen due to its higher boiling point and density, as well as its lack of hydrogen embrittlement. The lower molecular weight of the exhaust also increases the fraction of the heat energy which is in the form of kinetic energy available for propulsion, increasing the specific impulse of the rocket. Compared to liquid hydrogen, the specific energy of methane is lower but this disadvantage is offset by methane's greater density and temperature range, allowing for smaller and lighter tankage for a given fuel mass. Liquid methane has a temperature range (91–112 K) nearly compatible with liquid oxygen (54–90 K). The fuel currently sees use in operational launch vehicles such as Zhuque-2, Vulcan and New Glenn as well as in-development launchers such as Starship, Neutron, and Terran R.
Chemical feedstock
Natural gas, which is mostly composed of methane, is used to produce hydrogen gas on an industrial scale. Steam methane reforming (SMR), or simply known as steam reforming, is the standard industrial method of producing commercial bulk hydrogen gas. More than 50 million metric tons are produced annually worldwide (2013), principally from the SMR of natural gas. Much of this hydrogen is used in petroleum refineries, in the production of chemicals and in food processing. Very large quantities of hydrogen are used in the industrial synthesis of ammonia.
At high temperatures (700–1100 °C) and in the presence of a metal-based catalyst (nickel), steam reacts with methane to yield a mixture of CO and H2, known as "water gas" or "syngas":
CH4 + H2O → CO + 3 H2
This reaction is strongly endothermic (consumes heat, 206 kJ/mol).
Additional hydrogen is obtained by the reaction of CO with water via the water-gas shift reaction:
CO + H2O → CO2 + H2
This reaction is mildly exothermic (produces heat, −41 kJ/mol).
Methane is also subjected to free-radical chlorination in the production of chloromethanes, although methanol is a more typical precursor.
Hydrogen can also be produced via the direct decomposition of methane, also known as methane pyrolysis, which, unlike steam reforming, produces no greenhouse gases (GHG). The heat needed for the reaction can also be GHG emission free, e.g. from concentrated sunlight, renewable electricity, or burning some of the produced hydrogen. If the methane is from biogas then the process can be a carbon sink. Temperatures in excess of 1200 °C are required to break the bonds of methane to produce hydrogen gas and solid carbon.
However, through the use of a suitable catalyst the reaction temperature can be reduced to between 550 and 900 °C depending on the chosen catalyst. Dozens of catalysts have been tested, including unsupported and supported metal catalysts, carbonaceous and metal-carbon catalysts.
The reaction is moderately endothermic as shown in the reaction equation below.
CH4(g) → C(s) + 2 H2(g) (ΔH = 74.8 kJ/mol)
Refrigerant
As a refrigerant, methane has the ASHRAE designation R-50.
Generation
Methane can be generated through geological, biological or industrial routes.
Geological routes
The two main routes for geological methane generation are (i) organic (thermally generated, or thermogenic) and (ii) inorganic (abiotic). Thermogenic methane occurs due to the breakup of organic matter at elevated temperatures and pressures in deep sedimentary strata. Most methane in sedimentary basins is thermogenic; therefore, thermogenic methane is the most important source of natural gas. Thermogenic methane components are typically considered to be relic (from an earlier time). Generally, formation of thermogenic methane (at depth) can occur through organic matter breakup, or organic synthesis. Both ways can involve microorganisms (methanogenesis), but may also occur inorganically. The processes involved can also consume methane, with and without microorganisms.
The more important source of methane at depth (crystalline bedrock) is abiotic. Abiotic means that methane is created from inorganic compounds, without biological activity, either through magmatic processes or via water-rock reactions that occur at low temperatures and pressures, like serpentinization.
Biological routes
Most of Earth's methane is biogenic and is produced by methanogenesis, a form of anaerobic respiration only known to be conducted by some members of the domain Archaea. Methanogens occur in landfills and soils, ruminants (for example, cattle), the guts of termites, and the anoxic sediments below the seafloor and the bottom of lakes.
This multistep process is used by these microorganisms for energy. The net reaction of methanogenesis is:
CO2 + 4 H2 → CH4 + 2 H2O
The final step in the process is catalyzed by the enzyme methyl coenzyme M reductase (MCR).
Wetlands
Wetlands are the largest natural sources of methane to the atmosphere, accounting for approximately 20–30% of atmospheric methane. Climate change is increasing the amount of methane released from wetlands due to increased temperatures and altered rainfall patterns. This phenomenon is called wetland methane feedback.
Rice cultivation generates as much as 12% of total global methane emissions due to the long-term flooding of rice fields.
Ruminants
Ruminants, such as cattle, belch methane, accounting for about 22% of the U.S. annual methane emissions to the atmosphere. One study reported that the livestock sector in general (primarily cattle, chickens, and pigs) produces 37% of all human-induced methane. A 2013 study estimated that livestock accounted for 44% of human-induced methane and about 15% of human-induced greenhouse gas emissions. Many efforts are underway to reduce livestock methane production, such as medical treatments and dietary adjustments, and to trap the gas to use its combustion energy.
Seafloor sediments
Most of the subseafloor is anoxic because oxygen is removed by aerobic microorganisms within the first few centimeters of the sediment. Below the oxygen-replete seafloor, methanogens produce methane that is either used by other organisms or becomes trapped in gas hydrates. These other organisms that utilize methane for energy are known as methanotrophs ('methane-eating'), and are the main reason why little methane generated at depth reaches the sea surface. Consortia of Archaea and Bacteria have been found to oxidize methane via anaerobic oxidation of methane (AOM); the organisms responsible for this are anaerobic methanotrophic Archaea (ANME) and sulfate-reducing bacteria (SRB).
Industrial routes
Given its cheap abundance in natural gas, there is little incentive to produce methane industrially. Methane can be produced by hydrogenating carbon dioxide through the Sabatier process. Methane is also a side product of the hydrogenation of carbon monoxide in the Fischer–Tropsch process, which is practiced on a large scale to produce longer-chain molecules than methane.
An example of large-scale coal-to-methane gasification is the Great Plains Synfuels plant, started in 1984 in Beulah, North Dakota as a way to develop abundant local resources of low-grade lignite, a resource that is otherwise difficult to transport for its weight, ash content, low calorific value and propensity to spontaneous combustion during storage and transport. A number of similar plants exist around the world, although mostly these plants are targeted towards the production of long chain alkanes for use as gasoline, diesel, or feedstock to other processes.
Power to methane is a technology that uses electrical power to produce hydrogen from water by electrolysis and uses the Sabatier reaction to combine hydrogen with carbon dioxide to produce methane.
Laboratory synthesis
Methane can be produced by protonation of methyl lithium or a methyl Grignard reagent such as methylmagnesium chloride. It can also be made from anhydrous sodium acetate and dry sodium hydroxide, mixed and heated above 300 °C (with sodium carbonate as byproduct). In practice, a requirement for pure methane can easily be met by a steel gas bottle from standard gas suppliers.
Occurrence
Methane is the major component of natural gas, about 87% by volume. The major source of methane is extraction from geological deposits known as natural gas fields, with coal seam gas extraction becoming a major source (see coal bed methane extraction, a method for extracting methane from a coal deposit, while enhanced coal bed methane recovery is a method of recovering methane from non-mineable coal seams). It is associated with other hydrocarbon fuels, and sometimes accompanied by helium and nitrogen. Methane is produced at shallow levels (low pressure) by anaerobic decay of organic matter and reworked methane from deep under the Earth's surface. In general, the sediments that generate natural gas are buried deeper and at higher temperatures than those that contain oil.
Methane is generally transported in bulk by pipeline in its natural gas form, or by LNG carriers in its liquefied form; few countries transport it by truck.
Atmospheric methane and climate change
Methane is an important greenhouse gas, responsible for around 30% of the rise in global temperatures since the industrial revolution.
Methane has a global warming potential (GWP) of 29.8 ± 11 compared to CO2 (potential of 1) over a 100-year period, and 82.5 ± 25.8 over a 20-year period. This means that, for example, a leak of one tonne of methane is equivalent to emitting 82.5 tonnes of carbon dioxide over a 20-year period. Burning methane and producing carbon dioxide also reduces the greenhouse gas impact compared to simply venting methane to the atmosphere.
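Expressed as code, converting a methane release to its carbon dioxide equivalent is a single multiplication by the GWP for the chosen time horizon. The following minimal sketch uses only the GWP values quoted in this paragraph (uncertainty ranges omitted).

```python
# CO2-equivalent of a methane release, using the GWP values quoted above.
GWP_CH4_100YR = 29.8   # 100-year horizon
GWP_CH4_20YR = 82.5    # 20-year horizon

def co2_equivalent(methane_tonnes: float, gwp: float) -> float:
    """Mass of CO2 (tonnes) with the same warming effect as the given methane mass."""
    return methane_tonnes * gwp

print(co2_equivalent(1.0, GWP_CH4_20YR))    # 82.5 t CO2e over 20 years
print(co2_equivalent(1.0, GWP_CH4_100YR))   # 29.8 t CO2e over 100 years
```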
As methane is gradually converted into carbon dioxide (and water) in the atmosphere, these values include the climate forcing from the carbon dioxide produced from methane over these timescales.
Annual global methane emissions are currently approximately 580 Mt, 40% of which is from natural sources and the remaining 60% originating from human activity, known as anthropogenic emissions. The largest anthropogenic source is agriculture, responsible for around one quarter of emissions, closely followed by the energy sector, which includes emissions from coal, oil, natural gas and biofuels.
Historic methane concentrations in the world's atmosphere have ranged between 300 and 400 nmol/mol during glacial periods commonly known as ice ages, and between 600 and 700 nmol/mol during the warm interglacial periods. A 2012 NASA website said the oceans were a potentially important source of Arctic methane, but more recent studies attribute increasing methane levels to human activity.
Global monitoring of atmospheric methane concentrations began in the 1980s. The Earth's atmospheric methane concentration has increased 160% since preindustrial levels in the mid-18th century. In 2013, atmospheric methane accounted for 20% of the total radiative forcing from all of the long-lived and globally mixed greenhouse gases. Between 2011 and 2019 the annual average concentration of methane in the atmosphere continued to rise, reaching 1866 ppb in 2019. From 2015 to 2019 sharp rises in levels of atmospheric methane were recorded.
In 2019, the atmospheric methane concentration was higher than at any time in the last 800,000 years. As stated in the AR6 of the IPCC, "Since 1750, increases in CO2 (47%) and CH4 (156%) concentrations far exceed, and increases in N2O (23%) are similar to, the natural multi-millennial changes between glacial and interglacial periods over at least the past 800,000 years (very high confidence)".
In February 2020, it was reported that fugitive emissions and gas venting from the fossil fuel industry may have been significantly underestimated.
The largest annual increase occurred in 2021 with the overwhelming percentage caused by human activity.
Climate change can increase atmospheric methane levels by increasing methane production in natural ecosystems, forming a climate change feedback. Another explanation for the rise in methane emissions could be a slowdown of the chemical reaction that removes methane from the atmosphere.
Over 100 countries have signed the Global Methane Pledge, launched in 2021, promising to cut their methane emissions by 30% by 2030. This could avoid 0.2˚C of warming globally by 2050, although there have been calls for higher commitments in order to reach this target. The International Energy Agency's 2022 report states "the most cost-effective opportunities for methane abatement are in the energy sector, especially in oil and gas operations".
Clathrates
Methane clathrates (also known as methane hydrates) are solid cages of water molecules that trap single molecules of methane. Significant reservoirs of methane clathrates have been found in arctic permafrost and along continental margins beneath the ocean floor within the gas clathrate stability zone, located at high pressures (1 to 100 MPa; lower end requires lower temperature) and low temperatures (< 15 °C; upper end requires higher pressure). Methane clathrates can form from biogenic methane, thermogenic methane, or a mix of the two. These deposits are both a potential source of methane fuel as well as a potential contributor to global warming. The global mass of carbon stored in gas clathrates is still uncertain and has been estimated as high as 12,500 Gt carbon and as low as 500 Gt carbon. The estimate has declined over time with a most recent estimate of ≈1800 Gt carbon. A large part of this uncertainty is due to our knowledge gap in sources and sinks of methane and the distribution of methane clathrates at the global scale. For example, a source of methane was discovered relatively recently in an ultraslow spreading ridge in the Arctic. Some climate models suggest that today's methane emission regime from the ocean floor is potentially similar to that during the period of the Paleocene–Eocene Thermal Maximum (PETM) around 55.5 million years ago, although there are no data indicating that methane from clathrate dissociation currently reaches the atmosphere. Arctic methane release from permafrost and seafloor methane clathrates is a potential consequence and further cause of global warming; this is known as the clathrate gun hypothesis. Data from 2016 indicate that Arctic permafrost thaws faster than predicted.
Public safety and the environment
Methane "degrades air quality and adversely impacts human health, agricultural yields, and ecosystem productivity".
Methane is extremely flammable and may form explosive mixtures with air. Methane gas explosions are responsible for many deadly mining disasters. A methane gas explosion was the cause of the Upper Big Branch coal mine disaster in West Virginia on April 5, 2010, killing 29. Natural gas accidental release has also been a major focus in the field of safety engineering, due to past accidental releases that concluded in the formation of jet fire disasters.
The 2015–2016 methane gas leak in Aliso Canyon, California was considered to be the worst in terms of its environmental effect in American history. It was also described as more damaging to the environment than Deepwater Horizon's leak in the Gulf of Mexico.
In May 2023 The Guardian published a report identifying Turkmenistan as the world's worst country for methane super-emitting. The data collected by Kayrros researchers indicate that two large Turkmen fossil fuel fields leaked 2.6 million and 1.8 million metric tonnes of methane in 2022 alone, pumping the equivalent of 366 million tonnes of carbon dioxide into the atmosphere, surpassing the annual emissions of the United Kingdom.
Methane is also an asphyxiant if the oxygen concentration is reduced to below about 16% by displacement, as most people can tolerate a reduction from 21% to 16% without ill effects. The concentration of methane at which asphyxiation risk becomes significant is much higher than the 5–15% concentration in a flammable or explosive mixture. Methane off-gas can penetrate the interiors of buildings near landfills and expose occupants to significant levels of methane. Some buildings have specially engineered recovery systems below their basements to actively capture this gas and vent it away from the building.
Extraterrestrial methane
Interstellar medium
Methane is abundant in many parts of the Solar System and potentially could be harvested on the surface of another Solar System body (in particular, using methane production from local materials found on Mars or Titan), providing fuel for a return journey.
Mars
Methane has been detected on all planets of the Solar System and most of the larger moons. With the possible exception of Mars, it is believed to have come from abiotic processes.
The Curiosity rover has documented seasonal fluctuations of atmospheric methane levels on Mars. These fluctuations peaked at the end of the Martian summer at 0.6 parts per billion.
Methane has been proposed as a possible rocket propellant on future Mars missions due in part to the possibility of synthesizing it on the planet by in situ resource utilization. An adaptation of the Sabatier methanation reaction may be used with a mixed catalyst bed and a reverse water-gas shift in a single reactor to produce methane and oxygen from the raw materials available on Mars, utilizing water from the Martian subsoil and carbon dioxide in the Martian atmosphere.
Methane could be produced by a non-biological process called serpentinization involving water, carbon dioxide, and the mineral olivine, which is known to be common on Mars.
Titan
Methane has been detected in vast abundance on Titan, the largest moon of Saturn. It comprises a significant portion of its atmosphere and also exists in a liquid form on its surface, where it comprises the majority of the liquid in Titan's vast lakes of hydrocarbons, the second largest of which is believed to be almost pure methane in composition.
The presence of stable lakes of liquid methane on Titan, as well as the surface of Titan being highly chemically active and rich in organic compounds, has led scientists to consider the possibility of life existing within Titan's lakes, using methane as a solvent in the place of water for Earth-based life and using hydrogen in the atmosphere to derive energy with acetylene, in much the same way that Earth-based life uses glucose.
History
The discovery of methane is credited to Italian physicist Alessandro Volta, who characterized numerous properties including its flammability limit and origin from decaying organic matter.
Volta was initially motivated by reports from his friend Father Carlo Giuseppe Campi of inflammable air present in marshes. While on a fishing trip to Lake Maggiore, straddling Italy and Switzerland, in November 1776, he noticed the presence of bubbles in the nearby marshes and decided to investigate. Volta collected the gas rising from the marsh and demonstrated that the gas was inflammable.
Volta notes similar observations of inflammable air were present previously in scientific literature, including a letter written by Benjamin Franklin.
Following the Felling mine disaster of 1812 in which 92 men perished, Sir Humphry Davy established that the feared firedamp was in fact largely methane.
The name "methane" was coined in 1866 by the German chemist August Wilhelm von Hofmann. The name was derived from methanol.
Etymology
Etymologically, the word methane is coined from the chemical suffix "-ane", which denotes substances belonging to the alkane family, and the word methyl, which is derived from the German Methyl (1840) or directly from the French méthyle, a back-formation from the French méthylène (corresponding to English "methylene"), the root of which was coined by Jean-Baptiste Dumas and Eugène Péligot in 1834 from the Greek methy (wine, related to English "mead") and hyle (meaning "wood"). The radical is named after this because it was first detected in methanol, an alcohol first isolated by distillation of wood. The chemical suffix -ane is from the coordinating chemical suffix -ine, which is from the Latin feminine suffix -ina, which is applied to represent abstracts. The coordination of "-ane", "-ene", "-one", etc. was proposed in 1866 by German chemist August Wilhelm von Hofmann.
Abbreviations
The abbreviation CH4-C can mean the mass of carbon contained in a mass of methane, and the mass of methane is always 1.33 times the mass of CH4-C. CH4-C can also mean the methane-carbon ratio, which is 1.33 by mass. Methane at scales of the atmosphere is commonly measured in teragrams (Tg CH4) or millions of metric tons (MMT CH4), which mean the same thing. Other standard units are also used, such as nanomole (nmol, one billionth of a mole), mole (mol), kilogram, and gram.
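The factor of 1.33 is simply the ratio of the molar mass of methane to that of carbon:

```latex
\frac{M(\mathrm{CH_4})}{M(\mathrm{C})} = \frac{16.04\ \mathrm{g/mol}}{12.01\ \mathrm{g/mol}} \approx 1.33
```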
See also
Explanatory notes
Citations
Cited sources
External links
Methane at The Periodic Table of Videos (University of Nottingham)
International Chemical Safety Card 0291
Gas (Methane) Hydrates – A New Frontier – United States Geological Survey (archived 6 February 2004)
CDC – Handbook for Methane Control in Mining (PDF)
Anaerobic digestion
Fuel gas
Fuels
Gaseous signaling molecules
Greenhouse gases
Industrial gases
Organic compounds with 1 carbon atom | Methane | [
"Chemistry",
"Engineering",
"Environmental_science"
] | 6,143 | [
"Methane",
"Chemical energy sources",
"Environmental chemistry",
"Signal transduction",
"Organic compounds",
"Gaseous signaling molecules",
"Industrial gases",
"Anaerobic digestion",
"Fuels",
"Environmental engineering",
"Water technology",
"Greenhouse gases",
"Chemical process engineering",... |
18,583,225 | https://en.wikipedia.org/wiki/Buffer%20analysis | In geographic information systems (GIS) and spatial analysis, buffer analysis is the determination of a zone around a geographic feature containing locations that are within a specified distance of that feature, the buffer zone (or just buffer). A buffer is likely the most commonly used tool within the proximity analysis methods.
History
The buffer operation has been a core part of GIS functionality since the original integrated GIS software packages of the late 1970s and early 1980s, such as ARC/INFO, Odyssey, and MOSS. Although it has been one of the most widely used GIS operations in subsequent years, in a wide variety of applications, there has been little published research on the tool itself, except for the occasional development of a more efficient algorithm.
Basic algorithm
The fundamental method to create a buffer around a geographic feature stored in a vector data model, with a given radius r is as follows:
Single point: Create a circle around the point with radius r.
Polyline, which consists of an ordered list of points (vertices) connected by straight lines. This is also used for the boundary of a polygon.
Create a circle buffer around each vertex
Create a rectangle along each line segment by creating a duplicate line segment offset the distance r perpendicular to each side.
Merge or dissolve the rectangles and circles into a single polygon.
Software implementations of the buffer operation typically use alterations of this strategy to process more efficiently and accurately.
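As a concrete illustration of the steps above, the following Python sketch builds a polyline buffer from vertex circles and segment rectangles and then dissolves them into a single polygon. It uses the shapely library (whose geometries also provide a built-in buffer method) and arbitrary example coordinates; it demonstrates the naive algorithm rather than how production GIS software implements buffering.

```python
# Naive planar buffer: circles at vertices, offset rectangles along segments, union.
import math
from shapely.geometry import Point, Polygon
from shapely.ops import unary_union

def naive_line_buffer(vertices, r):
    pieces = [Point(x, y).buffer(r) for x, y in vertices]      # circle at each vertex
    for (x1, y1), (x2, y2) in zip(vertices, vertices[1:]):
        dx, dy = x2 - x1, y2 - y1
        length = math.hypot(dx, dy)
        if length == 0:
            continue
        ox, oy = -dy / length * r, dx / length * r              # perpendicular offset of length r
        pieces.append(Polygon([(x1 + ox, y1 + oy), (x2 + ox, y2 + oy),
                               (x2 - ox, y2 - oy), (x1 - ox, y1 - oy)]))
    return unary_union(pieces)                                  # merge ("dissolve") into one polygon

buffer_zone = naive_line_buffer([(0, 0), (10, 0), (10, 5)], r=2.0)
print(round(buffer_zone.area, 2))   # close to the area of shapely's built-in line buffer
```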
In mathematics, the GIS buffer operation is a Minkowski sum (or difference) of a geometry and a disk; the operation is also known as offsetting a polygon.
Planar vs. geodesic distance
Traditional implementations assumed the buffer was being created on a planar cartesian coordinate space (i.e., created by a map projection) using Euclidean geometry, because the mathematics and computation involved is relatively simple, which was important given the computing power available in the late 1970s. Due to the inherent distortions caused by map projections, the buffer computed this way will not be identical to one drawn on the surface of the Earth; at a local scale, the difference is negligible, but at larger scales, the error can be significant.
Some current software, such as Esri ArcGIS Pro, offer the option to compute buffers using geodesic distance, using a similar algorithm but calculated using spherical trigonometry, including representing the lines between vertices as great circles. Other implementations use a workaround by first reprojecting the feature to a projection that minimizes distortion in that location, then computing the planar buffer.
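A minimal sketch of that reprojection workaround, assuming the pyproj and shapely libraries: the point is projected into a local azimuthal equidistant coordinate system centred on it (which preserves distances from the centre), buffered in metres on the plane, and projected back to longitude/latitude. The coordinates and radius are arbitrary examples, and dedicated geodesic-buffer implementations work differently.

```python
from pyproj import Transformer
from shapely.geometry import Point
from shapely.ops import transform

def approx_geodesic_buffer(lon, lat, radius_m):
    # Local azimuthal equidistant CRS centred on the feature.
    aeqd = f"+proj=aeqd +lat_0={lat} +lon_0={lon} +datum=WGS84 +units=m"
    to_wgs84 = Transformer.from_crs(aeqd, "EPSG:4326", always_xy=True).transform
    # The feature sits at the origin of the local CRS, so buffer there in metres.
    local_buffer = Point(0, 0).buffer(radius_m)
    return transform(to_wgs84, local_buffer)   # back to lon/lat coordinates

zone = approx_geodesic_buffer(-118.25, 34.05, 5000)   # ~5 km around a point near Los Angeles
print(zone.bounds)
```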
Options
GIS software may offer variations on the basic algorithm, which may be useful in different applications:
Endcaps at the end of linear buffers are rounded by default, but may be squared off or a butt end (truncated at the final vertex).
Side preference may be important, such as needing the buffer on only one side of a line, or on a polygon, selecting only the outer buffer or the inner buffer (sometimes called a setback).
Variable width, in which the features in a layer may be buffered using different radii, usually given by an attribute.
Common buffers, in which the buffers for each feature in a layer are dissolved into a single polygon. This is most commonly used when one is not concerned about which feature is near each point in space, only that a point is nearby some (anonymous) feature.
See also
Dilation (morphology) (positive buffer)
Erosion (morphology) (negative buffer)
External links
OGC ST_Buffer function (PostGIS implementation)
buffer function in turfjs
BufferOp in JTS, the library at the foundation of many open-source GIS implementations
v.buffer command in GRASS
Buffer (Analysis) tool in Esri ArcGIS Pro
References
Geographic information systems
Geometry
Spatial analysis
"Physics",
"Mathematics",
"Technology"
] | 776 | [
"Spatial analysis",
"Information systems",
"Space",
"Geometry",
"Spacetime",
"Geographic information systems"
] |
18,584,624 | https://en.wikipedia.org/wiki/Redundant%20binary%20representation | A redundant binary representation (RBR) is a numeral system that uses more bits than needed to represent a single binary digit so that most numbers have several representations. An RBR is unlike usual binary numeral systems, including two's complement, which use a single bit for each digit. Many of an RBR's properties differ from those of regular binary representation systems. Most importantly, an RBR allows addition without using a typical carry. When compared to non-redundant representation, an RBR makes bitwise logical operation slower, but arithmetic operations are faster when a greater bit width is used. Usually, each digit has its own sign that is not necessarily the same as the sign of the number represented. When digits have signs, that RBR is also a signed-digit representation.
Conversion from RBR
An RBR is a place-value notation system. In an RBR, each digit is a pair of bits; that is, every place uses two bits. The value represented by a redundant digit can be found using a translation table, which indicates the mathematical value of each possible pair of bits.
As in conventional binary representation, the integer value of a given representation is a weighted sum of the values of the digits. The weight starts at 1 for the rightmost position and goes up by a factor of 2 for each next position. Usually, an RBR allows negative values. There is no single sign bit that tells if a redundantly represented number is positive or negative. Most integers have several possible representations in an RBR.
Often one of the several possible representations of an integer is chosen as the "canonical" form, so each integer has only one possible "canonical" representation; non-adjacent form and two's complement are popular choices for that canonical form.
An integer value can be converted back from an RBR using the following formula, where n is the number of digits and dk is the interpreted value of the k-th digit, with k starting at 0 at the rightmost position:
$v = \sum_{k=0}^{n-1} d_k \, 2^k$
The conversion from an RBR to n-bit two's complement can be done in O(log(n)) time using a prefix adder.
Example of redundant binary representation
Not all redundant representations have the same properties. For example, using a translation table in which the bit pairs 00, 01, 10 and 11 represent the digit values −1, 0, 0 and +1 respectively, the number 1 can be represented in this RBR in many ways: "01·01·01·11" (0+0+0+1), "01·01·10·11" (0+0+0+1), "01·01·11·00" (0+0+2−1), or "11·00·00·00" (8−4−2−1). Also, for this translation table, flipping all bits (NOT gate) corresponds to finding the additive inverse (multiplication by −1) of the integer represented.
In this case, flipping every bit of a representation of a value v yields a representation of −v.
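A short Python sketch of this decoding, assuming the digit values implied by the examples above (00 → −1, 01 → 0, 10 → 0, 11 → +1); the digit strings follow the "·"-separated notation used in this section.

```python
# Decode an RBR string such as "11·00·00·00" using the example translation table.
DIGIT_VALUE = {"00": -1, "01": 0, "10": 0, "11": 1}

def rbr_to_int(digits: str) -> int:
    pairs = digits.split("·")
    n = len(pairs)
    return sum(DIGIT_VALUE[p] * 2 ** (n - 1 - k) for k, p in enumerate(pairs))

for s in ["01·01·01·11", "01·01·10·11", "01·01·11·00", "11·00·00·00"]:
    print(s, "=", rbr_to_int(s))   # each of these prints 1

flipped = "00·11·11·11"            # bitwise NOT of "11·00·00·00"
print(rbr_to_int(flipped))         # -1, the additive inverse
```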
Arithmetic operations
Redundant representations are commonly used inside high-speed arithmetic logic units.
In particular, a carry-save adder uses a redundant representation.
Addition
The addition operation in all RBRs is carry-free, which means that the carry does not have to propagate through the full width of the addition unit. In effect, the addition in all RBRs is a constant-time operation. The addition will always take the same amount of time independently of the bit-width of the operands. This does not imply that the addition is always faster in an RBR than its two's complement equivalent, but that the addition will eventually be faster in an RBR with increasing bit width because the two's complement addition unit's delay is proportional to log(n) (where n is the bit width). Addition in an RBR takes a constant time because each digit of the result can be calculated independently of one another, implying that each digit of the result can be calculated in parallel.
Subtraction
Subtraction is the same as the addition except that the additive inverse of the second operand needs to be computed first. For common representations, this can be done on a digit-by-digit basis.
Multiplication
Many hardware multipliers internally use Booth encoding, a redundant binary representation.
Logical operations
Bitwise logical operations, such as AND, OR and XOR, are not possible in redundant representations. While it is possible to do bitwise operations directly on the underlying bits inside an RBR, it is not clear that this is a meaningful operation; there are many ways to represent a value in an RBR, and the value of the result would depend on the representation used.
To get the expected results, it is necessary to convert the two operands first to non-redundant representations. Consequently, logical operations are slower in an RBR. More precisely, they take a time proportional to log(n) (where n is the number of digits), compared to constant time in two's complement.
It is, however, possible to partially convert only the least-significant portion of a redundantly represented number to non-redundant form. This allows operations such as masking off the low k bits to be done in log(k) time.
References
Binary arithmetic
Non-standard positional numeral systems | Redundant binary representation | [
"Mathematics"
] | 1,062 | [
"Arithmetic",
"Binary arithmetic"
] |
18,589,212 | https://en.wikipedia.org/wiki/Reverse%20osmosis | Reverse osmosis (RO) is a water purification process that uses a semi-permeable membrane to separate water molecules from other substances. RO applies pressure to overcome osmotic pressure that favors even distributions. RO can remove dissolved or suspended chemical species as well as biological substances (principally bacteria), and is used in industrial processes and the production of potable water. RO retains the solute on the pressurized side of the membrane and the purified solvent passes to the other side. The relative sizes of the various molecules determines what passes through. "Selective" membranes reject large molecules, while accepting smaller molecules (such as solvent molecules, e.g., water).
RO is most commonly known for its use in drinking water purification from seawater, removing the salt and other effluent materials from the water molecules.
As of 2013 the world's largest RO desalination plant was in Sorek, Israel, outputting 624,000 cubic metres per day.
History
A process of osmosis through semi-permeable membranes was first observed in 1748 by Jean-Antoine Nollet. For the following 200 years, osmosis was only a laboratory phenomenon. In 1950, the University of California at Los Angeles (UCLA) first investigated osmotic desalination. Researchers at both UCLA and the University of Florida desalinated seawater in the mid-1950s, but the flux was too low to be commercially viable. Sidney Loeb at UCLA and Srinivasa Sourirajan at the National Research Council of Canada, Ottawa, found techniques for making asymmetric membranes characterized by an effectively thin "skin" layer supported atop a highly porous and much thicker substrate region. John Cadotte, of Filmtec corporation, discovered that membranes with particularly high flux and low salt passage could be made by interfacial polymerization of m-phenylene diamine and trimesoyl chloride. Cadotte's patent on this process was the subject of litigation and has since expired. Almost all commercial RO membrane is now made by this method. By 2019, approximately 16,000 desalination plants operated around the world; around half of this capacity was in the Middle East and North Africa region.
In 1977 Cape Coral, Florida became the first US municipality to use RO at scale, with an initial operating capacity of 11.35 million liters (3 million US gal) per day. By 1985, rapid growth led the city to operate the world's largest low-pressure RO plant, producing 56.8 million liters (15 million US gal) per day (MGD).
Osmosis
In (forward) osmosis, the solvent moves from an area of low solute concentration (high water potential), through a membrane, to an area of high solute concentration (low water potential). The driving force for the movement of the solvent is the reduction in the Gibbs free energy of the system in which the difference in solvent concentration between the sides of a membrane is reduced. This is called osmotic pressure. It reduces as the solvent moves into the more concentrated solution. Applying an external pressure to reverse the natural flow of pure solvent, thus, is reverse osmosis. The process is similar to other membrane technology applications.
RO differs from filtration in that the mechanism of fluid flow is reversed, as the solvent crosses membrane, leaving the solute behind. The predominant removal mechanism in membrane filtration is straining, or size exclusion, where the pores are 0.01 micrometers or larger, so the process can theoretically achieve perfect efficiency regardless of parameters such as the solution's pressure and concentration. RO instead involves solvent diffusion across a membrane that is either nonporous or uses nanofiltration with pores 0.001 micrometers in size. The predominant removal mechanism is from differences in solubility or diffusivity, and the process is dependent on pressure, solute concentration, and other conditions.
RO requires pressure between 2–17 bar (30–250 psi) for fresh and brackish water, and 40–82 bar (600–1200 psi) for seawater. Seawater has around 27 bar (390 psi) natural osmotic pressure that must be overcome. As for their energy consumption, seawater RO systems typically require 2.9–5.5 kWh/m3, although state-of-the-art systems are around 2.3 kWh/m3.
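The roughly 27 bar figure for seawater can be sanity-checked with the van 't Hoff relation π = iMRT. The minimal sketch below assumes about 35 g/L of dissolved NaCl, complete dissociation (i = 2) and ideal-solution behaviour; real seawater contains many solutes, so this is only an order-of-magnitude estimate.

```python
# Rough osmotic-pressure estimate for seawater via van 't Hoff: pi = i * M * R * T.
R = 0.083145                    # gas constant, L·bar/(mol·K)
T = 298.15                      # temperature, K (25 °C)
molarity_nacl = 35.0 / 58.44    # mol/L, assuming ~35 g/L NaCl and M(NaCl) = 58.44 g/mol
i = 2                           # van 't Hoff factor for fully dissociated NaCl

pi_bar = i * molarity_nacl * R * T
print(f"~{pi_bar:.0f} bar")     # ≈ 30 bar, consistent with the ~27 bar quoted above
```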
Membrane pore sizes vary from 0.1 to 5,000 nm. Particle filtration removes particles of 1 μm or larger. Microfiltration removes particles of 50 nm or larger. Ultrafiltration removes particles of roughly 3 nm or larger. Nanofiltration removes particles of 1 nm or larger. RO is in the final category of membrane filtration, hyperfiltration, and removes particles larger than ~0.2 nm.
Fresh water applications
Drinking water purification
Around the world, household drinking water purification systems, including an RO step, are commonly used for improving water for drinking and cooking.
Such systems typically include these steps:
a sediment filter to trap particles, including rust and calcium carbonate
a second sediment filter with smaller pores
an activated carbon filter to trap organic chemicals and chlorine, which degrades certain types of thin-film composite membrane
an RO thin-film composite membrane
an ultraviolet lamp for sterilizing any microbes that survive RO
a second carbon filter to capture chemicals that survive RO
In some systems, the carbon prefilter is replaced by a cellulose triacetate (CTA) membrane. CTA is a paper by-product membrane bonded to a synthetic layer that allows contact with chlorine in the water. These require a small amount of chlorine in the water source to prevent bacteria from forming on it. The typical rejection rate for CTA membranes is 85–95%.
The cellulose triacetate membrane rots unless protected by chlorinated water, while the thin-film composite membrane breaks down in the presence of chlorine. The thin-film composite (TFC) membrane is made of synthetic material, and requires the chlorine to be removed before the water enters the membrane. To protect the TFC membrane elements from chlorine damage, carbon filters are used as pre-treatment. TFC membranes have a higher rejection rate of 95–98% and a longer life than CTA membranes.
To work effectively, the water feeding to these units should be under pressure (typically 280 kPa (40 psi) or greater).
Though portable RO water purifiers are commercially available and extensively used in areas lacking clean potable water, in Europe such processing of natural mineral water (as defined by a European directive) is not allowed. In practice, a fraction of the living bacteria pass through RO through membrane imperfections or bypass the membrane entirely through leaks in seals.
Solar-powered RO
A solar-powered desalination unit produces potable water from saline water by using a photovoltaic system to supply the energy. Solar power works well for water purification in settings lacking grid electricity and can reduce operating costs and greenhouse emissions. For example, a solar-powered desalination unit passed tests in Australia's Northern Territory.
Sunlight's intermittent nature makes output prediction difficult without an energy storage capability. However, batteries or thermal energy storage systems can provide power when sunlight is unavailable.
Military
Larger scale reverse osmosis water purification units (ROWPU) exist for military use. These have been adopted by the United States armed forces and the Canadian Forces. Some models are containerized, some are trailers, and some are themselves vehicles.
The water is treated with a polymer to initiate coagulation. Next, it is run through a multi-media filter where it undergoes primary treatment, removing turbidity. It is then pumped through a cartridge filter which is usually spiral-wound cotton. This process strips any particles larger than 5 μm and eliminates almost all turbidity.
The clarified water is then fed through a high-pressure piston pump into a series of RO vessels. 90.00–99.98% of the raw water's total dissolved solids are removed and military standards require that the result have no more than 1000–1500 parts per million by measure of electrical conductivity. It is then disinfected with chlorine.
Water and wastewater purification
RO-purified rainwater collected from storm drains is used for landscape irrigation and industrial cooling in Los Angeles and other cities.
In industry, RO removes minerals from boiler water at power plants. The water is distilled multiple times to ensure that it does not leave deposits on the machinery or cause corrosion.
RO is used to clean effluent and brackish groundwater. Effluent in larger volumes (more than 500 m3/day) is treated in a water treatment plant first and then run through RO. This hybrid process reduces treatment costs significantly and lengthens membrane life.
RO can be used for the production of deionized water.
In 2002, Singapore announced that a process named NEWater would be a significant part of its water plans. RO would be used to treat wastewater before discharging the effluent into reservoirs.
Food industry
Reverse osmosis is a more economical way to concentrate liquids (such as fruit juices) than conventional heat-treatment. Concentration of orange and tomato juice has advantages including a lower operating cost and the ability to avoid heat-treatment, which makes it suitable for heat-sensitive substances such as protein and enzymes.
RO is used in the dairy industry to produce whey protein powders and concentrate milk. The whey (liquid remaining after cheese manufacture) is concentrated with RO from 6% solids to 10–20% solids before ultrafiltration processing. The retentate can then be used to make whey powders, including whey protein isolate. Additionally, the permeate, which contains lactose, is concentrated by RO from 5% solids to 18–22% total solids to reduce crystallization and drying costs.
Although RO was once avoided in the wine industry, it is now widespread. An estimated 60 RO machines were in use in Bordeaux, France, in 2002. Known users include many elite firms, such as Château Léoville-Las Cases.
Maple syrup production
In 1946, some maple syrup producers started using RO to remove water from sap before boiling the sap to syrup. RO allows about 75–90% of the water to be removed, reducing energy consumption and exposure of the syrup to high temperatures.
Low-alcohol beer
When beer at typical concentration is subjected to reverse osmosis, both water and alcohol pass across the membrane more readily than other components, leaving a "beer concentrate". The concentrate is then diluted with fresh water to restore the non-volatile components to their original intensity.
Hydrogen production
For small-scale hydrogen production, RO is sometimes used to prevent formation of mineral deposits on the surface of electrodes.
Aquariums
Many reef aquarium keepers use RO systems to make fish-friendly seawater. Ordinary tap water can contain excessive chlorine, chloramines, copper, nitrates, nitrites, phosphates, silicates, or other chemicals detrimental to marine organisms. Contaminants such as nitrogen compounds and phosphates can lead to unwanted algae growth. An effective combination of RO and deionization is popular among reef aquarium keepers, and is preferred over other water purification processes due to its low ownership and operating costs. Where chlorine and chloramines are found in the water, carbon filtration is needed before RO, as common residential membranes do not address these compounds.
Freshwater aquarists also use RO to duplicate the soft water found in many tropical habitats. While many tropical fish can survive in treated tap water, breeding can be impossible. Many aquatic shops sell containers of RO water for this purpose.
Window cleaning
An increasingly popular method of cleaning windows is the "water-fed pole" system. Instead of washing windows with conventional detergent, they are scrubbed with purified water, typically containing less than 10 ppm dissolved solids, using a brush on the end of a pole wielded from ground level. RO is commonly used to purify the water.
Landfill leachate purification
Treatment of leachate with RO is limited by membrane fouling and by low recoveries at high concentration (measured as electrical conductivity). RO applicability is limited by conductivity, organics, and scaling inorganic elements such as CaSO4, Si, Fe and Ba. For leachate with low organic scaling potential, spiral-wound membrane modules can be used; for high organic scaling, high conductivity and higher pressures (up to 90 bar), disc tube modules with RO membranes are used. Disc tube modules were redesigned for landfill leachate purification, which is usually contaminated with organic material. Due to the cross-flow design, a flow booster pump recirculates the flow over the membrane between 1.5 and 3 times before it is released as concentrate. The high velocity protects against membrane scaling and allows membrane cleaning.
Power consumption for a disc tube module system
Desalination
Areas that have limited surface water or groundwater may choose to desalinate. RO is an increasingly common method, because of its relatively low energy consumption.
Energy consumption is around 3 kWh/m3, with the development of more efficient energy recovery devices and improved membrane materials. According to the International Desalination Association, for 2011, RO was used in 66% of installed desalination capacity (0.0445 of 0.0674 km3/day), and nearly all new plants. Other plants use thermal distillation methods: multiple-effect distillation and multi-stage flash.
Sea-water RO (SWRO) desalination requires around 3 kWh/m3, much higher than those required for other forms of water supply, including RO treatment of wastewater, at 0.1 to 1 kWh/m3. Up to 50% of the seawater input can be recovered as fresh water, though lower recovery rates may reduce membrane fouling and energy consumption.
Brackish water reverse osmosis (BWRO) is the desalination of water with less salt than seawater, usually from river estuaries or saline wells. The process is substantially the same as SWRO, but requires lower pressures and less energy. Up to 80% of the feed water input can be recovered as fresh water, depending on feed salinity.
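To give a feel for these figures, the following back-of-the-envelope sketch (not from the source; it assumes roughly 3 kWh per cubic metre of product and 45% recovery, consistent with the ranges quoted above) estimates the seawater intake, brine discharge and daily energy use for a small SWRO plant.

```python
# Back-of-the-envelope sizing for a small SWRO plant, using the indicative
# figures quoted above. Numbers are illustrative, not a design calculation.

product_m3_per_day = 1_000          # desired fresh-water output
specific_energy_kwh_per_m3 = 3.0    # energy per m3 of *product* water
recovery = 0.45                     # fraction of the feed recovered as product

feed_m3_per_day = product_m3_per_day / recovery
brine_m3_per_day = feed_m3_per_day - product_m3_per_day
energy_kwh_per_day = product_m3_per_day * specific_energy_kwh_per_m3

print(f"seawater intake: {feed_m3_per_day:.0f} m3/day")      # ~2222 m3/day
print(f"brine discharge: {brine_m3_per_day:.0f} m3/day")     # ~1222 m3/day
print(f"energy use:      {energy_kwh_per_day:.0f} kWh/day")  # 3000 kWh/day
```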
The Ashkelon desalination plant in Israel is the world's largest.
The typical single-pass SWRO system consists of:
Intake
Pretreatment
High-pressure pump (if not combined with energy recovery)
Membrane assembly
Energy recovery (if used)
Remineralisation and pH adjustment
Disinfection
Alarm/control panel
Pretreatment
Pretreatment is important when working with RO and nanofiltration membranes due to their spiral-wound design. The material is engineered to allow one-way flow. The design does not allow for backpulsing with water or air agitation to scour its surface and remove accumulated solids. Since material cannot be removed from the membrane surface, it is susceptible to fouling (loss of production capacity). Therefore, pretreatment is a necessity for any RO or nanofiltration system. Pretreatment has the following major components:
Screening solids: Solids must be removed and the water treated to prevent membrane fouling by particle or biological growth, and reduce the risk of damage to high-pressure components.
Cartridge filtration: String-wound polypropylene filters are typically used to remove particles of 1–5 μm diameter.
Dosing: Oxidizing biocides, such as chlorine, are added to kill bacteria, followed by bisulfite dosing to deactivate the chlorine, which can destroy a thin-film composite membrane. Biofouling inhibitors do not kill bacteria but prevent them from forming slime on the membrane surface and plant walls.
Prefiltration pH adjustment: If the pH, hardness and the alkalinity in the feedwater result in scaling while concentrated in the reject stream, acid is dosed to maintain carbonates in their soluble carbonic acid form.
CO₃²⁻ + H₃O⁺ = HCO₃⁻ + H₂O
HCO₃⁻ + H₃O⁺ = H₂CO₃ + H₂O
Carbonic acid cannot combine with calcium to form calcium carbonate scale. Calcium carbonate scaling tendency is estimated using the Langelier saturation index (a rough calculation is sketched after this list). Adding too much sulfuric acid to control carbonate scales may result in calcium sulfate, barium sulfate, or strontium sulfate scale formation on the membrane.
Prefiltration antiscalants: Scale inhibitors (also known as antiscalants) prevent a wider range of scales than acid, which can only prevent formation of calcium carbonate and calcium phosphate scales. In addition to inhibiting carbonate and phosphate scales, antiscalants inhibit sulfate and fluoride scales and disperse colloids and metal oxides. Despite claims that antiscalants can inhibit silica formation, there is no concrete evidence that silica polymerization is inhibited by antiscalants. Antiscalants can control acid-soluble scales at a fraction of the dosage required to control the same scale using sulfuric acid.
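As referenced in the pH-adjustment item above, the Langelier saturation index (LSI) is a common screening number for calcium carbonate scaling in the concentrated reject stream. The sketch below uses one widely quoted empirical approximation; the constants, input values and function name are assumptions for illustration, not a design method. A positive result indicates a scaling tendency, a negative one a dissolving (corrosive) tendency.

```python
import math

def langelier_index(ph, temp_c, tds_mg_l, ca_hardness_mg_l_caco3, alkalinity_mg_l_caco3):
    """One common empirical approximation of the Langelier Saturation Index.
    Calcium hardness and alkalinity are expressed as mg/L CaCO3."""
    a = (math.log10(tds_mg_l) - 1) / 10
    b = -13.12 * math.log10(temp_c + 273) + 34.55
    c = math.log10(ca_hardness_mg_l_caco3) - 0.4
    d = math.log10(alkalinity_mg_l_caco3)
    ph_s = (9.3 + a + b) - (c + d)   # pH at which the water is saturated with CaCO3
    return ph - ph_s

# RO reject streams concentrate hardness and alkalinity, pushing the LSI positive:
print(round(langelier_index(ph=7.8, temp_c=25, tds_mg_l=2000,
                            ca_hardness_mg_l_caco3=400,
                            alkalinity_mg_l_caco3=300), 2))   # ~0.9 -> scaling tendency
```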
Some small-scale desalination units use 'beach wells'. These are usually drilled on the seashore. These intake facilities are relatively simple to build and the seawater they collect is pretreated via slow filtration through subsurface sand/seabed formations. Raw seawater collected using beach wells is often of better quality in terms of solids, silt, oil, grease, organic contamination, and microorganisms, compared to open seawater intakes. Beach intakes may also yield source water of lower salinity.
High pressure pump
The high pressure pump pushes water through the membrane. Typical pressures for brackish water range from 1.6 to 2.6 MPa (225 to 376 psi). In the case of seawater, they range from 5.5 to 8 MPa (800 to 1,180 psi). This requires substantial energy. Where energy recovery is used, part of the high pressure pump's work is done by the energy recovery device, reducing energy inputs.
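A rough sense of the energy involved can be had from the hydraulic power relation P = Q × ΔP / η. The figures below are illustrative assumptions, not from the source.

```python
# Hydraulic power drawn by the high-pressure pump, P = Q * dP / efficiency.

feed_flow_m3_per_h = 100.0   # feed flow (illustrative)
delta_p_pa = 6.0e6           # 6 MPa, within the seawater range quoted above
pump_efficiency = 0.80       # assumed pump efficiency

q_m3_per_s = feed_flow_m3_per_h / 3600
power_w = q_m3_per_s * delta_p_pa / pump_efficiency
print(f"pump shaft power: {power_w / 1000:.0f} kW")   # ~208 kW
```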
Membrane assembly
The membrane assembly consists of a pressure vessel with a membrane that allows feedwater to be pushed against it. The membrane must be strong enough to withstand the pressure. RO membranes are made in a variety of configurations. The two most common are spiral-wound and hollow-fiber.
Only part of the water pumped onto the membrane passes through. The left-behind "concentrate" passes along the saline side of the membrane and flushes away the salt and other remnants. The percentage of desalinated water is the "recovery ratio". This varies with salinity and system design parameters: typically 20% for small seawater systems, 40% – 50% for larger seawater systems, and 80% – 85% for brackish water. The concentrate flow is typically 3 bar/50 psi less than the feed pressure, and thus retains much of the input energy.
The desalinated water purity is a function of the feed water salinity, membrane selection and recovery ratio. To achieve higher purity a second pass can be added which generally requires another pumping cycle. Purity expressed as total dissolved solids typically varies from 100 to 400 parts per million (ppm or mg/litre) on a seawater feed. A level of 500 ppm is generally the upper limit for drinking water, while the US Food and Drug Administration classifies mineral water as water containing at least 250 ppm.
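The dependence of permeate purity on feed salinity, membrane rejection and recovery can be sketched with a simple salt mass balance. The rejection and recovery values below are illustrative assumptions; real systems also experience concentration polarization and pressure effects that this ignores.

```python
# Simple salt mass balance linking feed salinity, membrane rejection and
# recovery ratio to permeate and concentrate TDS (illustrative figures).

feed_tds = 35_000      # mg/L, typical seawater
rejection = 0.995      # observed salt rejection, defined against the feed
recovery = 0.45        # fraction of feed recovered as permeate

permeate_tds = feed_tds * (1 - rejection)
# Salt in = salt out: Cf*Qf = Cp*Qp + Cc*Qc, with Qp = R*Qf and Qc = (1-R)*Qf
concentrate_tds = (feed_tds - recovery * permeate_tds) / (1 - recovery)

print(f"permeate:    {permeate_tds:.0f} mg/L")     # 175 mg/L, within the 100-400 ppm range above
print(f"concentrate: {concentrate_tds:.0f} mg/L")  # ~63,500 mg/L
```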
Energy recovery
Energy recovery can reduce energy consumption by 50% or more. Much of the input energy can be recovered from the concentrate flow, and the increasing efficiency of energy recovery devices greatly reduces energy requirements. Devices used, in order of invention, are:
Turbine or Pelton wheel: a water turbine driven by the concentrate flow, connected to the pump drive shaft provides part of the input power. Positive displacement axial piston motors have been used in place of turbines on smaller systems.
Turbocharger: a water turbine driven by concentrate flow, directly connected to a centrifugal pump that boosts the output pressure, reducing the pressure needed from the pump and thereby its energy input, similar in construction principle to car engine turbochargers.
Pressure exchanger: using the pressurized concentrate flow, via direct contact or a piston, to pressurize part of the membrane feed flow to near concentrate flow pressure. A boost pump then raises this pressure by typically 3 bar / 50 psi to the membrane feed pressure. This reduces the flow needed from the high-pressure pump by an amount equal to the concentrate flow, typically 60%, and thereby its energy input (see the sketch after this list). These are widely used on larger low-energy systems. They are capable of 3 kWh/m3 or less energy consumption.
Energy-recovery pump: a reciprocating piston pump. The pressurized concentrate flow is applied to one side of each piston to help drive the membrane feed flow from the opposite side. These are the simplest energy recovery devices to apply, combining the high pressure pump and energy recovery in a single self-regulating unit. These are widely used on smaller low-energy systems. They are capable of 3 kWh/m3 or less energy consumption.
Batch operation: RO systems run with a fixed volume of fluid (thermodynamically a closed system) do not suffer from wasted energy in the brine stream, as the energy to pressurize a virtually incompressible fluid (water) is negligible. Such systems have the potential to reach second-law efficiencies of 60%. Such systems can be created in multiple ways, including using pressurized tanks with pistons or bladders, or low-pressure tanks with conventional ERDs.
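As referenced in the pressure-exchanger item above, a rough comparison of specific pumping energy with and without a pressure exchanger can be made by splitting the feed into a permeate-sized stream (handled by the high-pressure pump) and a concentrate-sized stream (re-pressurised by the exchanger plus a small booster). The sketch below uses illustrative pressures, recovery and efficiency, ignores intake and pretreatment energy, and assumes an ideal exchanger.

```python
# Rough comparison of specific pumping energy with and without a pressure
# exchanger (PX). Illustrative figures only; not a design calculation.

delta_p = 60e5        # 60 bar membrane feed pressure, in Pa
boost_p = 3e5         # 3 bar booster pressure, in Pa
recovery = 0.45       # permeate / feed
eta = 0.80            # pump efficiency

j_per_m3_to_kwh = 1 / 3.6e6

# No energy recovery: the pump raises the whole feed to full pressure.
sec_no_erd = (delta_p / eta / recovery) * j_per_m3_to_kwh            # kWh per m3 permeate

# Pressure exchanger: the HP pump handles only a permeate-sized flow; the
# concentrate-sized flow is re-pressurised by the PX plus a small booster.
sec_px = (delta_p / eta + (1 - recovery) / recovery * boost_p / eta) * j_per_m3_to_kwh

print(f"without ERD: {sec_no_erd:.1f} kWh/m3")  # ~4.6 kWh/m3
print(f"with PX:     {sec_px:.1f} kWh/m3")      # ~2.2 kWh/m3
```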
Remineralisation and pH adjustment
The desalinated water is stabilized to protect downstream pipelines and storage, usually by adding lime or caustic soda to prevent corrosion of concrete-lined surfaces. Liming material is used to adjust the pH to between 6.8 and 8.1 to meet potable water specifications, primarily for effective disinfection and for corrosion control. Remineralisation may be needed to replace minerals removed by desalination, although restoring the mineral content that humans and plants obtain from typical fresh water has proved costly and inconvenient. For instance, water from Israel's national water carrier typically contains dissolved magnesium levels of 20 to 25 mg/liter, while water from the Ashkelon plant has no magnesium. Ashkelon water caused magnesium-deficiency symptoms in crops, including tomatoes, basil, and flowers, which had to be remedied by fertilization. Israeli drinking water standards require a minimum calcium level of 20 mg/liter. Ashkelon's post-desalination treatment uses sulfuric acid to dissolve calcite (limestone), resulting in calcium concentrations of 40 to 46 mg/liter, lower than the 45 to 60 mg/liter found in typical Israeli fresh water.
Disinfection
Post-treatment disinfection provides secondary protection against compromised membranes and downstream problems. Disinfection by means of ultraviolet (UV) lamps (sometimes called germicidal or bactericidal) may be employed to sterilize pathogens that evade the RO process. Chlorination or chloramination (chlorine and ammonia) protects against pathogens that may have lodged in the distribution system downstream.
Disadvantages
Large-scale industrial/municipal systems recover typically 75% to 80% of the feed water, or as high as 90%, because they can generate the required higher pressure.
Wastewater
Household RO units use a lot of water because they operate with low back pressure. Household RO water purifiers typically produce one liter of usable water for every 3–25 liters of wastewater, which is discharged, usually to the drain. Because the wastewater carries the rejected contaminants, recovering it is not practical for household systems; a unit therefore discharges several times as much wastewater as the treated water it delivers. This led India's National Green Tribunal to propose a ban on RO water purification systems in areas where the total dissolved solids (TDS) in the water are less than 500 mg/liter. In Delhi, large-scale use of household RO devices has increased the total water demand of the already water-parched National Capital Territory of India.
Health
RO removes both harmful contaminants and desirable minerals. Some studies report a relation between long-term health effects and consumption of water low in calcium and magnesium, although these studies are of low quality.
Waste-stream considerations
Depending upon the desired product, either the solvent or solute stream of RO will be waste. For food concentration applications, the concentrated solute stream is the product and the solvent stream is waste. For water treatment applications, the solvent stream is purified water and the solute stream is concentrated waste. The solvent waste stream from food processing may be used as reclaimed water, but there may be fewer options for disposal of a concentrated waste solute stream. Ships may use marine dumping and coastal desalination plants typically use marine outfalls. Landlocked RO plants may require evaporation ponds or injection wells to avoid polluting groundwater or surface runoff.
Research
Improving Current Membranes
Current RO membranes, thin-film composite (TFC) polyamide membranes, are being studied to find ways of improving their permeability. Through new imaging methods, researchers were able to make 3D models of membranes and examine how water flowed through them. They found that areas of low flow in TFC membranes significantly decreased water permeability. By making the membranes more uniform so that water can flow through them without slowing, membrane permeability could be improved by 30–40%.
Electrodialysis
Research has examined integrating RO with electrodialysis to improve recovery of valuable deionized products, or to reduce concentrate volumes.
Low-pressure High-recovery (LPHR)
Another approach is low-pressure high-recovery multistage RO (LPHR). It produces concentrated brine and freshwater by cycling the output repeatedly through a relatively porous membrane at relatively low pressure. Each cycle removes additional impurities. Once the output is relatively pure, it is sent through a conventional RO membrane at conventional pressure to complete the filtration step. LPHR was found to be economically feasible, recovering more than 70% with an OPD between 58 and 65 bar and leaving no more than 350 ppm TDS from a seawater feed with 35,000 ppm TDS.
Carbon Nanotubes (CNTs)
Carbon nanotubes may help resolve the typical tradeoff between the permeability and the selectivity of RO membranes. CNTs offer many desirable characteristics, including mechanical strength, electron affinity, and flexibility during modification. By restructuring carbon nanotubes and coating or impregnating them with other chemical compounds, membranes can be manufactured with the most desirable combination of traits. The hope with CNT membranes is to achieve high water permeability while removing fewer neutral solutes from the water, which would reduce energy costs and the cost of remineralization after purification.
Graphene
Graphene membranes aim to take advantage of graphene's thinness to increase efficiency. Graphene is a single layer of carbon atoms, and graphene-based membranes, at around 100 nm thick, are about 1,000 times thinner than current membranes of roughly 100 μm. Many researchers were concerned about the durability of graphene and whether it could withstand RO pressures. New research finds that, depending on the substrate (a supporting layer that does no filtration and only provides structural support), graphene membranes can withstand 57 MPa of pressure, about 10 times the typical pressure for seawater RO.
Batch RO may offer increased energy efficiency, more durable equipment and higher salinity limits.
The long-accepted "solution-diffusion" model holds that individual water molecules diffuse through the membrane. A research team devised an alternative "solution-friction" theory, claiming that water molecules instead move in groups through transient pores. Characterizing that process could guide membrane development.
See also
Electrodeionization
ERDLator
Forward osmosis
Microfiltration
Reverse osmosis plant
Richard Stover, pioneer of an energy-recovery device currently in use in most seawater reverse-osmosis desalination plants
Silt density index
Salinity gradient
Milli-Q water
Water pollution
Water quality
References
Sources
Food processing
Water desalination
Filters
Water technology
Membrane technology
Separation processes
Industrial water treatment | Reverse osmosis | [
"Chemistry",
"Engineering"
] | 5,868 | [
"Water desalination",
"Separation processes",
"Water treatment",
"Chemical equipment",
"Filters",
"Industrial water treatment",
"Membrane technology",
"Filtration",
"nan",
"Water technology"
] |
18,589,852 | https://en.wikipedia.org/wiki/Frozen%20tissue%20array | Frozen tissue array consists of fresh frozen tissues in which up to 50 separate tissue cores are assembled in array fashion to allow simultaneous histological analysis.
History
Paraffin tissue arrays were developed in the late 1980s; they allow scientists to perform high-throughput analysis of gene and protein expression in multiple tissue samples, in particular to compare protein levels with antibodies by immunohistochemistry. Various paraffin tissue arrays are now commercially available from many biotech companies, and most arrays can be made easily with a microarraying instrument (Beecher Instruments Inc.). However, paraffin-embedded tissues have limitations. Buffered formalin solutions cross-link proteins and nucleic acids when they are used to fix tissues, so the DNA, RNA, and protein within the tissues are damaged to varying degrees during fixation. As a result, many experimental results from formalin-fixed tissues are not reliable. Because frozen tissue sections do not go through any fixation procedure, the DNA, RNA, and protein in frozen tissues retain their native characteristics much better than in paraffin-embedded tissues. Scientists developing antibodies for therapeutic purposes need preliminary results from frozen tissues to obtain approval from the FDA. Consequently, frozen tissue arrays are well suited to high-throughput analysis for this purpose.
Procedure
Frozen tissue cores 2 mm in diameter are removed from the regions of interest in frozen tissue OCT blocks, each at a different freezing temperature, since each tissue type has its own preferred temperature when frozen. All the frozen tissue cores are then inserted into a recipient OCT frozen block in a precisely spaced array pattern. Sections from this block are cut on a cryostat, mounted on a microscope slide, and then analyzed by any standard method of histological analysis. Each frozen tissue array block can be cut into 100–500 sections, which can be subjected to independent tests. Tests commonly employed on frozen tissue arrays include immunohistochemistry and in situ hybridization.
See also
Cytomics
Frozen section procedure
MicroArray and Gene Expression (MAGE)
References
Battifora H: The multitumor (sausage) tissue block: novel method for immunohistochemical antibody testing. Lab Invest 1986, 55:244-248.
Battifora H, Mehta P: The checkerboard tissue block. An improved multitissue control block. Lab Invest 1990, 63:722-724.
Kononen J, Bubendorf L, Kallioniemi A, Barlund M, Schraml P, Leighton S, Torhorst J, Mihatsch MJ, Sauter G, Kallioniemi OP: Tissue microarrays for high-throughput molecular profiling of tumor specimens. Nat Med 1998, 4:844-847.
Schoenberg Fejzo M, Slamon DJ: Frozen tumor tissue microarray technology for analysis of tumor RNA, DNA, and proteins. American Journal of Pathology 2001, 159(5): 1645–50.
Schoenberg Fejzo M, Slamon DJ. Tissue microarrays from frozen tissues-OCT technique. Methods Mol Biol. 2010, 664:73-80.
"Frozen Tissue Array". BioChain Institute Inc. Retrieved 2017-06-24.
Tissues (biology)
Microarrays | Frozen tissue array | [
"Chemistry",
"Materials_science",
"Biology"
] | 675 | [
"Biochemistry methods",
"Genetics techniques",
"Microtechnology",
"Microarrays",
"Bioinformatics",
"Molecular biology techniques"
] |
468,843 | https://en.wikipedia.org/wiki/List%20of%20moments%20of%20inertia | Moment of inertia, denoted by , measures the extent to which an object resists rotational acceleration about a particular axis; it is the rotational analogue to mass (which determines an object's resistance to linear acceleration). The moments of inertia of a mass have units of dimension ML2 ([mass] × [length]2). It should not be confused with the second moment of area, which has units of dimension L4 ([length]4) and is used in beam calculations. The mass moment of inertia is often also known as the rotational inertia, and sometimes as the angular mass.
For simple objects with geometric symmetry, one can often determine the moment of inertia in an exact closed-form expression. Typically this occurs when the mass density is constant, but in some cases the density can vary throughout the object as well. In general, it may not be straightforward to symbolically express the moment of inertia of shapes with more complicated mass distributions and lacking symmetry. When calculating moments of inertia, it is useful to remember that it is an additive function and exploit the parallel axis and perpendicular axis theorems.
This article mainly considers symmetric mass distributions, with constant density throughout the object, and the axis of rotation is taken to be through the center of mass unless otherwise specified.
Moments of inertia
Following are scalar moments of inertia. In general, the moment of inertia is a tensor, see below.
List of 3D inertia tensors
This list of moment of inertia tensors is given for principal axes of each object.
To obtain the scalar moments of inertia I above, the tensor moment of inertia I is projected along some axis defined by a unit vector n according to the formula:
I = n · I · n ≡ n_i I_ij n_j,
where the dots indicate tensor contraction and the Einstein summation convention is used. In the above table, n would be the unit Cartesian basis ex, ey, ez to obtain Ix, Iy, Iz respectively.
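A minimal numerical sketch of this projection (using NumPy; the cube example and numbers are assumptions chosen for illustration) is:

```python
import numpy as np

# Scalar moment of inertia about an axis n from the inertia tensor I:
# I_n = n . I . n  (in components, n_i I_ij n_j), as described above.

def moment_about_axis(inertia_tensor, axis):
    n = np.asarray(axis, dtype=float)
    n = n / np.linalg.norm(n)              # the axis must be a unit vector
    return n @ np.asarray(inertia_tensor) @ n

# Solid cube of mass m and side s: all three principal moments are m*s**2/6,
# so the result is the same for any axis through the centre.
m, s = 2.0, 0.5
I_cube = (m * s**2 / 6) * np.eye(3)
print(moment_about_axis(I_cube, [0, 0, 1]))   # 0.0833...
print(moment_about_axis(I_cube, [1, 1, 1]))   # same value, by symmetry
```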
See also
List of second moments of area
Parallel axis theorem
Perpendicular axis theorem
Notes
References
External links
The inertia tensor of a tetrahedron
Tutorial on deriving moment of inertia for common shapes
Moment of inertia
Moments of inertia
Physical quantities
Rigid bodies
Tensors
Moment (physics)
"Physics",
"Mathematics",
"Engineering"
] | 492 | [
"Physical phenomena",
"Tensors",
"Physical quantities",
"Quantity",
"Mechanics",
"Mechanical engineering",
"Physical properties",
"Moment (physics)"
] |
468,893 | https://en.wikipedia.org/wiki/Geometric%20dimensioning%20and%20tolerancing | Geometric dimensioning and tolerancing (GD&T) is a system for defining and communicating engineering tolerances via a symbolic language on engineering drawings and computer-generated 3D models that describes a physical object's nominal geometry and the permissible variation thereof. GD&T is used to define the nominal (theoretically perfect) geometry of parts and assemblies, the allowable variation in size, form, orientation, and location of individual features, and how features may vary in relation to one another such that a component is considered satisfactory for its intended use. Dimensional specifications define the nominal, as-modeled or as-intended geometry, while tolerance specifications define the allowable physical variation of individual features of a part or assembly.
There are several standards available worldwide that describe the symbols and define the rules used in GD&T. One such standard is American Society of Mechanical Engineers (ASME) Y14.5. This article is based on that standard. Other standards, such as those from the International Organization for Standardization (ISO) describe a different system which has some nuanced differences in its interpretation and rules (see GPS&V). The Y14.5 standard provides a fairly complete set of rules for GD&T in one document. The ISO standards, in comparison, typically only address a single topic at a time. There are separate standards that provide the details for each of the major symbols and topics below (e.g. position, flatness, profile, etc.). BS 8888 provides a self-contained document taking into account a lot of GPS&V standards.
Origin
The origin of GD&T is credited to Stanley Parker, who developed the concept of "true position". While little is known about Parker's life, it is known that he worked at the Royal Torpedo Factory in Alexandria, West Dunbartonshire, Scotland. His work increased production of naval weapons by new contractors.
In 1940, Parker published Notes on Design and Inspection of Mass Production Engineering Work, the earliest work on geometric dimensioning and tolerancing. In 1956, Parker published Drawings and Dimensions, which became the basic reference in the field.
Fundamental concepts
Dimensions
A dimension is defined in ASME Y14.5 as "a numerical value(s) or mathematical expression in appropriate units of measure used to define the form, size, orientation, or location, of a part or feature." Special types of dimensions include basic dimensions (theoretically exact dimensions) and reference dimensions (dimensions used to inform, not define a feature or part).
Units of measure
The units of measure in a drawing that follows GD&T can be selected by the creator of the drawing. Most often drawings are standardized to either SI linear units, millimeters (denoted "mm"), or US customary linear units, decimal inches (denoted "IN"). Dimensions can contain only a number without units if all dimensions are the same units and there is a note on the drawing that clearly specifies what the units are.
Angular dimensions can be expressed in decimal degrees or degrees, minutes, and seconds.
Tolerances
Every feature on every manufactured part is subject to variation, therefore, the limits of allowable variation must be specified. Tolerances can be expressed directly on a dimension by limits, plus/minus tolerances, or geometric tolerances, or indirectly in tolerance blocks, notes, or tables.
Geometric tolerances are described by feature control frames, which are rectangular boxes on a drawing that indicate the type of geometric control, tolerance value, modifier(s) and/or datum(s) relevant to the feature. The type of tolerances used with symbols in feature control frames can be:
equal bilateral
unequal bilateral
unilateral
no particular distribution (a "floating" zone)
Tolerances for the profile symbols are equal bilateral unless otherwise specified, and for the position symbol tolerances are always equal bilateral. For example, the position of a hole has a tolerance of .020 inches. This means the hole can move ±.010 inches, which is an equal bilateral tolerance. It does not mean the hole can move +.015/−.005 inches, which is an unequal bilateral tolerance. Unequal bilateral and unilateral tolerances for profile are specified by adding further information to clearly show this is what is required.
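As an illustration of how an equal-bilateral position tolerance on a hole is commonly evaluated (assuming a diametral, circular tolerance zone and ignoring datum shift and any material-condition bonus tolerance), the usual two-axis true-position calculation is 2 × sqrt(dx² + dy²). The sketch below is illustrative only; the function name and measured values are assumptions.

```python
import math

def true_position(dx, dy):
    """Diameter of the smallest circular zone, centred on the basic (true)
    location, that contains the measured hole axis: 2*sqrt(dx^2 + dy^2)."""
    return 2 * math.sqrt(dx**2 + dy**2)

# A hole with a position tolerance of .020 inch (diametral zone):
tolerance = 0.020
dx, dy = 0.006, 0.007          # measured deviations from the basic dimensions
tp = true_position(dx, dy)
print(f"true position = {tp:.4f} in -> {'OK' if tp <= tolerance else 'out of tolerance'}")
# 2*sqrt(0.006**2 + 0.007**2) ~= 0.0184 in, so this hole is within tolerance.
```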
Datums and datum references
A datum is a theoretically exact plane, line, point, or axis. A datum feature is a physical feature of a part identified by a datum feature symbol and corresponding datum feature triangle.
These are then referred to by one or more 'datum references' which indicate measurements that should be made with respect to the corresponding datum feature. The datum reference frame can describe how the part fits or functions.
Purpose & rules
The purpose of GD&T is to describe the engineering intent of parts and assemblies. GD&T can more accurately define the dimensional requirements for a part, allowing over 50% more tolerance zone than coordinate (or linear) dimensioning in some cases. Proper application of GD&T will ensure that the part defined on the drawing has the desired form, fit (within limits) and function with the largest possible tolerances. GD&T can add quality and reduce cost at the same time through producibility.
According to ASME Y14.5, the fundamental rules of GD&T are as follows,
All dimensions must have a tolerance. Plus and minus tolerances may be applied directly to dimensions or applied from a general tolerance block or general note. For basic dimensions, geometric tolerances are indirectly applied in a related feature control frame. The only exceptions are for dimensions marked as minimum, maximum, stock or reference.
Dimensions and tolerancing shall fully define each feature. Measurement directly from the drawing or assuming dimensions is not allowed except for special undimensioned drawings.
A drawing should have the minimum number of dimensions required to fully define the end product. The use of reference dimensions should be minimized.
Dimensions should be applied to features and arranged to represent the function and mating relationship of the part. There should only be one way to interpret dimensions.
Part geometry should be defined without explicitly specifying manufacturing methods.
If dimensions are required during manufacturing but not the final geometry (due to shrinkage or other causes) they should be marked as non-mandatory.
Dimensions should be arranged for maximum readability and should be applied to visible lines in true profiles.
When geometry is normally controlled by gage sizes or by code (e.g. stock materials), the dimension(s) shall be included with the gage or code number in parentheses following the dimension.
Angles of 90° are assumed when lines (including center lines) are shown at right angles, but no angle is specified.
Basic 90° angles are assumed where center lines of features in a pattern or surfaces shown at right angles on a 2D orthographic drawing are located or defined by basic dimensions and no angle is specified.
A basic dimension of zero is assumed where axes, center planes, or surfaces are shown coincident on a drawing, and the relationship between features is defined by geometric tolerances.
Dimensions and tolerances are valid at 20 °C (68 °F) and 101.3 kPa (14.69 psi) unless stated otherwise.
Unless explicitly stated, dimensions and tolerances only apply in a free-state condition.
Unless explicitly stated, tolerances apply to the full length, width, and depth of a feature.
Dimensions and tolerances only apply at the level of the drawing where specified. It is not mandatory that they apply at other levels (such as an assembly drawing).
Coordinate systems shown on drawings should be right-handed. Each axis should be labeled and the positive direction should be shown.
Symbols
List of geometric characteristics
List of modifiers
The following table shows only some of the more commonly used modifiers in GD&T. It is not an exhaustive list.
Certification
The American Society of Mechanical Engineers (ASME) provides two levels of certification:
Technologist GDTP, which provides an assessment of an individual's ability to understand drawings that have been prepared using the language of Geometric Dimensioning & Tolerancing.
Senior GDTP, which provides the additional measure of an individual's ability to select proper geometric controls as well as to properly apply them to drawings.
Data exchange
Exchange of geometric dimensioning and tolerancing (GD&T) information between CAD systems is available on different levels of fidelity for different purposes:
In the early days of CAD, exchange-only lines, texts and symbols were written into the exchange file. A receiving system could display them on the screen or print them out, but only a human could interpret them.
GD&T presentation: On a next higher level the presentation information is enhanced by grouping them together into callouts for a particular purpose, e.g. a datum feature callout and a datum reference frame. And there is also the information which of the curves in the file are leader, projection or dimension curves and which are used to form the shape of a product.
GD&T representation: Unlike GD&T presentation, the GD&T representation does not deal with how the information is presented to the user but only deals with which element of a shape of a product has which GD&T characteristic. A system supporting GD&T representation may display GD&T information in some tree and other dialogs and allow the user to directly select and highlight the corresponding feature on the shape of the product, 2D and 3D.
Ideally both GD&T presentation and representation are available in the exchange file and are associated with each other. Then a receiving system can allow a user to select a GD&T callout and get the corresponding feature highlighted on the shape of the product.
An enhancement of GD&T representation is defining a formal language for GD&T (similar to a programming language) which also has built-in rules and restrictions for the proper GD&T usage. This is still a research area (see below reference to McCaleb and ISO 10303-1666).
GD&T validation: Based on GD&T representation data (but not on GD&T presentation) and the shape of a product in some useful format (e.g. a boundary representation), it is possible to validate the completeness and consistency of the GD&T information. The software tool FBTol from the Kansas City Plant is probably the first one in this area.
GD&T representation information can also be used for the software assisted manufacturing planning and cost calculation of parts. See ISO 10303-224 and 238 below.
Documents and standards
ISO TC 10 Technical product documentation
ISO 129 Technical drawings – Indication of dimensions and tolerances
ISO 7083 Symbols for geometrical tolerancing – Proportions and dimensions
ISO 13715 Technical drawings – Edges of undefined shape – Vocabulary and indications
ISO 15786 Simplified representation and dimensioning of holes
ISO 16792:2021 Technical product documentation—Digital product definition data practices (Note: ISO 16792:2006 was derived from ASME Y14.41-2003 by permission of ASME)
ISO/TC 213 Dimensional and geometrical product specifications and verification
In ISO/TR 14638 GPS – Masterplan the distinction between fundamental, global, general and complementary GPS standards is made.
Fundamental GPS standards
ISO 8015 Concepts, principles and rules
Global GPS standards
ISO 14660-1 Geometrical features
ISO/TS 17, orientation and location
ISO 1101 Geometrical tolerancing – Tolerances of form, orientation, location and run-out
Amendment 1 Representation of specifications in the form of a 3D model
ISO 1119 Series of conical tapers and taper angles
ISO 2692 Geometrical tolerancing – Maximum material requirement (MMR), least material requirement (LMR) and reciprocity requirement (RPR)
ISO 3040 Dimensioning and tolerancing – Cones
ISO 5458 Geometrical tolerancing – Positional tolerancing
ISO 5459 Geometrical tolerancing – Datums and datum systems
ISO 10578 Tolerancing of orientation and location – Projected tolerance zone
ISO 10579 Dimensioning and tolerancing – Non-rigid parts
ISO 14406 Extraction
ISO 22432 Features used in specification and verification
General GPS standards: Areal and profile surface texture
ISO 1302 Indication of surface texture in technical product documentation
ISO 3274 Surface texture: Profile method – Nominal characteristics of contact (stylus) instruments
ISO 4287 Surface texture: Profile method – Terms, definitions and surface texture parameters
ISO 4288 Surface texture: Profile method – Rules and procedures for the assessment of surface texture
ISO 8785 Surface imperfections – Terms, definitions and parameters
Form of a surface independent of a datum or datum system. Each of them has a part 1 for the Vocabulary and parameters and a part 2 for the Specification operators:
ISO 12180 Cylindricity
ISO 12181 Roundness
ISO 12780 Straightness
ISO 12781 Flatness
ISO 25178 Surface texture: Areal
General GPS standards: Extraction and filtration techniques
ISO/TS 16610 Filtration
ISO 11562 Surface texture: Profile method – Metrological characteristics of phase correct filters
ISO 12085 Surface texture: Profile method – Motif parameters
ISO 13565 Profile method; Surfaces having stratified functional properties
ASME standards
ASME Y14.41 Digital Product Definition Data Practices
ASME Y14.5 Dimensioning and Tolerancing
ASME Y14.5.1M Mathematical Definition of Dimensioning and Tolerancing Principles
ASME is also working on a Spanish translation for the ASME Y14.5 – Dimensioning and Tolerancing Standard.
GD&T standards for data exchange and integration
ISO 10303 Industrial automation systems and integration – Product data representation and exchange
ISO 10303-47 Integrated generic resource: Shape variation tolerances
ISO/TS 10303-1130 Application module: Derived shape element
ISO/TS 10303-1050 Application module: Dimension tolerance
ISO/TS 10303-1051 Application module: Geometric tolerance
ISO/TS 10303-1052 Application module: Default tolerance
ISO/TS 10303-1666 Application module: Extended geometric tolerance
ISO 10303-203 Application protocol: Configuration controlled 3D design of mechanical parts and assemblies
ISO 10303-210 Application protocol: Electronic assembly, interconnection, and packaging design
ISO 10303-214 Application protocol: Core data for automotive mechanical design processes
ISO 10303-224 Application protocol: Mechanical product definition for process planning using machining features
ISO 10303-238 Application protocol: Application interpreted model for computerized numerical controllers (STEP-NC)
ISO 10303-242 Application protocol: Managed model based 3D engineering
See also
Dimensional instruments
Engineering fit
Engineering tolerance
Gauge (instrument)
Geometrical Product Specification and Verification
Position sensor
Specification of surface finish
References
Further reading
External links
General tolerances for linear and angular dimensions according to ISO 2768
What is GD&T
The importance of GD&T
GD&T Glossary of Terms and Definitions
GDT: Introduction
ASME Certification
Changes and Additions to ASME Y14.5M
NIST MBE PMI Validation and Conformance Testing Project Tests implementations of GD&T in CAD software
STEP File Analyzer and Viewer - Analyze GD&T in a STEP file
Technical drawing
Applied geometry
Geometric measurement | Geometric dimensioning and tolerancing | [
"Physics",
"Mathematics",
"Engineering"
] | 3,138 | [
"Geometric measurement",
"Design engineering",
"Physical quantities",
"Applied mathematics",
"Quantity",
"Civil engineering",
"Geometry",
"Applied geometry",
"Technical drawing"
] |
469,201 | https://en.wikipedia.org/wiki/Plymax | Plymax is a composite material. It consists of a thin sheet of metal, such as aluminium, copper, or duralumin, that is bonded to a thicker sheet of plywood, giving it strength and rigidity at a relatively low weight.
Plymax was introduced during the 1920s, quickly spreading throughout Europe and the world by the 1930s. The material has been extensively used in the aviation industry; various military aircraft throughout the 1930s and 1940s made use of the material. Plymax was also adopted by the automotive industry, being used on the bodies of several cars, such as the 1931 Triumph Super 9 and the articulated body of the Trojan Tasker. It has also found use within the construction industry for various large fixtures, typically partitions and doors, as well as by furniture manufacturers for various home furnishings.
History
Plymax first emerged during the early 1920s, one of the first compositions was a copper-faced plywood, manufactured by Luterma. The term 'Plymax' was a registered trademark of Venesta, which introduced the material to the British market. During the 1930s, the material was being extensively used within the architectural and construction sectors, where items such as partitions, doors, cubicals, counters, and stalls were constructed and installed throughout large buildings such as schools, hospitals, offices, and factories.
According to Venesta, the typical metals used in Plymax are steel and aluminium; alternatives have included copper, bronze, stainless steel and others. Key qualities of the material are its rigidity and relatively light weight. By itself, each metal sheet lacks stiffness; when bonded to the plywood to form a composite, it becomes far more rigid. The principle is similar to that of a steel girder, in which the vertical web contributes little to the overall strength in comparison to the flanges at the top and bottom, but plays a critical role in the rigidity.
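The girder analogy can be made quantitative with the second moment of area and the parallel-axis theorem: moving thin face sheets away from the neutral axis multiplies bending stiffness enormously. The dimensions below are illustrative assumptions, not from the source, and only the metal faces are counted (the plywood core adds further stiffness).

```python
# Bending stiffness goes with the second moment of area I. Two thin metal
# faces separated by a core are far stiffer than the same sheets alone,
# because of the parallel-axis term A*d**2. Illustrative dimensions only.

width = 1.0        # m, strip width
t_face = 0.0005    # m, 0.5 mm metal sheet
t_core = 0.006     # m, 6 mm plywood core

# Two loose sheets bending about their own mid-planes:
i_loose = 2 * (width * t_face**3 / 12)

# The same two sheets bonded as the faces of a sandwich panel:
d = (t_core + t_face) / 2                      # distance of each face centroid from the neutral axis
i_sandwich = 2 * (width * t_face**3 / 12 + width * t_face * d**2)

print(f"stiffness gain of the faces alone: x{i_sandwich / i_loose:,.0f}")   # ~x500
```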
Due to its high strength-to-weight ratio, the material attracted the attention of various aircraft manufacturers. One notable example of its use was on the Morane-Saulnier M.S.406, a Second World War-era fighter aircraft that was extensively used by the French Air Force. Various aircraft would adopt the material for portions of their design; in one case, the interior floor of the Handley Page Halifax was composed of Plymax.
Plymax was reportedly once a popular material for furniture use, although its use has somewhat diminished following the arrival of new types of processed woods.
References
External links
Plywood: Material of the Modern World
From cockpit to domestic interior: the Great War and the architecture of Wells Coates
Composite materials | Plymax | [
"Physics"
] | 542 | [
"Materials",
"Composite materials",
"Matter"
] |
469,365 | https://en.wikipedia.org/wiki/Fundamental%20domain | Given a topological space and a group acting on it, the images of a single point under the group action form an orbit of the action. A fundamental domain or fundamental region is a subset of the space which contains exactly one point from each of these orbits. It serves as a geometric realization for the abstract set of representatives of the orbits.
There are many ways to choose a fundamental domain. Typically, a fundamental domain is required to be a connected subset with some restrictions on its boundary, for example, smooth or polyhedral. The images of a chosen fundamental domain under the group action then tile the space. One general construction of fundamental domains uses Voronoi cells.
Hints at a general definition
Given an action of a group G on a topological space X by homeomorphisms, a fundamental domain for this action is a set D of representatives for the orbits. It is usually required to be a reasonably nice set topologically, in one of several precisely defined ways. One typical condition is that D is almost an open set, in the sense that D is the symmetric difference of an open set in X with a set of measure zero, for a certain (quasi)invariant measure on X. A fundamental domain always contains a free regular set U, an open set moved around by G into disjoint copies, and nearly as good as D in representing the orbits. Frequently D is required to be a complete set of coset representatives with some repetitions, but the repeated part has measure zero. This is a typical situation in ergodic theory. If a fundamental domain is used to calculate an integral on X/G, sets of measure zero do not matter.
For example, when X is Euclidean space Rn of dimension n, and G is the lattice Zn acting on it by translations, the quotient X/G is the n-dimensional torus. A fundamental domain D here can be taken to be [0,1)n, which differs from the open set (0,1)n by a set of measure zero, or the closed unit cube [0,1]n, whose boundary consists of the points whose orbit has more than one representative in D.
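For this torus example, mapping an arbitrary point of R^n to its representative in the fundamental domain [0,1)^n amounts to taking the fractional part of each coordinate, as in the small sketch below (illustrative only).

```python
import numpy as np

# Reduce a point of R^n to the fundamental domain [0, 1)^n for the
# translation action of Z^n: keep the fractional part of each coordinate.
def reduce_to_unit_cube(x):
    return np.mod(np.asarray(x, dtype=float), 1.0)

print(reduce_to_unit_cube([2.75, -0.25, 3.0]))   # [0.75 0.75 0.  ]
```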
Examples
Examples in the three-dimensional Euclidean space R3.
for n-fold rotation: an orbit is either a set of n points around the axis, or a single point on the axis; the fundamental domain is a sector
for reflection in a plane: an orbit is either a set of 2 points, one on each side of the plane, or a single point in the plane; the fundamental domain is a half-space bounded by that plane
for reflection in a point: an orbit is a set of 2 points, one on each side of the center, except for one orbit, consisting of the center only; the fundamental domain is a half-space bounded by any plane through the center
for 180° rotation about a line: an orbit is either a set of 2 points opposite to each other with respect to the axis, or a single point on the axis; the fundamental domain is a half-space bounded by any plane through the line
for discrete translational symmetry in one direction: the orbits are translates of a 1D lattice in the direction of the translation vector; the fundamental domain is an infinite slab
for discrete translational symmetry in two directions: the orbits are translates of a 2D lattice in the plane through the translation vectors; the fundamental domain is an infinite bar with parallelogrammatic cross section
for discrete translational symmetry in three directions: the orbits are translates of the lattice; the fundamental domain is a primitive cell which is e.g. a parallelepiped, or a Wigner-Seitz cell, also called Voronoi cell/diagram.
In the case of translational symmetry combined with other symmetries, the fundamental domain is part of the primitive cell. For example, for wallpaper groups the fundamental domain is a factor 1, 2, 3, 4, 6, 8, or 12 smaller than the primitive cell.
Fundamental domain for the modular group
The diagram to the right shows part of the construction of the fundamental domain for the action of the modular group Γ on the upper half-plane H.
This famous diagram appears in all classical books on modular functions. (It was probably well known to C. F. Gauss, who dealt with fundamental domains in the guise of the reduction theory of quadratic forms.) Here, each triangular region (bounded by the blue lines) is a free regular set of the action of Γ on H. The boundaries (the blue lines) are not a part of the free regular sets. To construct a fundamental domain of H/Γ, one must also consider how to assign points on the boundary, being careful not to double-count such points. Thus, the free regular set in this example is
U = { z ∈ H : |z| > 1, |Re(z)| < 1/2 }.
The fundamental domain is built by adding the boundary on the left plus half the arc on the bottom, including the point in the middle:
D = U ∪ { z ∈ H : |z| ≥ 1, Re(z) = −1/2 } ∪ { z ∈ H : |z| = 1, −1/2 ≤ Re(z) ≤ 0 }.
The choice of which points of the boundary to include as a part of the fundamental domain is arbitrary, and varies from author to author.
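A small sketch of the standard reduction algorithm, using the generators T: z ↦ z + 1 and S: z ↦ −1/z of the modular group, is given below; the function name and the handling of boundary points are illustrative conventions, not part of the source.

```python
def reduce_to_fundamental_domain(z, max_iter=100):
    """Map a point of the upper half-plane to a Gamma-equivalent point with
    |Re z| <= 1/2 and |z| >= 1, alternating the translations T and the
    inversion S. Boundary points follow whatever convention is chosen."""
    assert z.imag > 0, "z must lie in the upper half-plane"
    for _ in range(max_iter):
        z = complex(z.real - round(z.real), z.imag)   # translate into |Re z| <= 1/2
        if abs(z) >= 1:
            break
        z = -1 / z                                    # invert if inside the unit circle
    return z

w = reduce_to_fundamental_domain(complex(3.7, 0.01))
print(w, abs(w) >= 1 and abs(w.real) <= 0.5)   # a reduced representative, True
```

The loop terminates because each inversion strictly increases the imaginary part while |z| < 1, so the orbit eventually lands in the closure of the domain.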
The core difficulty of defining the fundamental domain lies not so much with the definition of the set per se, but rather with how to treat integrals over the fundamental domain, when integrating functions with poles and zeros on the boundary of the domain.
See also
Free regular set
Fundamental polygon
Brillouin zone
Fundamental pair of periods
Petersson inner product
Cusp neighborhood
External links
Topological groups
Ergodic theory
Riemann surfaces
Group actions (mathematics) | Fundamental domain | [
"Physics",
"Mathematics"
] | 1,117 | [
"Group actions",
"Space (mathematics)",
"Ergodic theory",
"Topological spaces",
"Topological groups",
"Symmetry",
"Dynamical systems"
] |
470,084 | https://en.wikipedia.org/wiki/Cholecalciferol | Cholecalciferol, also known as vitamin D3 or colecalciferol, is a type of vitamin D that is produced by the skin when exposed to UVB light; it is found in certain foods and can be taken as a dietary supplement.
Cholecalciferol is synthesised in the skin following sunlight exposure. It is then converted in the liver to calcifediol (25-hydroxycholecalciferol), which is further converted in the kidney to calcitriol (1,25-dihydroxycholecalciferol). One of calcitriol's most important functions is to promote calcium uptake by the intestines. Cholecalciferol is present in foods such as fatty fish, beef liver, eggs, and cheese. In some countries, cholecalciferol is also added to products like plant milks, cow's milk, fruit juice, yogurt, and margarine.
Cholecalciferol can be taken orally as a dietary supplement to prevent vitamin D deficiency or as a medication to treat associated diseases, including rickets. It is also used in the management of familial hypophosphatemia, hypoparathyroidism that is causing low blood calcium, and Fanconi syndrome. Vitamin-D supplements may not be effective in people with severe kidney disease. Excessive doses in humans can result in vomiting, constipation, muscle weakness, and confusion. Other risks include kidney stones. Doses greater than 40,000 IU (1,000 μg) per day are generally required before high blood calcium occurs. Normal doses are safe in pregnancy.
Cholecalciferol was first described in 1936. It is on the World Health Organization's List of Essential Medicines. In 2022, it was the 62nd most commonly prescribed medication in the United States, with more than 10million prescriptions. Cholecalciferol is available as a generic medication.
Medical uses
Unlike vitamin D2, cholecalciferol (vitamin D3) appears to stimulate the body's type I interferon signaling system, which protects against bacteria and viruses.
Vitamin D deficiency
Cholecalciferol is a form of vitamin D which is naturally synthesized in skin and functions as a pro-hormone, being converted to calcitriol. This is important for maintaining calcium levels and promoting bone health and development. As a medication, cholecalciferol may be taken as a dietary supplement to prevent or to treat vitamin D deficiency. One gram is 40,000,000 IU; equivalently, 1 IU is 0.025 μg (25 ng). Dietary reference intake values for vitamin D (ergocalciferol, which is D2, or cholecalciferol, which is D3, or both) have been established, and recommendations vary depending on the country:
In the US: 15 μg (600 IU) per day for all individuals (males, females, pregnant/lactating women) between the ages of 1 and 70 years, inclusive. For all individuals older than 70 years, 20 μg (800 IU) per day is recommended.
In the EU: 15 μg (600 IU) per day for all people older than 1 year and 10 μg (400 IU) per day for infants aged 7–11 months, assuming minimal cutaneous vitamin D synthesis.
In the UK: a 'Safe Intake' (SI) of 8.5 to 10 μg (340–400 IU) per day for infants < 1 year (including exclusively breastfed infants) and an SI of 10 μg (400 IU) per day for children aged 1 to <4 years; for all other population groups aged 4 years and more (including pregnant/lactating women) a Reference Nutrient Intake (RNI) of 10 μg (400 IU) per day.
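Since the unit relationship is fixed (1 μg of cholecalciferol corresponds to 40 IU, as noted above), converting between the units used in these recommendations is simple arithmetic; the sketch below is illustrative only.

```python
# Unit bookkeeping for vitamin D doses: 1 microgram of cholecalciferol = 40 IU.
def iu_to_micrograms(iu):
    return iu / 40.0

def micrograms_to_iu(ug):
    return ug * 40.0

print(iu_to_micrograms(600))   # 15.0 ug  (US recommendation for ages 1-70)
print(micrograms_to_iu(10))    # 400.0 IU (UK reference nutrient intake)
```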
Low levels of vitamin D3 are more commonly found in individuals living in northern latitudes or with other reasons for a lack of regular sun exposure, including being housebound, frail, elderly, or obese, having darker skin, and wearing clothes that cover most of the skin. Supplements are recommended for these groups of people.
The Institute of Medicine in 2010 recommended a maximum uptake of vitamin D of 4,000 IU per day, finding that the dose for the lowest observed adverse effect level is 40,000 IU daily for at least 12 weeks, and that there was a single case of toxicity at a higher intake after more than seven years of daily use; this case of toxicity occurred in circumstances that have led other researchers to dispute whether it is a credible case to consider when making vitamin D intake recommendations. Patients with severe vitamin D deficiency will require treatment with a loading dose; its magnitude can be calculated based on the actual serum 25-hydroxy-vitamin D level and body weight.
There are conflicting reports concerning the relative effectiveness of cholecalciferol (D3) versus ergocalciferol (D2), with some studies suggesting less efficacy of D2, and others showing no difference. There are differences in absorption, binding and inactivation of the two forms, with evidence usually favoring cholecalciferol in raising levels in blood, although more research is needed.
A much less common use of cholecalciferol therapy in rickets utilizes a single large dose and has been called stoss therapy. Treatment is given either orally or by intramuscular injection as a single large dose, or sometimes in two to four divided doses. There are concerns about the safety of such large doses.
Low circulating vitamin D levels have been associated with lower total testosterone levels in males. Vitamin D supplementation could potentially improve total testosterone concentration, although more research is needed.
Other diseases
A 2007 meta-analysis concluded that a daily intake of 1,000–2,000 IU of vitamin D3 could reduce the incidence of colorectal cancer with minimal risk. Also, a 2008 study published in Cancer Research showed that the addition of vitamin D3 (along with calcium) to the diet of some mice fed a regimen similar in nutritional content to a new Western diet with 1000 IU cholecalciferol per day prevented colon cancer development. In human trials of daily supplementation, however, cholecalciferol supplements had no effect on the risk of colorectal cancer.
Supplements are not recommended for prevention of cancer as any effects of cholecalciferol are very small. Although correlations exist between low levels of blood serum cholecalciferol and higher rates of various cancers, multiple sclerosis, tuberculosis, heart disease, and diabetes, the consensus is that supplementing levels is not beneficial. It is thought that tuberculosis may result in lower levels; however, it is not entirely clear how the two are related.
Biochemistry
Structure
Cholecalciferol is one of the five forms of vitamin D. Cholecalciferol is a secosteroid, that is, a steroid molecule with one ring open.
Mechanism of action
By itself cholecalciferol is inactive. It is converted to its active form by two hydroxylations: the first in the liver, by CYP2R1 or CYP27A1, to form 25-hydroxycholecalciferol (calcifediol, 25-OH vitamin D3). The second hydroxylation occurs mainly in the kidney through the action of CYP27B1 to convert 25-OH vitamin D3 into 1,25-dihydroxycholecalciferol (calcitriol, 1,25-(OH)2vitamin D3). All these metabolites are bound in blood to the vitamin D-binding protein. The action of calcitriol is mediated by the vitamin D receptor, a nuclear receptor which regulates the synthesis of hundreds of proteins and is present in virtually every cell in the body.
Biosynthesis
7-Dehydrocholesterol is the precursor of cholecalciferol. Within the epidermal layer of skin, 7-dehydrocholesterol undergoes an electrocyclic reaction as a result of UVB light at wavelengths between , with peak synthesis occurring at . This results in the opening of the vitamin precursor B-ring through a conrotatory pathway making previtamin D3 (pre-cholecalciferol). In a process which is independent of UV light, the pre-cholecalciferol then undergoes a [1,7] antarafacial sigmatropic rearrangement and therein finally isomerizes to form vitamin D3.
Only a small fraction of sunlight lies in the active UVB wavelengths, but sufficient amounts of cholecalciferol can be produced with moderate exposure of the skin, depending on the strength of the sun. Time of day, season, latitude, and altitude affect the strength of the sun, and pollution, cloud cover or glass all reduce the amount of UVB exposure. Exposure of face, arms and legs, averaging minutes twice per week, may be sufficient, but the darker the skin, and the weaker the sunlight, the more minutes of exposure are needed. Vitamin D overdose is impossible from UV exposure; the skin reaches an equilibrium where the vitamin degrades as fast as it is created.
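The equilibrium described above, in which previtamin D3 is degraded about as fast as it is formed under continued UV exposure, can be illustrated with a toy two-rate model. The rate constants below are arbitrary and chosen only to show the plateau; they are not measured photochemistry.

```python
# Toy model of the skin-synthesis equilibrium: previtamin D3 is produced at a
# constant rate under UVB and photodegraded in proportion to its amount, so the
# amount plateaus at production_rate / degradation_rate. Arbitrary units.
def simulate_previtamin_d3(production_rate=1.0, degradation_rate=0.05,
                           minutes=240, dt=1.0):
    amount = 0.0
    history = []
    for _ in range(int(minutes / dt)):
        amount += (production_rate - degradation_rate * amount) * dt
        history.append(amount)
    return history

if __name__ == "__main__":
    trace = simulate_previtamin_d3()
    print(trace[9], trace[-1])   # early amount vs. near-plateau amount
    print(1.0 / 0.05)            # analytical steady state for these rates: 20.0
```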
Cholecalciferol can be produced in skin from the light emitted by the UV lamps in tanning beds, which produce ultraviolet primarily in the UVA spectrum, but typically produce 4% to 10% of the total UV emissions as UVB. Levels in blood are higher in frequent users of tanning salons.
A 293-nanometer UVB light-emitting diode (LED) was found to be 2.4 times more efficient than sunlight at producing vitamin D3, in less exposure time (https://pubmed.ncbi.nlm.nih.gov/28904394/).
Whether cholecalciferol and all forms of vitamin D are by definition "vitamins" can be disputed, since the definition of vitamins includes that the substance cannot be synthesized by the body and must be ingested. Cholecalciferol is synthesized by the body during UVB radiation exposure.
The three steps in the synthesis and activation of vitamin D3 are regulated as follows:
Cholecalciferol is synthesized in the skin from 7-dehydrocholesterol under the action of ultraviolet B (UVB) light. It reaches an equilibrium after several minutes depending on the intensity of the UVB in the sunlight – determined by latitude, season, cloud cover, and altitude – and the age and degree of pigmentation of the skin.
Hydroxylation in the endoplasmic reticulum of liver hepatocytes of cholecalciferol to calcifediol (25-hydroxycholecalciferol) by 25-hydroxylase is loosely regulated, if at all, and blood levels of this molecule largely reflect the amount of cholecalciferol produced in the skin combined with any vitamin D2 or D3 ingested.
Hydroxylation in the kidneys of calcifediol to calcitriol by 1-alpha-hydroxylase is tightly regulated: it is stimulated by parathyroid hormone and serves as the major control point in the production of the active circulating hormone calcitriol (1,25-dihydroxyvitamin D3).
Industrial production
Cholecalciferol is produced industrially for use in vitamin supplements and to fortify foods. As a pharmaceutical drug it is called cholecalciferol (USAN) or colecalciferol (INN, BAN). It is produced by the ultraviolet irradiation of 7-dehydrocholesterol extracted from lanolin found in sheep's wool. Cholesterol is extracted from wool grease and wool wax alcohols obtained from the cleaning of wool after shearing. The cholesterol undergoes a four-step process to make 7-dehydrocholesterol, the same compound that is produced in the skin of animals. The 7-dehydrocholesterol is then irradiated with ultraviolet light. Some unwanted isomers are formed during irradiation: these are removed by various techniques, leaving a resin which melts at about room temperature and usually has a potency of .
Cholecalciferol for use in vitamin supplements is also produced industrially from lichens, which is suitable for vegans.
Stability
Cholecalciferol is very sensitive to UV radiation and will rapidly, but reversibly, break down to form supra-sterols, which can further irreversibly convert to ergosterol.
Pesticide
Rodents are somewhat more susceptible to high doses than other species, and cholecalciferol has been used in poison bait for the control of these pests.
The mechanism of high-dose cholecalciferol is that it can produce "hypercalcemia, which results in systemic calcification of soft tissue, leading to kidney failure, cardiac abnormalities, hypertension, CNS depression, and GI upset. Signs generally develop within of ingestion and can include depression, loss of appetite, polyuria, and polydipsia." High-dose cholecalciferol tends to accumulate rapidly in adipose tissue but is released more slowly, which tends to delay death for several days after high-dose bait is introduced.
In New Zealand, possums have become a significant pest animal. For possum control, cholecalciferol has been used as the active ingredient in lethal baits. The LD50 is 16.8 mg/kg, but only 9.8 mg/kg if calcium carbonate is added to the bait. Kidneys and heart are target organs. LD50 of 4.4 mg/kg has been reported in rabbits, with lethality to almost all rabbits ingesting doses greater than 15 mg/kg. Toxicity has been reported across a wide range of cholecalciferol dosages, with LD50 as high as 88 mg/kg or LDLo as low as 2 mg/kg reported for dogs.
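Using the LD50 figures quoted above, the nominal amount of cholecalciferol corresponding to an LD50 for an animal of a given body mass is a simple multiplication. A small sketch follows; the body masses in the example calls are illustrative assumptions, not values from the source.

```python
# Nominal dose (mg) corresponding to a given LD50 (mg per kg of body weight).
def ld50_dose_mg(ld50_mg_per_kg: float, body_mass_kg: float) -> float:
    return ld50_mg_per_kg * body_mass_kg

if __name__ == "__main__":
    # LD50 values quoted in the text; body masses are illustrative assumptions.
    print(ld50_dose_mg(16.8, 3.0))  # possum, plain bait               -> 50.4 mg
    print(ld50_dose_mg(9.8, 3.0))   # possum, bait + calcium carbonate -> 29.4 mg
    print(ld50_dose_mg(4.4, 2.0))   # rabbit                           -> 8.8 mg
```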
Researchers have reported that the compound is less toxic to non-target species than earlier generations of anticoagulant rodenticides (Warfarin and congeners) or Bromethalin, and that relay toxicosis (poisoning by eating a poisoned animal) has not been documented. Nevertheless, the same source reports that use of cholecalciferol in rodenticides may still pose a significant hazard to other animals, such as dogs and cats, when rodenticide bait or other forms of cholecalciferol are directly ingested.
See also
Hypervitaminosis D, Vitamin D poisoning
Ergocalciferol, vitamin D2
25-Hydroxyvitamin D 1-alpha-hydroxylase, a kidney enzyme that converts calcifediol to calcitriol
References
External links
Ultraviolet B Light Emitting Diodes (LEDs) Are More Efficient and Effective in Producing Vitamin D3 in Human Skin Compared to Natural Sunlight
NIST Chemistry WebBook page for cholecalciferol
Vitamin D metabolism, sex hormones, and male reproductive function.
Cyclopentanes
CYP2D6 inhibitors
Polyenes
Rodenticides
Secosteroids
Vitamers
Vitamin D
World Health Organization essential medicines
Wikipedia medicine articles ready to translate
Vinylidene compounds | Cholecalciferol | [
"Biology"
] | 3,141 | [
"Biocides",
"Rodenticides"
] |
470,107 | https://en.wikipedia.org/wiki/Ferritin | Ferritin is a universal intracellular and extracellular protein that stores iron and releases it in a controlled fashion. The protein is produced by almost all living organisms, including archaea, bacteria, algae, higher plants, and animals. It is the primary intracellular iron-storage protein in both prokaryotes and eukaryotes, keeping iron in a soluble and non-toxic form. In humans, it acts as a buffer against iron deficiency and iron overload.
Ferritin is found in most tissues as a cytosolic protein, but small amounts are secreted into the serum where it functions as an iron carrier. Plasma ferritin is also an indirect marker of the total amount of iron stored in the body; hence, serum ferritin is used as a diagnostic test for iron-deficiency anemia and iron overload. Aggregated ferritin transforms into a water insoluble, crystalline and amorphous form of storage iron called hemosiderin.
Ferritin is a globular protein complex consisting of 24 protein subunits forming a hollow spherical nanocage with multiple metal–protein interactions. Ferritin with iron removed is called apoferritin.
Gene
Ferritin genes are highly conserved between species. All vertebrate ferritin genes have three introns and four exons. In human ferritin, introns are present between amino acid residues 14 and 15, 34 and 35, and 82 and 83; in addition, there are one to two hundred untranslated bases at either end of the combined exons. The tyrosine residue at amino acid position 27 is thought to be associated with biomineralization.
Protein structure
Ferritin is a hollow globular protein of mass 474 kDa and comprising 24 subunits. Typically it has internal and external diameters of about 8 and 12 nm, respectively. The nature of these subunits varies by class of organism:
In vertebrates, the subunits are of two types, light (L) and heavy (H), which have apparent molecular mass of 19 kDa and 21 kDa, respectively; their sequences are homologous (about 50% identical).
Amphibians have an additional ("M") type of ferritin.
Plants and bacteria have a single ferritin; it most closely resembles the vertebrate H-type.
In the gastropods of the genus Lymnaea, two types have been recovered, from somatic cells and the yolk, respectively (see below).
In the pearl oyster Pinctada fucata, an additional subunit resembling Lymnaea soma ferritin is associated with shell formation.
In the parasite Schistosoma, two types are present: one in males, the other in females.
All the aforementioned ferritins are similar, in terms of their primary sequence, to the vertebrate H-type. In E. coli, a 20% similarity to human H-ferritin is observed. Some ferritin complexes in vertebrates are hetero-oligomers of two highly related gene products with slightly different physiological properties. The ratio of the two homologous proteins in the complex depends on the relative expression levels of the two genes.
Cytosolic ferritin shell (apoferritin) is a heteropolymer of 24 subunits of heavy (H) and light (L) peptides that form a hollow spherical nanocage that covers an iron core composed of crystallites together with phosphate and hydroxide ions. The resulting particle is similar to ferrihydrite (5Fe2O3·9H2O). Each ferritin complex can store about 4500 iron (Fe3+) ions. The proportion of H to L subunits varies in ferritin from different tissues, explaining its heterogeneity on isoelectric focusing. L-rich ferritins (from spleen and liver) are more basic than H-rich ferritins (from heart and red blood cells).
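From the figures above (a 24-subunit shell of about 474 kDa holding up to roughly 4500 Fe3+ ions), the iron load of a fully loaded ferritin particle can be estimated with a short back-of-the-envelope sketch. The atomic mass of iron (about 55.85 Da) is the only number below not taken from the text.

```python
# Rough iron capacity of a single ferritin particle, using figures from the text.
FERRITIN_SHELL_KDA = 474.0     # apoferritin shell mass (24 subunits)
MAX_IRON_IONS = 4500           # approximate Fe3+ capacity per particle
IRON_ATOMIC_MASS_DA = 55.85    # standard atomic mass of iron

iron_mass_kda = MAX_IRON_IONS * IRON_ATOMIC_MASS_DA / 1000.0
print(f"Iron mass per full particle: ~{iron_mass_kda:.0f} kDa")                   # ~251 kDa
print(f"Iron / protein mass ratio:   ~{iron_mass_kda / FERRITIN_SHELL_KDA:.2f}")  # ~0.53
```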
Serum ferritin, which is typically iron-poor, consists almost exclusively of L subunits. Serum ferritin is heterogeneous due to glycosylation. The glycosylation and direct relationship of serum ferritin concentration to iron storage in macrophages suggest it is secreted by macrophages in response to changing iron levels.
Human mitochondrial ferritin, MtF, was found to express as a pro-protein. When a mitochondrion takes it up, it processes it into a mature protein similar to the ferritins found in the cytoplasm, which it assembles to form functional ferritin shells. Unlike other human ferritins, it is a homopolymer of H type ferritin and appears to have no introns (intronless) in its genetic code. The mitochondrial ferritin's Ramachandran plot shows its structure to be mainly alpha helical with a low prevalence of beta sheets. It accumulates in large amounts in the erythroblasts of subjects with impaired heme synthesis.
Function
Iron storage
Ferritin is present in every cell type. It serves to store iron in a non-toxic form, to deposit it in a safe form, and to transport it to areas where it is required. The function and structure of the expressed ferritin protein varies in different cell types. This is controlled primarily by the amount and stability of messenger RNA (mRNA), but also by changes in how the mRNA is stored and how efficiently it is transcribed. One major trigger for the production of many ferritins is the mere presence of iron; an exception is the yolk ferritin of Lymnaea sp., which lacks an iron-responsive unit.
Free iron is toxic to cells as it acts as a catalyst in the formation of free radicals from reactive oxygen species via the Fenton reaction. Hence vertebrates have an elaborate set of protective mechanisms to bind iron in various tissue compartments. Within cells, iron is stored in a protein complex as ferritin or the related complex hemosiderin. Apoferritin binds to free ferrous iron and stores it in the ferric state. As ferritin accumulates within cells of the reticuloendothelial system, protein aggregates are formed as hemosiderin. Iron in ferritin or hemosiderin can be extracted for release by the RE cells, although hemosiderin is less readily available. Under steady-state conditions, the level of ferritin in the blood serum correlates with total body stores of iron; thus, serum ferritin is the most convenient laboratory test to estimate iron stores.
Because iron is an important mineral in mineralization, ferritin is employed in the shells of organisms such as molluscs to control the concentration and distribution of iron, thus sculpting shell morphology and colouration. It also plays a role in the haemolymph of the polyplacophora, where it serves to rapidly transport iron to the mineralizing radula.
Iron is released from ferritin for use by ferritin degradation, which is performed mainly by lysosomes.
Ferroxidase activity
Vertebrate ferritin consists of two or three subunits which are named based on their molecular weight: L "light", H "heavy", and M "middle" subunits. The M subunit has only been reported in bullfrogs. In bacteria and archaea, ferritin consists of one subunit type. H and M subunits of eukaryotic ferritin and all subunits of bacterial and archaeal ferritin are H-type and have ferroxidase activity, which means they are able to convert iron from the ferrous (Fe2+) to ferric (Fe3+) forms. This limits the deleterious reaction which occurs between ferrous iron and hydrogen peroxide known as the Fenton reaction which produces the highly damaging hydroxyl radical. The ferroxidase activity occurs at a diiron binding site in the middle of each H-type subunit. After oxidation of Fe(II), the Fe(III) product stays metastably in the ferroxidase center and is displaced by Fe(II), a mechanism that appears to be common among ferritins of all three domains of life. The light chain of ferritin has no ferroxidase activity but may be responsible for the electron transfer across the protein cage.
Immune response
Ferritin concentrations increase drastically in the presence of an infection or cancer. Endotoxins are an up-regulator of the gene coding for ferritin, thus causing the concentration of ferritin to rise. By contrast, organisms such as Pseudomonas, although possessing endotoxin, cause plasma ferritin levels to drop significantly within the first 48 hours of infection. Thus, the iron stores of the infected body are denied to the infective agent, impeding its metabolism.
Stress response
The concentration of ferritin has been shown to increase in response to stresses such as anoxia, which implies that it is an acute phase protein.
Mitochondria
Mitochondrial ferritin has many roles pertaining to molecular function. It participates in ferroxidase activity, binding, iron ion binding, oxidoreductase activity, ferric iron binding, metal ion binding as well as transition metal binding. Within the realm of biological processes it participates in oxidation-reduction, iron ion transport across membranes and cellular iron ion homeostasis.
Yolk
In some snails, the protein component of the egg yolk is primarily ferritin. This is a different ferritin, with a different genetic sequence, from the somatic ferritin. It is produced in the midgut glands and secreted into the haemolymph, whence it is transported to the eggs.
Tissue distribution
In vertebrates, ferritin is usually found within cells, although it is also present in smaller quantities in the plasma.
Diagnostic uses
Serum ferritin levels are measured in medical laboratories as part of the iron studies workup for iron-deficiency anemia. They are measured in nanograms per milliliter (ng/mL) or micrograms per liter (μg/L); the two units are equivalent.
The ferritin levels measured usually have a direct correlation with the total amount of iron stored in the body. However, ferritin levels may be artificially high in cases of anemia of chronic disease, where ferritin is elevated in its capacity as an inflammatory acute phase protein and not as a marker for iron overload.
Normal ranges
A normal ferritin blood level, referred to as the reference interval, is determined by many testing laboratories. The ranges for ferritin can vary between laboratories, but typical ranges would be between 40 and 300 ng/mL (=μg/L) for males, and 20–200 ng/mL (=μg/L) for females.
Deficiency
A 2014 review in the New England Journal of Medicine stated that a ferritin level below 30 ng/mL indicates iron deficiency, while a level below 10 ng/mL indicates iron-deficiency anemia. A 2020 World Health Organization guideline states that ferritin indicates iron deficiency below 12 ng/mL in apparently-healthy children under 5 and below 15 ng/mL in apparently-healthy individuals aged 5 and over.
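The cut-offs quoted above differ between the 2014 review and the 2020 WHO guideline; the sketch below simply makes the two rule sets explicit. It is an illustration of the stated thresholds, not diagnostic software.

```python
# Classify a serum ferritin value (ng/mL) against the thresholds quoted above.
def classify_ferritin_2014_review(ferritin_ng_ml: float) -> str:
    """Thresholds from the 2014 review cited in the text."""
    if ferritin_ng_ml < 10:
        return "iron-deficiency anemia"
    if ferritin_ng_ml < 30:
        return "iron deficiency"
    return "not deficient by this criterion"

def deficient_by_who_2020(ferritin_ng_ml: float, age_years: float) -> bool:
    """WHO 2020 cut-offs for apparently-healthy individuals."""
    cutoff = 12 if age_years < 5 else 15
    return ferritin_ng_ml < cutoff

if __name__ == "__main__":
    print(classify_ferritin_2014_review(8))         # iron-deficiency anemia
    print(classify_ferritin_2014_review(25))        # iron deficiency
    print(deficient_by_who_2020(13, age_years=3))   # False (cut-off 12 ng/mL)
    print(deficient_by_who_2020(13, age_years=30))  # True  (cut-off 15 ng/mL)
```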
Some studies suggest that women with fatigue and ferritin below 50 ng/mL see reduced fatigue after iron supplementation.
In the setting of anemia, low serum ferritin is the most specific lab finding for iron-deficiency anemia. However it is less sensitive, since its levels are increased in the blood by infection or any type of chronic inflammation, and these conditions may convert what would otherwise be a low level of ferritin from lack of iron, into a value in the normal range. For this reason, low ferritin levels carry more information than those in the normal range. A falsely low blood ferritin (equivalent to a false positive test) is very uncommon, but can result from a hook effect of the measuring tools in extreme cases.
Low ferritin may also indicate hypothyroidism, vitamin C deficiency or celiac disease.
Low serum ferritin levels are seen in some patients with restless legs syndrome, not necessarily related to anemia, but perhaps due to low iron stores short of anemia.
Vegetarianism is not a cause of low serum ferritin levels, according to the American Dietetic Association's position in 2009: "Incidence of iron-deficiency anemia among vegetarians is similar to that of non-vegetarians. Although vegetarian adults have lower iron stores than non-vegetarians, their serum ferritin levels are usually within the normal range."
Excess
If ferritin is high, there is iron in excess or else there is an acute inflammatory reaction in which ferritin is mobilized without iron excess. For example, ferritins may be high in infection without signaling body iron overload.
Ferritin is also used as a marker for iron overload disorders, such as hemochromatosis or hemosiderosis. Adult-onset Still's disease, some porphyrias, and hemophagocytic lymphohistiocytosis/macrophage activation syndrome are diseases in which the ferritin level may be abnormally raised.
As ferritin is also an acute-phase reactant, it is often elevated in the course of disease. A normal C-reactive protein can be used to exclude elevated ferritin caused by acute phase reactions.
Ferritin has been shown to be elevated in some cases of COVID-19 and may correlate with worse clinical outcome. Ferritin and IL-6 are considered to be possible immunological biomarkers for severe and fatal cases of COVID-19. Ferritin and C-reactive protein may be possible screening tools for early diagnosis of systemic inflammatory response syndrome in cases of COVID-19.
According to a study of anorexia nervosa patients, ferritin can be elevated during periods of acute malnourishment, perhaps because iron moves into storage as intravascular volume, and thus the number of red blood cells, falls.
Another study suggests that due to the catabolic nature of anorexia nervosa, isoferritins may be released. Furthermore, ferritin has significant non-storage roles within the body, such as protection from oxidative damage. The rise of these isoferritins may contribute to an overall increase in ferritin concentration. The measurement of ferritin through immunoassay or immunoturbidimetric methods may also pick up these isoferritins, and thus may not be a true reflection of iron storage status.
Studies reveal that a transferrin saturation (serum iron concentration ÷ total iron binding capacity) over 60 percent in men and over 50 percent in women identified the presence of an abnormality in iron metabolism (hereditary hemochromatosis, heterozygotes, and homozygotes) with approximately 95 percent accuracy. This finding helps in the early diagnosis of hereditary hemochromatosis, especially while serum ferritin still remains low. The retained iron in hereditary hemochromatosis is primarily deposited in parenchymal cells, with reticuloendothelial cell accumulation occurring very late in the disease. This is in contrast to transfusional iron overload in which iron deposition occurs first in the reticuloendothelial cells and then in parenchymal cells. This explains why ferritin levels remain relatively low in hereditary hemochromatosis, while transferrin saturation is high.
In chronic liver diseases
Hematological abnormalities are often associated with chronic liver diseases. Both iron overload and iron-deficiency anemia have been reported in patients with liver cirrhosis. The former is mainly due to reduced hepcidin levels caused by the decreased synthetic capacity of the liver, while the latter is due to acute and chronic bleeding caused by portal hypertension. Inflammation is also present in patients with advanced chronic liver disease. As a consequence, elevated hepatic and serum ferritin levels are consistently reported in chronic liver diseases.
Studies have shown an association between high serum ferritin levels and increased risk of short-term mortality in cirrhotic patients with acute decompensation and acute-on-chronic liver failure. Another study found an association between high serum ferritin levels and increased risk of long-term mortality in compensated and stable decompensated cirrhotic patients. The same study demonstrated that increased serum ferritin levels could predict the development of bacterial infection in stable decompensated cirrhotic patients, while in compensated cirrhotic patients the very first acute decompensation episode occurred more often in those with low serum ferritin levels. This latter finding was explained by the association between chronic bleeding and increased portal pressure.
Discovery
Ferritin was discovered in 1937 by a Czechoslovakian scientist. Sam Granick and Leonor Michaelis produced apoferritin in 1942.
Applications
Ferritin is used in materials science as a precursor in making iron nanoparticles (NP) for carbon nanotube growth by chemical vapor deposition. It has also been shown to effectively store electrons for hours and to facilitate electron tunneling under ambient conditions, properties that may be involved in biological processes.
Cavities formed by ferritin and mini-ferritins (Dps) proteins have been successfully used as the reaction chamber for the fabrication of metal nanoparticles. Protein shells served as a template to restrain particle growth and as a coating to prevent coagulation/aggregation between NPs. Using various sizes of protein shells, various sizes of NPs can be easily synthesized for chemical, physical and bio-medical applications.
Experimental COVID-19 vaccines have been produced that display the spike protein's receptor binding domain on the surface of ferritin nanoparticles.
Notes
The primary peptide sequence of human ferritin is:
MTTASTSQVR QNYHQDSEAA INRQINLELY ASYVYLSMSY YFDRDDVALK NFAKYFLHQS HEEREHAEKL MKLQNQRGGR IFLQDIKKPD CDDWESGLNA MECALHLEKN VNQSLLEFPS PISPSPSCWH HYTTNRPQPQ HHLLRPRRRK RPHSIPTPIL IFRSP.
See also
Bacterioferritin
DNA-binding protein from starved cells
Ferritin light chain
Transferrin
References
External links
Ferritin at Lab Tests Online
Iron metabolism
Blood tests
Chemical pathology
Acute-phase proteins
Storage proteins | Ferritin | [
"Chemistry",
"Biology"
] | 3,932 | [
"Biochemistry",
"Blood tests",
"Chemical pathology"
] |
470,140 | https://en.wikipedia.org/wiki/Transferrin | Transferrins are glycoproteins found in vertebrates which bind and consequently mediate the transport of iron (Fe) through blood plasma. They are produced in the liver and contain binding sites for two Fe3+ ions. Human transferrin is encoded by the TF gene and produced as a 76 kDa glycoprotein.
Transferrin glycoproteins bind iron tightly, but reversibly. Although iron bound to transferrin is less than 0.1% (4 mg) of total body iron, it forms the most vital iron pool with the highest rate of turnover (25 mg/24 h). Transferrin has a molecular weight of around 80 kDa and contains two specific high-affinity Fe(III) binding sites. The affinity of transferrin for Fe(III) is extremely high (association constant is 10^20 M−1 at pH 7.4) but decreases progressively with decreasing pH below neutrality. Transferrins are not limited to binding iron; they also bind other metal ions. These glycoproteins are located in various bodily fluids of vertebrates. Some invertebrates have proteins that act like transferrin found in the hemolymph.
When not bound to iron, transferrin is known as "apotransferrin" (see also apoprotein).
Occurrence and function
Transferrins are glycoproteins that are often found in biological fluids of vertebrates. When a transferrin protein loaded with iron encounters a transferrin receptor on the surface of a cell, e.g., erythroid precursors in the bone marrow, it binds to it and is transported into the cell in a vesicle by receptor-mediated endocytosis. The pH of the vesicle is reduced by hydrogen ion pumps ( ATPases) to about 5.5, causing transferrin to release its iron ions. Iron release rate is dependent on several factors including pH levels, interactions between lobes, temperature, salt, and chelator. The receptor with its ligand bound transferrin is then transported through the endocytic cycle back to the cell surface, ready for another round of iron uptake.
Each transferrin molecule has the ability to carry two iron ions in the ferric form (Fe3+).
Humans and other mammals
The liver is the main site of transferrin synthesis but other tissues and organs, including the brain, also produce transferrin. A major source of transferrin secretion in the brain is the choroid plexus in the ventricular system. The main role of transferrin is to deliver iron from absorption centers in the duodenum and white blood cell macrophages to all tissues. Transferrin plays a key role in areas where erythropoiesis and active cell division occur. The receptor helps maintain iron homeostasis in the cells by controlling iron concentrations.
The gene coding for transferrin in humans is located in chromosome band 3q21.
Medical professionals may check serum transferrin level in iron deficiency and in iron overload disorders such as hemochromatosis.
Other species
Drosophila melanogaster has three transferrin genes and is highly divergent from all other model clades, Ciona intestinalis one, Danio rerio has three highly divergent from each other, as do Takifugu rubripes and Xenopus tropicalis and Gallus gallus, while Monodelphis domestica has two divergent orthologs, and Mus musculus has two relatively close and one more distant ortholog. Relatedness and orthology/paralogy data are also available for Dictyostelium discoideum, Arabidopsis thaliana, and Pseudomonas aeruginosa.
Structure
In humans, transferrin consists of a polypeptide chain containing 679 amino acids and two carbohydrate chains. The protein is composed of alpha helices and beta sheets that form two domains. The N- and C- terminal sequences are represented by globular lobes and between the two lobes is an iron-binding site.
The amino acids which bind the iron ion to the transferrin are identical for both lobes; two tyrosines, one histidine, and one aspartic acid. For the iron ion to bind, an anion is required, preferably carbonate ().
Transferrin also has a transferrin iron-bound receptor; it is a disulfide-linked homodimer. In humans, each monomer consists of 760 amino acids. It enables ligand bonding to the transferrin, as each monomer can bind to one or two atoms of iron. Each monomer consists of three domains: the protease, the helical, and the apical domains. The shape of a transferrin receptor resembles a butterfly based on the intersection of three clearly shaped domains. Two main transferrin receptors are found in humans, denoted transferrin receptor 1 (TfR1) and transferrin receptor 2 (TfR2). Although both are similar in structure, TfR1 can only bind specifically to human TF, whereas TfR2 also has the capability to interact with bovine TF.
Immune system
Transferrin is also associated with the innate immune system. It is found in the mucosa and binds iron, thus creating an environment low in free iron that impedes bacterial survival in a process called iron withholding. The level of transferrin decreases in inflammation.
Role in disease
An increased plasma transferrin level is often seen in patients with iron-deficiency anemia, during pregnancy, and with the use of oral contraceptives, reflecting an increase in transferrin protein expression. When plasma transferrin levels rise, there is a reciprocal decrease in percent transferrin iron saturation, and a corresponding increase in total iron-binding capacity in iron-deficient states.
A decreased plasma transferrin level can occur in iron overload diseases and protein malnutrition. An absence of transferrin results from a rare genetic disorder known as atransferrinemia, a condition characterized by anemia and hemosiderosis in the heart and liver that leads to heart failure and many other complications as well as to H63D syndrome.
Studies reveal that a transferrin saturation (serum iron concentration ÷ total iron binding capacity) over 60 percent in men and over 50 percent in women identified the presence of an abnormality in iron metabolism (hereditary hemochromatosis, heterozygotes and homozygotes) with approximately 95 percent accuracy. This finding helps in the early diagnosis of hereditary hemochromatosis, especially while serum ferritin still remains low. The retained iron in hereditary hemochromatosis is primarily deposited in parenchymal cells, with reticuloendothelial cell accumulation occurring very late in the disease. This is in contrast to transfusional iron overload in which iron deposition occurs first in the reticuloendothelial cells and then in parenchymal cells. This explains why ferritin levels remain relatively low in hereditary hemochromatosis, while transferrin saturation is high.
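Transferrin saturation, as defined in the passage above, is serum iron divided by total iron-binding capacity; the sketch below applies the 60 percent (men) / 50 percent (women) screening thresholds mentioned in the text. The example laboratory values are illustrative assumptions.

```python
# Transferrin saturation = serum iron / total iron-binding capacity (same units),
# checked against the sex-specific screening thresholds mentioned in the text.
def transferrin_saturation_pct(serum_iron: float, tibc: float) -> float:
    """Return saturation as a percentage; both inputs must share the same units."""
    return 100.0 * serum_iron / tibc

def suggests_iron_metabolism_abnormality(saturation_pct: float, sex: str) -> bool:
    threshold = 60.0 if sex == "male" else 50.0
    return saturation_pct > threshold

if __name__ == "__main__":
    sat = transferrin_saturation_pct(serum_iron=200, tibc=300)  # illustrative values
    print(f"{sat:.1f}%")                                        # 66.7%
    print(suggests_iron_metabolism_abnormality(sat, "male"))    # True
```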
Transferrin and its receptor have been shown to diminish tumour cells when the receptor is used to attract antibodies.
Transferrin and nanomedicine
Many drugs intended to treat diseases of the brain are hindered by poor transport across the blood-brain barrier, yielding poor uptake into areas of the brain. Transferrin glycoproteins are able to bypass the blood-brain barrier via receptor-mediated transport through specific transferrin receptors found on the brain capillary endothelial cells. Because of this, it is theorized that nanoparticles acting as drug carriers bound to transferrin glycoproteins can penetrate the blood-brain barrier, allowing these substances to reach diseased cells in the brain. Advances with transferrin-conjugated nanoparticles could lead to non-invasive drug delivery in the brain, with potential therapeutic benefit for diseases targeting the central nervous system (CNS), such as Alzheimer's or Parkinson's disease.
Other effects
Carbohydrate deficient transferrin increases in the blood with heavy ethanol consumption and can be monitored through laboratory testing.
Transferrin is an acute phase protein and is seen to decrease in inflammation, cancers, and certain diseases (in contrast to other acute phase proteins, e.g., C-reactive protein, which increase in case of acute inflammation).
Pathology
Atransferrinemia is associated with a deficiency in transferrin.
In nephrotic syndrome, urinary loss of transferrin, along with other serum proteins such as thyroxine-binding globulin, gammaglobulin, and anti-thrombin III, can manifest as iron-resistant microcytic anemia.
Reference ranges
An example reference range for transferrin is 204–360 mg/dL. Laboratory test results should always be interpreted using the reference range provided by the laboratory that performed the test.
A high transferrin level may indicate an iron deficiency anemia. Levels of serum iron and total iron binding capacity (TIBC) are used in conjunction with transferrin to specify any abnormality. See interpretation of TIBC. Low transferrin likely indicates malnutrition.
Interactions
Transferrin has been shown to interact with insulin-like growth factor 2 and IGFBP3. Transcriptional regulation of transferrin is upregulated by retinoic acid.
Related proteins
Members of the family include blood serotransferrin (or siderophilin, usually simply called transferrin); lactotransferrin (lactoferrin); milk transferrin; egg white ovotransferrin (conalbumin); and membrane-associated melanotransferrin.
See also
Beta-2 transferrin
Transferrin receptor
Total iron-binding capacity
Transferrin saturation
Ferritin
Optiferrin recombinant human transferrin
Atransferrinemia
Hypotransferrinemia
HFE H63D gene mutation
References
Further reading
External links
Chemical pathology
Iron metabolism
Transferrins | Transferrin | [
"Chemistry",
"Biology"
] | 2,067 | [
"Biochemistry",
"Chemical pathology"
] |
471,167 | https://en.wikipedia.org/wiki/Liquid%20crystal%20on%20silicon | Liquid crystal on silicon (LCoS or LCOS) is a miniaturized reflective active-matrix liquid-crystal display or "microdisplay" using a liquid crystal layer on top of a silicon backplane. It is also known as a spatial light modulator. LCoS initially was developed for projection televisions, but has since found additional uses in wavelength selective switching, structured illumination, near-eye displays and optical pulse shaping.
LCoS is distinct from other LCD projector technologies which use transmissive LCD, allowing light to pass through the light processing unit(s). LCoS is more similar to DLP micro-mirror displays.
Technology
The Hughes liquid crystal light valve (LCLV) was designed to modulate a high-intensity light beam using a weaker light source, conceptually similar to how an amplifier increases the amplitude of an electrical signal; LCLV was named after the common name for the triode vacuum tube. A high-resolution, low-intensity light source (typically a CRT) was used to "write" an image in the photosensor layer, which is energized by a transparent indium tin oxide electrode, driven by an alternating current source at approximately 10 mV. A light-blocking layer prevents the low-intensity writing light from shining through the device; the photosensor and light-blocking layer together form a rectifying junction, producing a DC voltage bias across the liquid crystal layer, transferring the image to the reflecting side by changing the rotation of polarization in the twisted nematic liquid crystal. On the reflecting side, a high-intensity, polarized projection light source reflects selectively from the dielectric mirror based on the polarization within the liquid crystal being controlled by the photosensor. The dielectric mirror is formed by sputtering alternating layers of and , with the final layer etched to align the liquid crystal material. Later development of the LCLV used similar semiconductor materials arranged in the same basic structures.
The LCLV principle is carried forward in a digital LCoS display device, which features an array of pixels, each equivalent to the reflecting side of a single LCLV. These pixels on the LCoS device are driven directly by signals to modulate the intensity of reflected light, rather than a low intensity "writing light" source in the LCLV. For example, a chip with XGA resolution has an array of 1024×768 pixels, each with an independently addressable transistor. In the LCoS device, a complementary metal–oxide–semiconductor (CMOS) chip controls the voltage on square reflective aluminium electrodes buried just below the chip surface, each controlling one pixel. Typical chips are approximately square and approximately thick, with pixel pitch as small as . A common voltage for all the pixels is supplied by a transparent conductive layer made of indium tin oxide on the cover glass.
Displays
History
The history of LCoS projectors dates back to June 1972, when LCLV technology was first developed by scientists at Hughes Research Laboratories working on an internal research and development project. General Electric demonstrated a low-resolution LCoS display in the late 1970s. LCLV projectors were used primarily for military flight simulators due to their large and bulky size. A joint venture between Hughes Electronics and JVC (Hughes-JVC) was founded in 1992 to develop LCLV technology for commercial movie theaters under the branding ILA (Image Light Amplifier). One example was tall and weighed , using a 7 kW Xenon arc lamp.
In 1997, engineers at JVC developed the D-ILA (Direct-Drive Image Light Amplifier) from the Hughes LCLV, which led to smaller and more affordable digital LCoS projectors, using three-chip D-ILA devices. Although these were not as bright and had less resolution than the cinema ILA projectors, they were more portable, starting at .
The early LCoS projectors had their challenges. They suffered from a phenomenon called "image sticking," where the image would remain on the screen after it was supposed to be gone. This was due to the mirrors sticking in their positions, which resulted in ghosting on the screen. However, manufacturers continued to refine the technology, and today's LCoS projectors have largely overcome this issue.
Sony introduced its SXRD (Silicon X-tal Reflective Display) technology in 2004. SXRD was an evolution of LCoS technology that used even smaller pixels and a higher resolution, resulting in an even more accurate image. The SXRD technology was used in Sony's high-end home theater projectors, and it quickly gained a reputation for its exceptional picture quality.
JVC introduced an updated D-ILA technology in 2006, which eliminated the need for a polarizing filter, resulting in a brighter and more vibrant image. The D-ILA technology has since become a popular choice for home theater enthusiasts.
LCoS projectors have continued to evolve, with manufacturers introducing features like 4K resolution and HDR (High Dynamic Range) support. LCoS projectors are now available at a range of price points, from affordable models for home theater use to high-end professional models used in commercial installations.
Display system architectures
LCoS display technology is a type of microdisplay that has gained popularity due to its high image quality and ability to display high-resolution images. LCoS display systems typically consist of three main components: the LCoS panel, the light source, and the optical system.
The LCoS panel is the heart of the display system. It consists of an array of pixels that are arranged in a grid pattern. Each pixel is made up of a liquid crystal layer, a reflective layer, and a silicon substrate. The liquid crystal layer controls the polarization of light that passes through it, while the reflective layer reflects the light back towards the optical system. The silicon substrate is used to control the individual pixels and provides the necessary electronics to drive the LCoS panel.
The light source is used to provide the necessary illumination for the LCoS panel. The most common light source used in LCoS display systems is a high-intensity lamp. This lamp emits a broad spectrum of light that is filtered through a color wheel or other optical components to provide the necessary color gamut for the display system.
The optical system is responsible for directing the light from the light source onto the LCoS panel and projecting the resulting image onto a screen or other surface. The optical system consists of a number of lenses, mirrors, and other optical components that are carefully designed and calibrated to provide the necessary magnification, focus, and color correction for the display system.
Three-panel designs
The white light is separated into three components (red, green and blue) and then combined back after modulation by the 3 LCoS devices. The light is additionally polarized by beam splitters.
One-panel designs
Both Toshiba's and Intel's single-panel LCOS display program were discontinued in 2004 before any units reached final-stage prototype. There were single-panel LCoS displays in production: One by Philips and one by Microdisplay Corporation. Forth Dimension Displays continues to offer a Ferroelectric LCoS display technology (known as Time Domain Imaging) available in QXGA, SXGA and WXGA resolutions which today is used for high resolution near-eye applications such as Training & Simulation, structured light pattern projection for AOI. Citizen Finedevice (CFD) also continues to manufacturer single panel RGB displays using FLCoS technology (Ferroelectric Liquid Crystals). They manufacture displays in multiple resolutions and sizes that are currently used in pico-projectors, electronic viewfinders for high end digital cameras, and head-mounted displays.
Pico projectors, near-eye and head-mounted displays
Whilst initially developed for large-screen projectors, LCoS displays have found a consumer niche in the area of pico-projectors, where their small size and low power consumption are well-matched to the constraints of such devices.
LCoS devices are also used in near-eye applications such as electronic viewfinders for digital cameras, film cameras, and head-mounted displays (HMDs). These devices are made using ferroelectric liquid crystals (so the technology is named FLCoS) which are inherently faster than other types of liquid crystals to produce high quality images. Google's initial foray into wearable computing, Google glass, also uses a near-eye LCoS display.
At CES 2018, Hong Kong Applied Science and Technology Research Institute Company Limited (ASTRI) and OmniVision showcased a reference design for a wireless augmented reality headset that could achieve 60 degree field of view (FoV). It combined a single-chip 1080p LCOS display and image sensor from OmniVision with ASTRI's optics and electronics. The headset is said to be smaller and lighter than others because of its single-chip design with integrated driver and memory buffer.
Wavelength-selective switches
LCoS is particularly attractive as a switching mechanism in a wavelength-selective switch (WSS). LCoS-based WSS were initially developed by the Australian company Engana, now part of Finisar. The LCoS can be employed to control the phase of light at each pixel to produce beam steering, where the large number of pixels allows a near-continuous addressing capability. Typically, a large number of phase steps are used to create a highly efficient, low-insertion-loss switch. This simple optical design incorporates polarisation diversity, control of mode size and 4-f wavelength optical imaging in the dispersive axis of the LCoS, providing integrated switching and optical power control.
In operation, the light passes from a fibre array through the polarisation imaging optics which separates physically and aligns orthogonal polarisation states to be in the high efficiency s-polarisation state of the diffraction grating. The input light from a chosen fibre of the array is reflected from the imaging mirror and then angularly dispersed by the grating which is at near Littrow incidence, reflecting the light back to the imaging optics which directs each channel to a different portion of the LCoS. The path for each wavelength is then retraced upon reflection from the LCoS, with the beam-steering image applied on the LCOS directing the light to a particular port of the fibre array. As the wavelength channels are separated on the LCoS the switching of each wavelength is independent of all others and can be switched without interfering with the light on other channels. There are many different algorithms that can be implemented to achieve a given coupling between ports including less efficient "images" for attenuation or power splitting.
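Beam steering of this kind is commonly implemented by writing a sawtooth ("blazed") phase ramp, wrapped modulo 2π, across the pixels assigned to a wavelength channel; the steering angle then follows the grating equation with the ramp period acting as the grating period. The sketch below generates such a pattern; the pixel pitch and ramp period are assumed values chosen for illustration, not parameters of any particular device described in the text.

```python
import math

# One-dimensional blazed phase ramp for LCoS beam steering (illustrative values).
PIXEL_PITCH_UM = 8.0       # assumed pixel pitch in micrometres
WAVELENGTH_UM = 1.55       # telecom C-band wavelength
RAMP_PERIOD_PIXELS = 16    # assumed pixels per 2*pi of phase ramp

def blazed_phase_pattern(num_pixels, period_pixels):
    """Phase in radians, wrapped to [0, 2*pi), for each pixel of a sawtooth ramp."""
    return [2 * math.pi * (i % period_pixels) / period_pixels for i in range(num_pixels)]

def steering_angle_deg(wavelength_um, period_pixels, pitch_um):
    """First-order diffraction angle from the grating equation sin(theta) = lambda / period."""
    return math.degrees(math.asin(wavelength_um / (period_pixels * pitch_um)))

if __name__ == "__main__":
    pattern = blazed_phase_pattern(num_pixels=64, period_pixels=RAMP_PERIOD_PIXELS)
    print([round(p, 3) for p in pattern[:4]])
    print(f"{steering_angle_deg(WAVELENGTH_UM, RAMP_PERIOD_PIXELS, PIXEL_PITCH_UM):.3f} degrees")
```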
WSS based on MEMS and/or liquid crystal technologies allocate a single switching element (pixel) to each channel which means the bandwidth and centre frequency of each channel are fixed at the time of manufacture and cannot be changed in service. In addition, many designs of first-generation WSS (particularly those based on MEMs technology) show pronounced dips in the transmission spectrum between each channel due to the limited spectral ‘fill factor’ inherent in these designs. This prevents the simple concatenation of adjacent channels to create a single broader channel.
LCoS-based WSS, however, permit dynamic control of channel centre frequency and bandwidth through on-the-fly modification of the pixel arrays via embedded software. The degree of control of channel parameters can be very fine-grained, with independent control of the centre frequency and either upper- or lower-band-edge of a channel with better than 1 GHz resolution possible. This is advantageous from a manufacturability perspective, with different channel plans being able to be created from a single platform and even different operating bands (such as C and L) being able to use an identical switch matrix. Additionally, it is possible to take advantage of this ability to reconfigure channels while the device is operating. Products have been introduced allowing switching between 50 GHz channels and 100 GHz channels, or a mix of channels, without introducing any errors or "hits" to the existing traffic. More recently, this has been extended to support the whole concept of Flexible or Elastic networks under ITU G.654.2 through products such as Finisar's Flexgrid™ WSS.
Other LCoS applications
Optical pulse shaping
The ability of an LCoS-based WSS to independently control both the amplitude and phase of the transmitted signal leads to the more general ability to manipulate the amplitude and/or phase of an optical pulse through a process known as Fourier-domain pulse shaping. This process requires full characterisation of the input pulse in both the time and spectral domains.
As an example, an LCoS-based Programmable Optical Processor (POP) has been used to broaden a mode-locked laser output into a 20 nm supercontinuum source whilst a second such device was used to compress the output to 400 fs, transform-limited pulses. Passive mode-locking of fiber lasers has been demonstrated at high repetition rates, but inclusion of an LCoS-based POP allowed the phase content of the spectrum to be changed to flip the pulse train of a passively mode-locked laser from bright to dark pulses. A similar approach uses spectral shaping of optical frequency combs to create multiple pulse trains. For example, a 10 GHz optical frequency comb was shaped by the POP to generate dark parabolic pulses and Gaussian pulses, at 1540 nm and 1560 nm, respectively.
Light structuring
Structured light using a fast ferroelectric LCoS is used in 3D-superresolution microscopy techniques and in fringe projection for 3D-automated optical inspection.
Modal switching in space division multiplexed optical communications systems
One of the interesting applications of LCoS is the ability to transform between modes of few-moded optical fibers which have been proposed as the basis of higher capacity transmission systems in the future. Similarly LCoS has been used to steer light into selected cores of multicore fiber transmission systems, again as a type of Space Division Multiplexing.
Tunable lasers
LCoS has been used as a filtering technique, and hence a tuning mechanism, for both semiconductor diode and fiber lasers.
See also
References
External links
Biever, Celeste. 'Intel inside' comes to flat panel TVs (January 9, 2004 – No longer planned for development) New Scientist
Everything You Need to Know About TV Technologies from Hardware Secrets
LCoS projectors at Projectors Pick
Projectors
Display technology
Silicon
Liquid crystal displays | Liquid crystal on silicon | [
"Engineering"
] | 2,961 | [
"Electronic engineering",
"Display technology"
] |
471,487 | https://en.wikipedia.org/wiki/Cosmic%20distance%20ladder | The cosmic distance ladder (also known as the extragalactic distance scale) is the succession of methods by which astronomers determine the distances to celestial objects. A direct distance measurement of an astronomical object is possible only for those objects that are "close enough" (within about a thousand parsecs) to Earth. The techniques for determining distances to more distant objects are all based on various measured correlations between methods that work at close distances and methods that work at larger distances. Several methods rely on a standard candle, which is an astronomical object that has a known luminosity.
The ladder analogy arises because no single technique can measure distances at all ranges encountered in astronomy. Instead, one method can be used to measure nearby distances, a second can be used to measure nearby to intermediate distances, and so on. Each rung of the ladder provides information that can be used to determine the distances at the next higher rung.
Direct measurement
At the base of the ladder are fundamental distance measurements, in which distances are determined directly, with no physical assumptions about the nature of the object in question. The precise measurement of stellar positions is part of the discipline of astrometry. Early fundamental distances—such as the radii of the earth, moon and sun, and the distances between them—were well estimated with very low technology by the ancient Greeks.
Astronomical unit
Direct distance measurements are based upon the astronomical unit (AU), which is defined as the mean distance between the Earth and the Sun.
Kepler's laws provide precise ratios of the orbit sizes of objects orbiting the Sun, but provide no measurement of the overall scale of the orbit system. Radar is used to measure the distance between the orbits of the Earth and of a second body. From that measurement and the ratio of the two orbit sizes, the size of Earth's orbit is calculated. The Earth's orbit is known with an absolute precision of a few meters and a relative precision of a few parts in 100 billion ().
Historically, observations of Venus transits were crucial in determining the AU; in the first half of the 20th century, observations of asteroids were also important. Presently the orbit of Earth is determined with high precision using radar measurements of distances to Venus and other nearby planets and asteroids, and by tracking interplanetary spacecraft in their orbits around the Sun through the Solar System.
Parallax
Standard candles
Almost all astronomical objects used as physical distance indicators belong to a class that has a known brightness. By comparing this known luminosity to an object's observed brightness, the distance to the object can be computed using the inverse-square law. These objects of known brightness are termed standard candles, coined by Henrietta Swan Leavitt.
The brightness of an object can be expressed in terms of its absolute magnitude. This quantity is derived from the logarithm of its luminosity as seen from a distance of 10 parsecs. The apparent magnitude, the magnitude as seen by the observer (an instrument called a bolometer is used), can be measured and used with the absolute magnitude to calculate the distance d to the object in parsecs as follows:
5 log10(d) = m − M + 5
or
d = 10^((m − M + 5)/5)
where m is the apparent magnitude, and M the absolute magnitude. For this to be accurate, both magnitudes must be in the same frequency band and there can be no relative motion in the radial direction.
Some means of correcting for interstellar extinction, which also makes objects appear fainter and more red, is needed, especially if the object lies within a dusty or gaseous region. The difference between an object's absolute and apparent magnitudes is called its distance modulus, and astronomical distances, especially intergalactic ones, are sometimes tabulated in this way.
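A small worked sketch of the distance-modulus relation given above (the magnitudes in the example are arbitrary):

```python
# Distance from apparent magnitude m and absolute magnitude M via the distance
# modulus: m - M = 5*log10(d) - 5, so d = 10**((m - M + 5) / 5) parsecs.
def distance_parsecs(apparent_mag: float, absolute_mag: float) -> float:
    return 10 ** ((apparent_mag - absolute_mag + 5) / 5)

def distance_modulus(apparent_mag: float, absolute_mag: float) -> float:
    return apparent_mag - absolute_mag

if __name__ == "__main__":
    # Arbitrary example: m = 15, M = -5 gives a modulus of 20, i.e. 10**5 pc.
    print(distance_parsecs(15.0, -5.0))   # 100000.0 parsecs
    print(distance_modulus(15.0, -5.0))   # 20.0
```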
Problems
Two problems exist for any class of standard candle. The principal one is calibration, that is the determination of exactly what the absolute magnitude of the candle is. This includes defining the class well enough that members can be recognized, and finding enough members of that class with well-known distances to allow their true absolute magnitude to be determined with enough accuracy. The second problem lies in recognizing members of the class, and not mistakenly using a standard candle calibration on an object which does not belong to the class. At extreme distances, which is where one most wishes to use a distance indicator, this recognition problem can be quite serious.
A significant issue with standard candles is the recurring question of how standard they are. For example, all observations seem to indicate that Type Ia supernovae that are of known distance have the same brightness, corrected by the shape of the light curve. The basis for this closeness in brightness is discussed below; however, the possibility exists that the distant Type Ia supernovae have different properties than nearby Type Ia supernovae. The use of Type Ia supernovae is crucial in determining the correct cosmological model. If indeed the properties of Type Ia supernovae are different at large distances, i.e. if the extrapolation of their calibration to arbitrary distances is not valid, ignoring this variation can dangerously bias the reconstruction of the cosmological parameters, in particular the reconstruction of the matter density parameter.
That this is not merely a philosophical issue can be seen from the history of distance measurements using Cepheid variables. In the 1950s, Walter Baade discovered that the nearby Cepheid variables used to calibrate the standard candle were of a different type than the ones used to measure distances to nearby galaxies. The nearby Cepheid variables were population I stars with much higher metal content than the distant population II stars. As a result, the population II stars were actually much brighter than believed, and when corrected, this had the effect of doubling the estimates of distances to the globular clusters, the nearby galaxies, and the diameter of the Milky Way.
Most recently, kilonovae have been proposed as another type of standard candle. "Since kilonovae explosions are spherical, astronomers could compare the apparent size of a supernova explosion with its actual size as seen by the gas motion, and thus measure the rate of cosmic expansion at different distances."
Standard siren
Gravitational waves originating from the inspiral phase of compact binary systems, such as neutron stars or black holes, have the useful property that energy emitted as gravitational radiation comes exclusively from the orbital energy of the pair, and the resultant shrinking of their orbits is directly observable as an increase in the frequency of the emitted gravitational waves. To leading order, the rate of change of frequency is given by
df/dt = (96/5) π^(8/3) (Gℳ/c^3)^(5/3) f^(11/3),
where G is the gravitational constant, c is the speed of light, and ℳ is a single (therefore computable) number called the chirp mass of the system, a combination of the masses of the two objects:
ℳ = (m1 m2)^(3/5) / (m1 + m2)^(1/5)
By observing the waveform, the chirp mass can be computed and thence the power (rate of energy emission) of the gravitational waves. Thus, such a gravitational wave source is a standard siren of known loudness.
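The chirp mass referred to above is the specific combination of the component masses given in the formula; the short sketch below computes it (the example masses are arbitrary, expressed in solar masses).

```python
# Chirp mass of a compact binary: M_chirp = (m1*m2)**(3/5) / (m1 + m2)**(1/5).
def chirp_mass(m1: float, m2: float) -> float:
    """Chirp mass in the same units as m1 and m2 (e.g. solar masses)."""
    return (m1 * m2) ** (3 / 5) / (m1 + m2) ** (1 / 5)

if __name__ == "__main__":
    # Arbitrary example: an equal-mass 1.4 + 1.4 solar-mass neutron-star binary.
    print(chirp_mass(1.4, 1.4))   # ~1.22 solar masses
```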
Just as with standard candles, given the emitted and received amplitudes, the inverse-square law determines the distance to the source. There are some differences with standard candles, however. Gravitational waves are not emitted isotropically, but measuring the polarisation of the wave provides enough information to determine the angle of emission. Gravitational wave detectors also have anisotropic antenna patterns, so the position of the source on the sky relative to the detectors is needed to determine the angle of reception.
Generally, if a wave is detected by a network of three detectors at different locations, the network will measure enough information to make these corrections and obtain the distance. Also unlike standard candles, gravitational waves need no calibration against other distance measures. The measurement of distance does of course require the calibration of the gravitational wave detectors, but then the distance is fundamentally given as a multiple of the wavelength of the laser light being used in the gravitational wave interferometer.
There are other considerations that limit the accuracy of this distance, besides detector calibration. Fortunately, gravitational waves are not subject to extinction due to an intervening absorbing medium. But they are subject to gravitational lensing, in the same way as light. If a signal is strongly lensed, then it might be received as multiple events, separated in time, the analogue of multiple images of a quasar, for example. Less easy to discern and control for is the effect of weak lensing, where the signal's path through space is affected by many small magnification and demagnification events. This will be important for signals originating at cosmological redshifts greater than 1. It is difficult for detector networks to measure the polarization of a signal accurately if the binary system is observed nearly face-on. Such signals suffer significantly larger errors in the distance measurement. Unfortunately, binaries radiate most strongly perpendicular to the orbital plane, so face-on signals are intrinsically stronger and the most commonly observed.
If the binary consists of a pair of neutron stars, their merger will be accompanied by a kilonova/hypernova explosion that may allow the position to be accurately identified by electromagnetic telescopes. In such cases, the redshift of the host galaxy allows a determination of the Hubble constant H0. This was the case for GW170817, which was used to make the first such measurement. Even if no electromagnetic counterpart can be identified for an ensemble of signals, it is possible to use a statistical method to infer the value of H0.
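To illustrate the arithmetic of such a measurement (a rough sketch; the distance and redshift below are assumed round numbers of the same order as the GW170817 event, not the published values):

```python
# Hubble constant from a single standard siren with an electromagnetic counterpart
c_km_s = 299792.458   # speed of light in km/s

d_mpc = 40.0          # luminosity distance inferred from the gravitational-wave signal (assumed value, Mpc)
z_host = 0.01         # recession redshift of the identified host galaxy (assumed value)

v_rec = c_km_s * z_host   # low-redshift approximation v ~ c*z
H0 = v_rec / d_mpc        # in km/s/Mpc
print(f"H0 ~ {H0:.0f} km/s/Mpc")
```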
Standard ruler
Another class of physical distance indicator is the standard ruler. In 2008, galaxy diameters were proposed as a possible standard ruler for cosmological parameter determination. More recently, the physical scale imprinted by baryon acoustic oscillations (BAO) in the early universe has been used.
In the early universe (before recombination) the baryons and photons scatter off each other, and form a tightly coupled fluid that can support sound waves. The waves are sourced by primordial density perturbations, and travel at a speed that can be predicted from the baryon density and other cosmological parameters.
The total distance that these sound waves can travel before recombination determines a fixed scale, which simply expands with the universe after recombination. BAO therefore provide a standard ruler that can be measured in galaxy surveys from the effect of baryons on the clustering of galaxies. The method requires an extensive galaxy survey in order to make this scale visible, but has been measured with percent-level precision (see baryon acoustic oscillations). The scale does depend on cosmological parameters like the baryon and matter densities, and the number of neutrinos, so distances based on BAO are more dependent on cosmological model than those based on local measurements.
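A toy example of how the ruler is used (a minimal sketch; the sound-horizon value and the measured angle below are assumed, illustrative numbers, and a real analysis fits the full clustering signal rather than a single angle):

```python
import math

r_d = 147.0         # comoving sound horizon at the drag epoch, in Mpc (assumed fiducial value)
theta_deg = 4.0     # angular scale of the BAO feature measured in a galaxy survey (assumed value)

# The angle subtended by the ruler gives the comoving angular-diameter distance
# to the survey's effective redshift.
d_m = r_d / math.radians(theta_deg)
print(f"comoving distance ~ {d_m:.0f} Mpc")
```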
Light echoes can also be used as standard rulers, although it is challenging to correctly measure the source geometry.
Galactic distance indicators
With few exceptions, distances based on direct measurements are available only out to about a thousand parsecs, which is a modest portion of our own Galaxy. For distances beyond that, measures depend upon physical assumptions, that is, the assertion that one recognizes the object in question, and the class of objects is homogeneous enough that its members can be used for meaningful estimation of distance.
Physical distance indicators, used on progressively larger distance scales, include:
Dynamical parallax uses the orbital parameters of visual binaries to measure the mass of the system, and hence uses the mass–luminosity relation to determine the luminosity
Eclipsing binaries — In the last decade, measurement of eclipsing binaries' fundamental parameters has become possible with 8-meter class telescopes. This makes it feasible to use them as indicators of distance. Recently, they have been used to give direct distance estimates to the Large Magellanic Cloud (LMC), Small Magellanic Cloud (SMC), Andromeda Galaxy and Triangulum Galaxy. Eclipsing binaries offer a direct method to gauge the distance to galaxies at an improved 5% level of accuracy, which is feasible with current technology out to a distance of around 3 Mpc (3 million parsecs).
RR Lyrae variables — used for measuring distances within the galaxy and in nearby globular clusters.
The following four indicators all use stars in the old stellar populations (Population II):
Tip of the red-giant branch (TRGB) distance indicator.
Planetary nebula luminosity function (PNLF)
Globular cluster luminosity function (GCLF)
Surface brightness fluctuation (SBF)
In galactic astronomy, X-ray bursts (thermonuclear flashes on the surface of a neutron star) are used as standard candles. Observations of X-ray bursts sometimes show X-ray spectra indicating radius expansion. Therefore, the X-ray flux at the peak of the burst should correspond to the Eddington luminosity, which can be calculated once the mass of the neutron star is known (1.5 solar masses is a commonly used assumption). This method allows distance determination of some low-mass X-ray binaries, which are very faint in the optical, making their distances extremely difficult to determine otherwise (a numerical sketch of this calculation is given just after this list).
Interstellar masers can be used to derive distances to galactic and some extragalactic objects that have maser emission.
Cepheids and novae
The Tully–Fisher relation
The Faber–Jackson relation
Type Ia supernovae, which have a very well-determined maximum absolute magnitude as a function of the shape of their light curve and are useful in determining extragalactic distances up to a few hundred Mpc. A notable exception is SN 2003fg, the "Champagne Supernova", a Type Ia supernova of unusual nature.
Redshifts and Hubble's law
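As mentioned in the X-ray burst item above, that method reduces to an inverse-square-law calculation once the peak flux is equated with the Eddington luminosity. A minimal sketch (the neutron-star mass and the peak flux below are assumed, illustrative values):

```python
import math

M_NS = 1.5                 # assumed neutron-star mass in solar masses
L_edd = 1.26e38 * M_NS     # Eddington luminosity in erg/s for hydrogen-rich material
F_peak = 1.0e-7            # observed peak burst flux in erg/s/cm^2 (assumed value)

d_cm = math.sqrt(L_edd / (4.0 * math.pi * F_peak))   # inverse-square law
print(f"distance ~ {d_cm / 3.086e21:.1f} kpc")
```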
Main sequence fitting
When the absolute magnitude for a group of stars is plotted against the spectral classification of the star, in a Hertzsprung–Russell diagram, evolutionary patterns are found that relate to the mass, age and composition of the star. In particular, during their hydrogen burning period, stars lie along a curve in the diagram called the main sequence. By measuring these properties from a star's spectrum, the position of a main sequence star on the H–R diagram can be determined, and thereby the star's absolute magnitude estimated. A comparison of this value with the apparent magnitude allows the approximate distance to be determined, after correcting for interstellar extinction of the luminosity because of gas and dust.
In a gravitationally-bound star cluster such as the Hyades, the stars formed at approximately the same age and lie at the same distance. This allows relatively accurate main sequence fitting, providing both age and distance determination.
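A minimal sketch of the final step of main sequence fitting (all three input values below are assumed placeholders): the absolute magnitude read off the H–R diagram is combined with the apparent magnitude and an extinction estimate through the distance modulus.

```python
m_apparent = 12.3    # observed apparent magnitude of the star (assumed value)
M_absolute = 4.8     # absolute magnitude implied by its main-sequence position (assumed value)
A_extinction = 0.5   # interstellar extinction along the line of sight, in magnitudes (assumed value)

# Distance modulus: m - M = 5*log10(d / 10 pc) + A
d_pc = 10 ** ((m_apparent - M_absolute - A_extinction + 5) / 5)
print(f"distance ~ {d_pc:.0f} pc")
```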
Extragalactic distance scale
The extragalactic distance scale is a series of techniques used today by astronomers to determine the distances of cosmological bodies beyond our own galaxy, which are not easily obtained with traditional methods. Some procedures use properties of these objects, such as stars, globular clusters, nebulae, and galaxies as a whole. Other methods are based more on the statistics and probabilities of things such as entire galaxy clusters.
Wilson–Bappu effect
Discovered in 1956 by Olin Wilson and M.K. Vainu Bappu, the Wilson–Bappu effect uses the effect known as spectroscopic parallax. Many stars have features in their spectra, such as the calcium K-line, that indicate their absolute magnitude. The distance to the star can then be calculated from its apparent magnitude using the distance modulus.
There are major limitations to this method for finding stellar distances. The calibration of the spectral line strengths has limited accuracy and it requires a correction for interstellar extinction. Though in theory this method has the ability to provide reliable distance calculations to stars up to 7 megaparsecs (Mpc), it is generally only used for stars at hundreds of kiloparsecs (kpc).
Classical Cepheids
Beyond the reach of the Wilson–Bappu effect, the next method relies on the period-luminosity relation of classical Cepheid variable stars, which links a Cepheid's pulsation period to its absolute magnitude and can therefore be used, together with the apparent magnitude, to calculate the distance to Galactic and extragalactic classical Cepheids.
Several problems complicate the use of Cepheids as standard candles and are actively debated, chief among them are: the nature and linearity of the period-luminosity relation in various passbands and the impact of metallicity on both the zero-point and slope of those relations, and the effects of photometric contamination (blending) and a changing (typically unknown) extinction law on Cepheid distances.
These unresolved matters have resulted in cited values for the Hubble constant ranging between 60 km/s/Mpc and 80 km/s/Mpc. Resolving this discrepancy is one of the foremost problems in astronomy since some cosmological parameters of the Universe may be constrained significantly better by supplying a precise value of the Hubble constant.
Cepheid variable stars were the key instrument in Edwin Hubble's 1923 conclusion that M31 (Andromeda) was an external galaxy, as opposed to a smaller nebula within the Milky Way. He calculated the distance of M31 to be 285 kpc; today's accepted value is 770 kpc.
NGC 3370, a spiral galaxy in the constellation Leo, contains the farthest Cepheids yet found, at a distance of 29 Mpc. Cepheid variable stars are by no means perfect distance markers: at nearby galaxies they have an error of about 7%, rising to as much as 15% for the most distant.
Supernovae
There are several different methods by which supernovae can be used to measure extragalactic distances.
Measuring a supernova's photosphere
We can assume that a supernova expands in a spherically symmetric manner. If the supernova is close enough that we can measure the angular extent, θ(t), of its photosphere, we can use the equation

$$\omega = \frac{\Delta\theta}{\Delta t},$$

where ω is the angular velocity and θ is the angular extent. To obtain an accurate measurement, it is necessary to make two observations separated by a time Δt. Subsequently, we can use

$$d = \frac{V_{ej}}{\omega},$$

where d is the distance to the supernova and Vej is the radial velocity of the supernova's ejecta (it can be assumed that Vej equals Vθ if the expansion is spherically symmetric).
This method works only if the supernova is close enough for its photosphere to be measured accurately. Moreover, the expanding shell of gas is in fact neither perfectly spherical nor a perfect blackbody, and interstellar extinction can hinder accurate measurements of the photosphere. This problem is further exacerbated by core-collapse supernovae. All of these factors contribute to a distance error of up to 25%.
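A minimal numerical sketch of the two relations above (every input value is assumed, for illustration only):

```python
import math

theta1 = 5.0e-10     # angular radius of the photosphere at the first epoch, in radians (assumed)
theta2 = 8.0e-10     # angular radius at the second epoch, in radians (assumed)
dt = 30 * 86400.0    # time between the two observations: 30 days, in seconds

v_ej = 1.0e7         # ejecta velocity of 10,000 km/s from spectral lines, in m/s (assumed)

omega = (theta2 - theta1) / dt   # angular expansion rate
d = v_ej / omega                 # distance, assuming spherical symmetry
print(f"distance ~ {d / 3.086e22:.1f} Mpc")
```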
Type Ia light curves
Type Ia supernovae are some of the best ways to determine extragalactic distances. Type Ia events occur when a white dwarf star in a binary system begins to accrete matter from its companion star. As the white dwarf gains matter, it eventually approaches the Chandrasekhar limit of about 1.4 solar masses.
Once this limit is reached, the star becomes unstable and undergoes a runaway nuclear fusion reaction. Because all Type Ia supernovae explode at about the same mass, their absolute magnitudes are all about the same. This makes them very useful as standard candles. All Type Ia supernovae have a standard blue and visual peak magnitude of approximately −19.3.
Therefore, when observing a Type Ia supernova, if it is possible to determine what its peak magnitude was, then its distance can be calculated. It is not intrinsically necessary to capture the supernova directly at its peak magnitude; using the multicolor light curve shape method (MLCS), the shape of the light curve (taken at any reasonable time after the initial explosion) is compared to a family of parameterized curves that will determine the absolute magnitude at the maximum brightness. This method also takes into account interstellar extinction/dimming from dust and gas.
Similarly, the stretch method fits the particular supernova's magnitude light curve to a template light curve. This template, as opposed to being several light curves at different wavelengths (as in MLCS), is just a single light curve that has been stretched (or compressed) in time. By using this stretch factor, the peak magnitude can be determined.
Using Type Ia supernovae is one of the most accurate methods, particularly since supernova explosions can be visible at great distances (their luminosities rival that of the galaxy in which they are situated), much farther than Cepheid Variables (500 times farther). Much time has been devoted to the refining of this method. The current uncertainty approaches a mere 5%, corresponding to an uncertainty of just 0.1 magnitudes.
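As a minimal sketch of the final conversion (the corrected peak apparent magnitude below is an assumed example value; −19.3 is the standard-candle absolute magnitude quoted above):

```python
m_peak = 16.7     # light-curve-shape-corrected apparent peak magnitude (assumed value)
M_peak = -19.3    # standard-candle absolute peak magnitude of a Type Ia supernova

# Distance modulus: m - M = 5*log10(d / 10 pc)
d_pc = 10 ** ((m_peak - M_peak + 5) / 5)
print(f"distance ~ {d_pc / 1.0e6:.0f} Mpc")
```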
Novae in distance determinations
Novae can be used in much the same way as supernovae to derive extragalactic distances. There is a direct relation between a nova's maximum magnitude and the time for its visible light to decline by two magnitudes: the peak absolute magnitude is a linear function of the logarithm of the nova's rate of decline, where the rate of decline describes the average fading over the first 2 magnitudes, and faster-declining novae are intrinsically brighter at maximum.
After novae fade, they are about as bright as the most luminous Cepheid variable stars; therefore, both techniques have about the same maximum distance: ~20 Mpc. The error in this method produces an uncertainty in magnitude of about ±0.4.
Globular cluster luminosity function
Based on the method of comparing the luminosities of globular clusters (located in galactic halos) from distant galaxies to that of the Virgo Cluster, the globular cluster luminosity function carries an uncertainty of distance of about 20% (or 0.4 magnitudes).
US astronomer William Alvin Baum first attempted to use globular clusters to measure distant elliptical galaxies. He compared the brightest globular clusters in the Virgo A galaxy with those in Andromeda, assuming the luminosities of the clusters were the same in both. Knowing the distance to Andromeda, Baum assumed a direct correlation and estimated Virgo A's distance.
Baum used just a single globular cluster, but individual formations are often poor standard candles. Canadian astronomer René Racine assumed the use of the globular cluster luminosity function (GCLF) would lead to a better approximation. The number of globular clusters as a function of magnitude is given by

$$\Phi(m) = A\,e^{-(m - m_0)^2 / 2\sigma^2},$$

where m0 is the turnover magnitude, M0 is the magnitude of the Virgo cluster, and σ is the dispersion, ~1.4 mag.
It is assumed that globular clusters all have roughly the same luminosities within the universe. There is no universal globular cluster luminosity function that applies to all galaxies.
Planetary nebula luminosity function
Like the GCLF method, a similar numerical analysis can be used for planetary nebulae within far-off galaxies. The planetary nebula luminosity function (PNLF) was first proposed in the late 1970s by Holland Cole and David Jenner. They suggested that planetary nebulae might all have a similar maximum intrinsic brightness, now calculated to be M = −4.53. This would therefore make them potential standard candles for determining extragalactic distances.
Astronomer George Howard Jacoby and his colleagues later proposed that the PNLF function equaled

$$N(M) \propto e^{0.307 M}\left(1 - e^{3(M^{*} - M)}\right),$$

where N(M) is the number of planetary nebulae having absolute magnitude M, and M* is the absolute magnitude of the brightest nebula.
Surface brightness fluctuation method
The following methods deal with the overall inherent properties of galaxies. These methods, though with varying error percentages, have the ability to make distance estimates beyond 100 Mpc, though they are usually applied more locally.
The surface brightness fluctuation (SBF) method takes advantage of the use of CCD cameras on telescopes. Because of spatial fluctuations in a galaxy's surface brightness, some pixels on these cameras will pick up more stars than others. As distance increases, the picture will become increasingly smoother. Analysis of this describes a magnitude of the pixel-to-pixel variation, which is directly related to a galaxy's distance.
Sigma-D relation
The Sigma-D relation (or Σ-D relation), used in elliptical galaxies, relates the angular diameter (D) of the galaxy to its velocity dispersion. It is important to describe exactly what D represents, in order to understand this method. It is, more precisely, the galaxy's angular diameter out to the surface brightness level of 20.75 B-mag arcsec−2. This surface brightness is independent of the galaxy's actual distance from us. Instead, D is inversely proportional to the galaxy's distance, represented as d. Thus, this relation does not employ standard candles. Rather, D provides a standard ruler. This relation between D and Σ is

$$\log D \approx 1.333\,\log \Sigma + C,$$
where C is a constant which depends on the distance to the galaxy clusters.
This method has the potential to become one of the strongest methods of galactic distance calculators, perhaps exceeding the range of even the Tully–Fisher method. As of today, however, elliptical galaxies are not bright enough to provide a calibration for this method through the use of techniques such as Cepheids. Instead, calibration is done using more crude methods.
Overlap and scaling
A succession of distance indicators, which is the distance ladder, is needed for determining distances to other galaxies. The reason is that objects bright enough to be recognized and measured at such distances are so rare that few or none are present nearby, so there are too few examples close enough with reliable trigonometric parallax to calibrate the indicator. For example, Cepheid variables, one of the best indicators for nearby spiral galaxies, cannot yet be satisfactorily calibrated by parallax alone, though the Gaia space mission can now weigh in on that specific problem. The situation is further complicated by the fact that different stellar populations generally do not have all types of stars in them.
Cepheids in particular are massive stars, with short lifetimes, so they will only be found in places where stars have very recently been formed. Consequently, because elliptical galaxies usually have long ceased to have large-scale star formation, they will not have Cepheids. Instead, distance indicators whose origins are in an older stellar population (like novae and RR Lyrae variables) must be used. RR Lyrae variables are less luminous than Cepheids, and novae are unpredictable and an intensive monitoring program—and luck during that program—is needed to gather enough novae in the target galaxy for a good distance estimate.
Because the more distant steps of the cosmic distance ladder depend upon the nearer ones, the more distant steps include the effects of errors in the nearer steps, both systematic and statistical ones. The result of these propagating errors means that distances in astronomy are rarely known to the same level of precision as measurements in the other sciences, and that the precision necessarily is poorer for more distant types of object.
Another concern, especially for the very brightest standard candles, is their "standardness": how homogeneous the objects are in their true absolute magnitude. For some of these different standard candles, the homogeneity is based on theories about the formation and evolution of stars and galaxies, and is thus also subject to uncertainties in those aspects. For the most luminous of distance indicators, the Type Ia supernovae, this homogeneity is known to be poor. However, no other class of object is bright enough to be detected at such large distances, so the class is useful simply because there is no real alternative.
The observational result of Hubble's law, the proportional relationship between distance and the speed with which a galaxy is moving away from us, usually referred to as redshift, is a product of the cosmic distance ladder. Edwin Hubble observed that fainter galaxies are more redshifted. Finding the value of the Hubble constant was the result of decades of work by many astronomers, both in amassing the measurements of galaxy redshifts and in calibrating the steps of the distance ladder. Hubble's law is the primary means we have for estimating the distances of quasars and distant galaxies in which individual distance indicators cannot be seen.
See also
Araucaria Project
Distance measure
Orders of magnitude (length)#Astronomical
Standard ruler
Footnotes
References
Bibliography
Measuring the Universe: The Cosmological Distance Ladder, Stephen Webb, 2001.
"The Globular Cluster Luminosity Function as a Distance Indicator: Dynamical Effects", Ostriker and Gnedin, The Astrophysical Journal, May 5, 1997.
An Introduction to Distance Measurement in Astronomy, Richard de Grijs, Chichester: John Wiley & Sons, 2011.
External links
The ABC's of distances (UCLA)
The Extragalactic Distance Scale by Bill Keel
The Hubble Space Telescope Key Project on the Extragalactic Distance Scale
The Hubble Constant, a historical discussion
NASA Cosmic Distance Scale
PNLF information database
The Astrophysical Journal
Astrometry
Physical cosmological concepts
Standard candles
Length, distance, or range measuring devices
Concepts in astronomy | Cosmic distance ladder | [
"Physics",
"Astronomy"
] | 5,964 | [
"Physical cosmological concepts",
"Standard candles",
"Concepts in astrophysics",
"Concepts in astronomy",
"Astrometry",
"Astrophysics",
"Astronomical sub-disciplines"
] |
12,840,716 | https://en.wikipedia.org/wiki/Lanthanum%20strontium%20manganite | Lanthanum strontium manganite (LSM or LSMO) is an oxide ceramic material with the general formula La1−xSrxMnO3, where x describes the doping level.
It has a perovskite-based crystal structure, which has the general form ABO3. In the crystal, the 'A' sites are occupied by lanthanum and strontium atoms, and the 'B' sites are occupied by the smaller manganese atoms. In other words, the material consists of lanthanum manganite with some of the lanthanum atoms substitutionally doped with strontium atoms. The strontium (valence 2+) doping on lanthanum (valence 3+) introduces extra holes in the valence band and thus increases electronic conductivity.
Depending on the x value in La1−xSrxMnO3, the unit cell of LSMO can be rhombohedral, cubic, or hexagonal. This change in the unit cell is explained on the basis of the Goldschmidt tolerance factor for perovskites. The change in the oxidation state of the Mn cation in LSMO can be readily observed through the position of the XPS peak for the Mn 2p3/2 orbital, as can the interesting ferromagnetic ordering obtained when x = 0.5 and 0.7 in La1−xSrxMnO3.
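To illustrate how the Goldschmidt tolerance factor is evaluated for such a composition (a rough sketch: the ionic radii below are approximate rounded literature values, the Mn radius is a crude Mn3+/Mn4+ average, and the linear averaging of the A-site radius is an assumption of this sketch, so the numbers should not be read as actual phase boundaries of LSMO):

```python
import math

# Approximate ionic radii in angstroms (assumed, rounded values)
r_La = 1.36   # La3+ (12-coordinate)
r_Sr = 1.44   # Sr2+ (12-coordinate)
r_Mn = 0.60   # nominal average Mn3+/Mn4+ radius on the B site (assumed)
r_O = 1.40    # O2-

def tolerance_factor(x):
    """Goldschmidt tolerance factor t = (r_A + r_O) / (sqrt(2) * (r_B + r_O))."""
    r_A = (1 - x) * r_La + x * r_Sr   # composition-weighted A-site radius
    return (r_A + r_O) / (math.sqrt(2) * (r_Mn + r_O))

for x in (0.0, 0.3, 0.5):
    print(f"x = {x:.1f}: t = {tolerance_factor(x):.3f}")
```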
LSM has a rich electronic phase diagram, including a doping-dependent metal-insulator transition, paramagnetism and ferromagnetism. The existence of a Griffiths phase has been reported as well.
LSM is black in color and has a density of approximately 6.5 g/cm3. The actual density will vary depending on the processing method and actual stoichiometry. LSM is primarily an electronic conductor, with transference number close to 1.
This material is commonly used as a cathode material in commercially produced solid oxide fuel cells (SOFCs) because it has a high electrical conductivity at higher temperatures, and its thermal expansion coefficient is well matched with yttria-stabilized zirconia (YSZ), a common material for SOFC electrolytes.
In research, LSM is one of the perovskite manganites that show the colossal magnetoresistance (CMR) effect, and is also an observed half-metal for compositions around x=0.3.
LSM behaves like a half-metal, suggesting its possible use in spintronics. It displays a colossal magnetoresistance effect. Above its Curie temperature (about 350 K) Jahn-Teller polarons are formed; the material's ability to conduct electricity depends on the presence of the polarons.
See also
Solid oxide fuel cell
Magnetic tunnel junction
References
Lanthanum compounds
Strontium compounds
Manganates
Ceramic materials
Oxides
Perovskites | Lanthanum strontium manganite | [
"Chemistry",
"Engineering"
] | 604 | [
"Manganates",
"Oxides",
"Salts",
"Ceramic materials",
"Ceramic engineering"
] |
12,842,385 | https://en.wikipedia.org/wiki/Respimat | Respimat, also known as Respimat Soft Mist Inhaler, is a drug delivery device used for the treatment of asthma, chronic obstructive pulmonary disease (COPD), and other respiratory conditions.
Its developer, Boehringer Ingelheim, currently has it approved in the U.S. with a variety of their products, such as tiotropium and ipratropium/salbutamol. According to the manufacturer, the reusability of the inhaler reduces its carbon footprint by 71%.
References
Sources
Home — Respimat International
Drug delivery devices
Medical treatments | Respimat | [
"Chemistry"
] | 127 | [
"Pharmacology",
"Pharmacology stubs",
"Drug delivery devices",
"Medicinal chemistry stubs"
] |
12,843,792 | https://en.wikipedia.org/wiki/Genetic%20pollution | Genetic pollution is a term for uncontrolled gene flow into wild populations. It is defined as "the dispersal of contaminated altered genes from genetically engineered organisms to natural organisms, esp. by cross-pollination", but has come to be used in some broader ways. It is related to the population genetics concept of gene flow, and genetic rescue, which is genetic material intentionally introduced to increase the fitness of a population. It is called genetic pollution when it negatively impacts the fitness of a population, such as through outbreeding depression and the introduction of unwanted phenotypes which can lead to extinction.
Conservation biologists and conservationists have used the term to describe gene flow from domestic, feral, and non-native species into wild indigenous species, which they consider undesirable. They promote awareness of the effects of introduced invasive species that may "hybridize with native species, causing genetic pollution". In the fields of agriculture, agroforestry and animal husbandry, genetic pollution is used to describe gene flows between genetically engineered species and wild relatives. The use of the word "pollution" is meant to convey the idea that mixing genetic information is bad for the environment, but because the mixing of genetic information can lead to a variety of outcomes, "pollution" may not always be the most accurate descriptor.
Gene flow to wild population
Some conservation biologists and conservationists have used genetic pollution for a number of years as a term to describe gene flow from a non-native, invasive subspecies, domestic, or genetically-engineered population to a wild indigenous population.
Importance
The introduction of genetic material into the gene pool of a population by human intervention can have both positive and negative effects on populations. When genetic material is intentionally introduced to increase the fitness of a population, this is called genetic rescue. When genetic material is unintentionally introduced to a population, this is called genetic pollution and can negatively affect the fitness of a population (primarily through outbreeding depression), introduce other unwanted phenotypes, or theoretically lead to extinction.
Introduced species
An introduced species is one that is not native to a given population that is either intentionally or accidentally brought into a given ecosystem. Effects of introduction are highly variable, but if an introduced species has a major negative impact on its new environment, it can be considered an invasive species. One such example is the introduction of the Asian Longhorned beetle in North America, which was first detected in 1996 in Brooklyn, New York. It is believed that these beetles were introduced through cargo at trade ports. The beetles are highly damaging to the environment, and are estimated to put 35% of urban trees at risk, excluding natural forests. These beetles cause severe damage to the wood of trees by larval tunneling. Their presence in the ecosystem destabilizes community structure, having a negative influence on many species in the system.
Introduced species are not always disruptive to an environment, however. Tomás Carlo and Jason Gleditch of Penn State University found that the number of "invasive" honeysuckle plants in the area correlated with the number and diversity of the birds in the Happy Valley Region of Pennsylvania, suggesting introduced honeysuckle plants and birds formed a mutually beneficial relationship. Presence of introduced honeysuckle was associated with higher diversity of the bird populations in that area, demonstrating that introduced species are not always detrimental to a given environment and it is completely context dependent.
Invasive species
Conservation biologists and conservationists have, for a number of years, used the term to describe gene flow from domestic, feral, and non-native species into wild indigenous species, which they consider undesirable. For example, TRAFFIC is the international wildlife trade monitoring network that works to limit trade in wild plants and animals so that it is not a threat to conservationist goals. They promote awareness of the effects of introduced invasive species that may "hybridize with native species, causing genetic pollution". Furthermore, the Joint Nature Conservation Committee, the statutory adviser to the UK government, has stated that invasive species "will alter the genetic pool (a process called genetic pollution), which is an irreversible change."
Invasive species can invade both large and small native populations and have a profound effect. Upon invasion, invasive species interbreed with native species to form sterile or more evolutionarily fit hybrids that can outcompete the native populations. Invasive species can cause extinctions of small populations on islands that are particularly vulnerable due to their smaller amounts of genetic diversity. In these populations, local adaptations can be disrupted by the introduction of new genes that may not be as suitable for the small island environments. For example, the Cercocarpus traskiae of the Catalina Island off the coast of California has faced near extinction with only a single population remaining due to the hybridization of its offspring with Cercocarpus betuloides.
Domestic populations
Increased contact between wild and domesticated populations of organisms can lead to reproductive interactions that are detrimental to the wild population's ability to survive. A wild population is one that lives in natural areas and is not regularly looked after by humans. This contrasts with domesticated populations that live in human controlled areas and are regularly, and historically, in contact with humans. Genes from domesticated populations are added to wild populations as a result of reproduction. In many crop populations this can be the result of pollen traveling from farmed crops to neighboring wild plants of the same species. For farmed animals, this reproduction may happen as the result of escaped or released animals.
A popular example of this phenomenon is the gene flow between wolves and domesticated dogs. The New York Times cites, from the words of biologist Luigi Boitani, "Although wolves and dogs have always lived in close contact in Italy and have presumably mated in the past, the newly worrisome element, in Dr. Boitani's opinion, is the increasing disparity in numbers, which suggests that interbreeding will become fairly common. As a result, 'genetic pollution of the wolf gene pool might reach irreversible levels', he warned. 'By hybridization, dogs can easily absorb the wolf genes and destroy the wolf, as it is,' he said. The wolf might survive as a more doglike animal, better adapted to living close to people, he said, but it would not be 'what we today call a wolf.'"
Aquaculture
Aquaculture is the practice of farming aquatic animals or plants for the purpose of consumption. This practice is becoming increasingly common for the production of salmon. This is specifically termed aquaculture of salmonoids. One of the dangers of this practice is the possibility of domesticated salmon breaking free from their containment. The occurrence of escaping incidents is becoming increasingly common as aquaculture gains popularity. Farming structures may be ineffective at holding the vast number of fast growing animals they house. Natural disasters, high tides, and other environmental occurrences can also trigger aquatic animal escapes. The reason these escapes are considered dangers is the impact they pose for the wild population they reproduce with after escaping. In many instances the wild population experiences a decreased likelihood of survival after reproducing with domesticated populations of salmon.
The Washington Department of Fish and Wildlife cites that "commonly expressed concerns surrounding escaped Atlantic salmon include competition with native salmon, predation, disease transfer, hybridization, and colonization." A report done by that organization in 1999 did not find that escaped salmon posed a significant risk to wild populations.
Crops
Crops refer to groups of plants grown for consumption. Despite domestication over many years, these plants are not so far removed from their wild relatives that they couldn't reproduce if brought together. Many crops are still grown in the areas they originated and gene flow between crops and wild relatives impacts the evolution of wild populations. Farmers can avoid reproduction between the different populations by timing their planting of crops so that crops are not flowering when wild relatives would be. Domesticated crops have been changed through artificial selection and genetic engineering. The genetic make-ups of many crops is different from those of their wild relatives, but the closer they grow to one another the more likely they are to share genes through pollen. Gene flow persists between crops and wild counterparts.
Genetically engineered organisms
Genetically engineered organisms are genetically modified in a laboratory, and therefore distinct from those that were bred through artificial selection. In the fields of agriculture, agroforestry and animal husbandry, genetic pollution is being used to describe gene flows between GE species and wild relatives.
An early use of the term "genetic pollution" in this later sense appears in a wide-ranging review of the potential ecological effects of genetic engineering in The Ecologist magazine in July 1989. It was also popularized by environmentalist Jeremy Rifkin in his 1998 book The Biotech Century. While intentional crossbreeding between two genetically distinct varieties is described as hybridization with the subsequent introgression of genes, Rifkin, who had played a leading role in the ethical debate for over a decade before, used genetic pollution to describe what he considered to be problems that might occur due to the unintentional process of (modernly) genetically modified organisms (GMOs) dispersing their genes into the natural environment by breeding with wild plants or animals.
Concerns about negative consequences from gene flow between genetically engineered organisms and wild populations are valid. Most corn and soybean crops grown in the midwestern USA are genetically modified. There are corn and soybean varieties that are resistant to herbicides like glyphosate and corn that produces neonicotinoid pesticide within all of its tissues. These genetic modifications are meant to increase yields of crops but there is little evidence that yields actually increase. While scientists are concerned genetically engineered organisms can have negative effects on surrounding plant and animal communities, the risk of gene flow between genetically engineered organisms and wild populations is yet another concern. Many farmed crops may be weed resistant and reproduce with wild relatives. More research is necessary to understand how much gene flow between genetically engineered crops and wild populations occurs, and the impacts of genetic mixing.
Mutated organisms
Mutations can be induced in organisms by exposing them to chemicals or radiation. This has been done in plants in order to create mutants that have a desired trait. These mutants can then be bred with other mutants or with individuals that are not mutated in order to maintain the mutant trait. However, similar to the risks associated with introducing individuals to a certain environment, the variation created by mutated individuals could have a negative impact on native populations as well.
Preventive measures
Since 2005 there has existed a GM Contamination Register, launched for GeneWatch UK and Greenpeace International that records all incidents of intentional or accidental release of organisms genetically modified using modern techniques.
Genetic use restriction technologies (GURTs) were developed for the purpose of property protection, but could be beneficial in preventing the dispersal of transgenes. GeneSafe technologies introduced a method that became known as "Terminator." This method is based on seeds that produce sterile plants. This would prevent movement of transgenes into wild populations as hybridization would not be possible. However, this technology has never been deployed as it disproportionately negatively affects farmers in developing countries, who save seeds to use each year (whereas in developed countries, farmers generally buy seeds from seed production companies).
Physical containment has also been utilized to prevent the escape of transgenes. Physical containment includes barriers such as filters in labs, screens in greenhouses, and isolation distances in the field. Isolation distances have not always been successful, such as transgene escape from an isolated field into the wild in herbicide-resistant bentgrass Agrostis stolonifera.
Another suggested method that applies specifically to protection traits (e.g. pathogen resistance) is mitigation. Mitigation involves linking the positive trait (beneficial to fitness) to a trait that is negative (harmful to fitness) to wild but not domesticated individuals. In this case, if the protection trait was introduced to a weed, the negative trait would also be introduced in order to decrease overall fitness of the weed and decrease possibility of the individual’s reproduction and thus propagation of the transgene.
Risks
Not all genetically engineered organisms cause genetic pollution. Genetic engineering has a variety of uses and is specifically defined as a direct manipulation of the genome of an organism. Genetic pollution can occur in response to the introduction of a species that is not native to a particular environment, and genetically engineered organisms are examples of individuals that could cause genetic pollution following introduction. Due to these risks, studies have been done in order to assess the risks of genetic pollution associated with organisms that have been genetically engineered:
In a 10-year study of four different crops, none of the genetically engineered plants were found to be more invasive or more persistent than their conventional counterparts. An often-cited claimed example of genetic pollution is the reputed discovery of transgenes from GE maize in landraces of maize in Oaxaca, Mexico. The report from Quist and Chapela has since been discredited on methodological grounds. The scientific journal that originally published the study concluded that "the evidence available is not sufficient to justify the publication of the original paper." More recent attempts to replicate the original studies have concluded that genetically modified corn was absent from southern Mexico in 2003 and 2004.
A 2009 study verified the original findings of the controversial 2001 study, by finding transgenes in about 1% of 2000 samples of wild maize in Oaxaca, Mexico, despite Nature retracting the 2001 study and a second study failing to back up the findings of the initial study. The study found that the transgenes are common in some fields, but non-existent in others, hence explaining why a previous study failed to find them. Furthermore, not every laboratory method managed to find the transgenes.
A 2004 study performed near an Oregon field trial for a genetically modified variety of creeping bentgrass (Agrostis stolonifera) revealed that the transgene and its associated trait (resistance to the glyphosate herbicide) could be transmitted by wind pollination to resident plants of different Agrostis species located some distance from the test field. In 2007, the Scotts Company, producer of the genetically modified bentgrass, agreed to pay a civil penalty of $500,000 to the United States Department of Agriculture (USDA). The USDA alleged that Scotts "failed to conduct a 2003 Oregon field trial in a manner which ensured that neither glyphosate-tolerant creeping bentgrass nor its offspring would persist in the environment".
Risks arise not only from genetic engineering but also from species hybridization. In Czechoslovakia, ibex were introduced from Turkey and Sinai to help promote the local ibex population; the resulting hybrids produced offspring too early, and the overall population eventually disappeared completely. The genes of the ibex populations in Turkey and Sinai were locally adapted to their own environments, so the animals did not flourish when placed in a new environmental context. Additionally, the environmental toll that may arise from the introduction of a new species may be so disruptive that the ecosystem is no longer able to sustain certain populations.
Controversy
Environmentalist perspectives
The use of the word "pollution" in the term genetic pollution has a deliberate negative connotation and is meant to convey the idea that mixing genetic information is bad for the environment. However, because the mixing of genetic information can lead to a variety of outcomes, "pollution" may not be the most accurate descriptor. Gene flow is undesirable according to some environmentalists and conservationists, including groups such as Greenpeace, TRAFFIC, and GeneWatch UK.
"Invasive species have been a major cause of extinction throughout the world in the past few hundred years. Some of them prey on native wildlife, compete with it for resources, or spread disease, while others may hybridize with native species, causing "genetic pollution". In these ways, invasive species are as big a threat to the balance of nature as the direct overexploitation by humans of some species."
It can also be considered undesirable if it leads to a loss of fitness in the wild populations. The term can be associated with the gene flow from a mutation bred, synthetic organism or genetically engineered organism to a non GE organism, by those who consider such gene flow detrimental. These environmentalist groups stand in complete opposition to the development and production of genetically engineered organisms.
Governmental definition
From a governmental perspective, genetic pollution is defined as follows by the Food and Agriculture Organization of the United Nations: "Uncontrolled spread of genetic information (frequently referring to transgenes) into the genomes of organisms in which such genes are not present in nature."
Scientific perspectives
Use of the term 'genetic pollution' and similar phrases such as genetic deterioration, genetic swamping, genetic takeover, and genetic aggression is being debated by scientists, as many do not find them scientifically appropriate. Rhymer and Simberloff argue that these types of terms:
...imply either that hybrids are less fit than the parentals, which need not be the case, or that there is an inherent value in "pure" gene pools. They recommend that gene flow from invasive species be termed genetic mixing since:
"Mixing" need not be value-laden, and we use it here to denote mixing of gene pools whether or not associated with a decline in fitness.
See also
Back-breeding
Biodiversity
Bioethics
Conservation biology
Dysgenics
Gene pool
Genetic erosion
Genetic monitoring
Introgression
Miscegenation
Seeds of Destruction: Hidden Agenda of Genetic Manipulation
Starlink corn recall
References
Ecology
Conservation biology
Biological contamination
Genetically modified organisms
Genetic engineering
Invasive species
Population genetics
Hybridisation (biology)
Habitat
Breeding
Evolutionary biology
Environmental terminology | Genetic pollution | [
"Chemistry",
"Engineering",
"Biology"
] | 3,601 | [
"Evolutionary biology",
"Biological engineering",
"Behavior",
"Genetically modified organisms",
"Reproduction",
"Invasive species",
"Ecology",
"Genetic engineering",
"Breeding",
"Molecular biology",
"Pests (organism)",
"Conservation biology"
] |
12,845,111 | https://en.wikipedia.org/wiki/Genetic%20erosion | Genetic erosion (also known as genetic depletion) is a process where the limited gene pool of an endangered species diminishes even more when reproductive individuals die off before reproducing with others in their endangered low population. The term is sometimes used in a narrow sense, such as when describing the loss of particular alleles or genes, as well as being used more broadly, as when referring to the loss of a phenotype or whole species.
Genetic erosion occurs because each individual organism has many unique genes which get lost when it dies without getting a chance to breed. Low genetic diversity in a population of wild animals and plants leads to a further diminishing gene pool – inbreeding and a weakening immune system can then "fast-track" that species towards eventual extinction.
By definition, endangered species suffer varying degrees of genetic erosion. Many species benefit from a human-assisted breeding program to keep their population viable, thereby avoiding extinction over long time-frames. Small populations are more susceptible to genetic erosion than larger populations.
Genetic erosion is compounded and accelerated by habitat loss and habitat fragmentation, which threaten many endangered species. Fragmented habitats create barriers to gene flow between populations.
The gene pool of a species or a population is the complete set of unique alleles that would be found by inspecting the genetic material of every living member of that species or population. A large gene pool indicates extensive genetic diversity, which is associated with robust populations that can survive bouts of intense selection. Meanwhile, low genetic diversity (see inbreeding and population bottlenecks) can cause reduced biological fitness and increase the chance of extinction of that species or population.
Processes and consequences
Population bottlenecks create shrinking gene pools, which leave fewer and fewer fertile mating partners. The genetic implications can be illustrated by considering the analogy of a high-stakes poker game with a crooked dealer. Consider that the game begins with a 52-card deck (representing high genetic diversity). Reduction of the number of breeding pairs with unique genes resembles the situation where the dealer deals only the same five cards over and over, producing only a few limited "hands".
As specimens begin to inbreed, both physical and reproductive congenital effects and defects appear more often. Abnormal sperm increases, infertility rises, and birthrates decline. "Most perilous are the effects on the immune defense systems, which become weakened and less and less able to fight off an increasing number of bacterial, viral, fungal, parasitic, and other disease-producing threats. Thus, even if an endangered species in a bottleneck can withstand whatever human development may be eating away at its habitat, it still faces the threat of an epidemic that could be fatal to the entire population."
Loss of agricultural and livestock biodiversity
Genetic erosion in agricultural and livestock is the loss of biological genetic diversity – including the loss of individual genes, and the loss of particular recombinants of genes (or gene complexes) – such as those manifested in locally adapted landraces of domesticated animals or plants that have become adapted to the natural environment in which they originated.
The major driving forces behind genetic erosion in crops are variety replacement, land clearing, overexploitation of species, population pressure, environmental degradation, overgrazing, governmental policy, and changing agricultural systems. The main factor, however, is the replacement of local varieties of domestic plants and animals by other varieties or species that are non-local. A large number of varieties can also often be dramatically reduced when commercial varieties are introduced into traditional farming systems. Many researchers believe that the main problem related to agro-ecosystem management is the general tendency towards genetic and ecological uniformity imposed by the development of modern agriculture.
In the case of Animal Genetic Resources for Food and Agriculture, major causes of genetic erosion are reported to include indiscriminate cross-breeding, increased use of exotic breeds, weak policies and institutions in animal genetic resources management, neglect of certain breeds because of a lack of profitability or competitiveness, the intensification of production systems, the effects of diseases and disease management, loss of pastures or other elements of the production environment, and poor control of inbreeding.
Prevention by human intervention, modern science and safeguards
In situ conservation
With advances in modern bioscience, several techniques and safeguards have emerged to check the relentless advance of genetic erosion and the resulting acceleration of endangered species towards eventual extinction. However, many of these techniques and safeguards are too expensive yet to be practical, and so the best way to protect species is to protect their habitat and to let them live in it as naturally as possible. Complicating matters, the conservation of substantial amounts of genetic diversity often requires the maintenance of multiple independent populations across a species distribution. For example, to conserve at least 90% of the genetic diversity of the northern quoll requires the conservation of at least eight populations across the continent of Australia.
Wildlife sanctuaries and national parks have been created to preserve entire ecosystems with all the web of species native to the area. Wildlife corridors are created to join fragmented habitats (see Habitat fragmentation) to enable endangered species to travel, meet, and breed with others of their kind. Scientific conservation and modern wildlife management techniques, with the expertise of scientifically trained staff, help manage these protected ecosystems and the wildlife found in them. Wild animals are also translocated and reintroduced to other locations physically when fragmented wildlife habitats are too far and isolated to be able to link together via a wildlife corridor, or when local extinctions have already occurred.
Ex situ conservation
Modern policies of zoo associations and zoos around the world have begun putting dramatically increased emphasis on keeping and breeding wild-sourced species and subspecies of animals in their registered endangered species breeding programs. These specimens are intended to have a chance to be reintroduced and survive back in the wild. The main objectives of zoos today have changed, and greater resources are being invested in breeding species and subspecies for the ultimate purpose of assisting conservation efforts in the wild. Zoos do this by maintaining extremely detailed scientific breeding records (i.e. studbooks) and by loaning their wild animals to other zoos around the country (and often globally) for breeding, to safeguard against inbreeding by attempting to maximize genetic diversity however possible.
Costly (and sometimes controversial) ex-situ conservation techniques aim to increase the genetic biodiversity on our planet, as well as the diversity in local gene pools, by guarding against genetic erosion. Modern concepts like seedbanks, sperm banks, and tissue banks have become much more commonplace and valuable. Sperm, eggs, and embryos can now be frozen and kept in banks, which are sometimes called "Modern Noah's Arks" or "Frozen Zoos". Cryopreservation techniques are used to freeze these living materials and keep them alive in perpetuity by storing them submerged in liquid nitrogen tanks at very low temperatures. Thus, preserved materials can then be used for artificial insemination, in vitro fertilization, embryo transfer, and cloning methodologies to protect diversity in the gene pool of critically endangered species.
It can be possible to save an endangered species from extinction by preserving only parts of specimens, such as tissues, sperm, eggs, etc. – even after the death of a critically endangered animal, or collected from one found freshly dead, in captivity or from the wild. A new specimen can then be "resurrected" with the help of cloning, so as to give it another chance to breed its genes into the living population of the respective threatened species. Resurrection of dead critically endangered wildlife specimens with the help of cloning is still being perfected, and is still too expensive to be practical, but with time and further advancements in science and methodology it may well become a routine procedure not too far into the future.
See also
Center of origin
Conservation biology
Crop wild relative
Gene bank
Genetic pollution
Genetics
Mutational meltdown
Neglected and underutilized crops
Population genetics
References
Ecology
Conservation biology
Endangered species
Genetic engineering
Population genetics
Hybridisation (biology)
Breeding
Evolutionary biology | Genetic erosion | [
"Chemistry",
"Engineering",
"Biology"
] | 1,626 | [
"Evolutionary biology",
"Biological engineering",
"Behavior",
"Reproduction",
"Biota by conservation status",
"Genetic engineering",
"Ecology",
"Breeding",
"Molecular biology",
"Conservation biology",
"Endangered species"
] |
12,845,371 | https://en.wikipedia.org/wiki/Borel%20fixed-point%20theorem | In mathematics, the Borel fixed-point theorem is a fixed-point theorem in algebraic geometry generalizing the Lie–Kolchin theorem. The result was proved by .
Statement
If G is a connected, solvable, linear algebraic group acting regularly on a non-empty, complete algebraic variety V over an algebraically closed field k, then there is a G-fixed point in V.
A more general version of the theorem holds over a field k that is not necessarily algebraically closed. A solvable algebraic group G is split over k or k-split if G admits a composition series whose composition factors are isomorphic (over k) to the additive group Ga or the multiplicative group Gm. If G is a connected, k-split solvable algebraic group acting regularly on a complete variety V having a k-rational point, then there is a G-fixed point in V.
References
External links
Fixed-point theorems
Group actions (mathematics)
Theorems in algebraic geometry | Borel fixed-point theorem | [
"Physics",
"Mathematics"
] | 198 | [
"Theorems in algebraic geometry",
"Theorems in mathematical analysis",
"Group actions",
"Fixed-point theorems",
"Theorems in topology",
"Theorems in geometry",
"Symmetry"
] |
12,846,099 | https://en.wikipedia.org/wiki/Helper/suppressor%20ratio | The T-Lymphocyte Helper/Suppressor Profile (Helper/Suppressor ratio, T4:T8 ratio, CD4:CD8 ratio) is a basic laboratory test in which the percentage of CD3-positive lymphocytes in the blood positive for CD4 (T helper cells) and CD8 (a class of regulatory T cells) are counted and compared. Normal values (95% confidence intervals) are approximately 30-60% CD4 and 10-30% CD8 depending on age (ratio 0.9 to 3.7 in adults). One reason for abnormal results is the loss of CD4-positive cells to the human immunodeficiency virus (HIV) infection. The loss of CD4-positive cells to HIV infection can result in various distortions in the ratio, as in the initial period, production of HIV specific CD8 positive cells will cause a large fall in the ratio, but subsequent immunosuppression over time may lead to overall non production of immune cells and inversion of the ratio. It has been shown that the degree of inversion of this ratio in individuals on antiretroviral therapy is indicative of the age of the infection and independently predictive of mortality associated with non HIV events.
References
Blood tests | Helper/suppressor ratio | [
"Chemistry"
] | 265 | [
"Blood tests",
"Chemical pathology"
] |
8,202,009 | https://en.wikipedia.org/wiki/Gene%20stacked%20event | A genetically modified organism (GMO) and all subsequent identical clones resulting from a transformation process are called collectively a transformation event. If more than one gene from another organism has been transferred, the created GMO has stacked genes (or stacked traits), and is called a gene stacked event.
Gene stacked events have become an important topic in plant breeding. Occasionally, researchers wish to transfer more than one trait (e.g. an insect resistance and a herbicide resistance) to a crop. Consequently, they need to transfer more than one gene, and do so either in one or in subsequent steps. This can be achieved either by genetic engineering or by conventional cross-breeding of GM plants with two different modifications.
References
Genetically modified organisms | Gene stacked event | [
"Engineering",
"Biology"
] | 148 | [
"Genetic engineering",
"Genetically modified organisms"
] |
8,202,405 | https://en.wikipedia.org/wiki/Floating%20hinge | A floating hinge is a hinge that, while able to behave as a normal hinge, enables one of the objects to move away from the other - hence "float". In effect, the hinge allows for two parallel axes of rotation – one for each object joined by the hinge – and each axis can be moved relative to the position of the other.
Uses
Floating hinges are used in flatbed scanners designed to scan thick objects such as books. If a regular sheet of paper is placed on the glass and the cover is lowered over it, the glass, the paper, and the cover are very close together. If a thicker object is placed on the glass, an ordinary hinge would leave the cover at an angle to the glass; a floating hinge raises the hinged edge of the cover to the level of the book so that the cover remains parallel to the glass, but raised above it.
Floating hinges are also used in two-plate electric cooking grills, as they allow for even heating of both sides of a thick piece of food without crushing it.
See also
References
External links
Hinges
Hardware (mechanical) | Floating hinge | [
"Physics",
"Technology",
"Engineering"
] | 230 | [
"Machines",
"Physical systems",
"Construction",
"Mechanical engineering",
"Mechanical engineering stubs",
"Hardware (mechanical)"
] |
8,202,435 | https://en.wikipedia.org/wiki/Mason%20equation | The Mason equation is an approximate analytical expression for the growth (due to condensation) or evaporation of a water droplet—it is due to the meteorologist B. J. Mason. The expression is found by recognising that mass diffusion towards the water drop in a supersaturated environment transports energy as latent heat, and this has to be balanced by the diffusion of sensible heat back across the boundary layer, (and the energy of heatup of the drop, but for a cloud-sized drop this last term is usually small).
Equation
In Mason's formulation the changes in temperature across the boundary layer can be related to the changes in saturated vapour pressure by the Clausius–Clapeyron relation; the two energy transport terms must be nearly equal but opposite in sign and so this sets the interface temperature of the drop. The resulting expression for the growth rate is significantly lower than that expected if the drop were not warmed by the latent heat.
Thus if the drop has a size r, the inward mass flow rate is given by
and the sensible heat flux by
and the final expression for the growth rate is
where
S is the supersaturation far from the drop
L is the latent heat
K is the vapour thermal conductivity
D is the binary diffusion coefficient
R is the gas constant
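A commonly quoted form of these relations is sketched below, written with additional assumed notation not defined in the text above ($\rho_w$ the density of liquid water, $e_s(T)$ the saturation vapour pressure at ambient temperature $T$, $R_v$ the specific gas constant for water vapour); it is the standard textbook statement and may differ in notation from Mason's own formulation.

```latex
% Diffusional mass supply and conductive heat removal for a drop of radius r:
\[
  \frac{dM}{dt} = 4\pi r D \,(\rho_\infty - \rho_r),
  \qquad
  \frac{dQ}{dt} = 4\pi r K \,(T_r - T_\infty).
\]
% Balancing the two fluxes via the Clausius--Clapeyron relation gives the growth law:
\[
  r\,\frac{dr}{dt} \;\approx\;
  \frac{S - 1}{\dfrac{L^{2}\rho_w}{K R_v T^{2}} \;+\; \dfrac{\rho_w R_v T}{D\, e_s(T)}},
\]
% where the two denominator terms are the thermal (latent-heat) and diffusive resistances.
```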
References
Thermodynamic equations
Atmospheric thermodynamics
Equations | Mason equation | [
"Physics",
"Chemistry",
"Mathematics"
] | 283 | [
"Thermodynamics stubs",
"Thermodynamic equations",
"Equations of physics",
"Mathematical objects",
"Equations",
"Thermodynamics",
"Physical chemistry stubs"
] |
8,203,113 | https://en.wikipedia.org/wiki/Aerostructure | An aerostructure is a component of an aircraft's airframe. This may include all or part of the fuselage, wings, or flight control surfaces. Companies that specialize in constructing these components are referred to as "aerostructures manufacturers", though many larger aerospace firms with a more diversified product portfolio also build aerostructures.
Mechanical testing of the individual components or of the complete structure is carried out on a universal testing machine. Tests carried out include tensile, compression, flexure, fatigue, impact, and compression-after-impact tests. Before testing a component, aerospace engineers build finite element models to simulate its expected behaviour.
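As an illustrative sketch of how raw machine readings are reduced to material properties, the coupon dimensions and load data below are hypothetical and chosen only for the example.

```python
# Hypothetical tensile-test reduction: load/extension -> stress/strain -> modulus estimate.
specimen_area_mm2 = 40.0       # cross-sectional area of the coupon (assumed)
gauge_length_mm = 50.0         # extensometer gauge length (assumed)

loads_kN = [0.0, 2.0, 4.0, 6.0, 8.0]                 # machine load readings
extensions_mm = [0.0, 0.025, 0.050, 0.075, 0.100]    # corresponding extensions

stresses_MPa = [1000.0 * f / specimen_area_mm2 for f in loads_kN]  # kN / mm^2 -> MPa
strains = [e / gauge_length_mm for e in extensions_mm]

# The slope of the linear (elastic) portion estimates the Young's modulus.
E_GPa = (stresses_MPa[-1] - stresses_MPa[0]) / (strains[-1] - strains[0]) / 1000.0
print(f"Estimated Young's modulus: {E_GPa:.1f} GPa")
```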
Civilian
Airplanes designed for civilian use are often cheaper than military aircraft. Smaller passenger airplanes are used for short-distance, transcontinental transport; this is more cost-efficient for airlines, and demand for air travel over these distances is lower because people can, however inconveniently, drive them. Bigger airplanes are manufactured for intercontinental transport so that more passengers can be carried at one time, money can be saved on fuel, and airlines do not have to pay as many pilots. Cargo planes are usually built to be bigger than the average jet: they have large dimensions and a great deal of internal space, so they can carry a large weight and volume of cargo in one trip. They typically have large wingspans, a very large cargo hold, and a very tall vertical fin. They are not built to accommodate passengers other than the crew, so the cargo hold can be used much more efficiently; with no need for seats, food, and bathrooms for everybody, the design optimizes the usable space in the aircraft.
Military
The Boeing YC-14 was a prototype designed specifically for the US Air Force. A number of different designs were considered and different technologies were used, particularly for carrying tanks and paratroopers. An onboard computer and a very powerful vertical wing could keep the plane flying at a set altitude, so that whatever was needed could be dropped onto the battlefield without complications; this allowed for precise troop placement, which can be the difference between victory and defeat in a battle. Cheaper, heavier materials using a honeycomb pattern were used for the prototype. These materials were too heavy, and the Air Force was not happy that Boeing did not meet its expectations for the prototype, even though the Air Force was aware that different materials would be used in production of the actual aircraft. The Apache helicopter that Boeing makes is designed so that the front of the helicopter is very narrow; this not only creates less drag but also presents a smaller target for infantry units. Boeing also designed the F-15 fighter jet, which has two engines instead of one for maximum speed. This particular aircraft can reach speeds of Mach 2.5 and is the 8th fastest aircraft ever built. The Boeing C-17 Globemaster III uses its size and very large design to carry cargo; it has four powerful engines and a special T-tail designed by Boeing for precise control of the unusually large aircraft.
Research
There is a new aircraft material that is 20% lighter than conventional aircraft materials. However, FSW aluminum alloy, although much heavier than this new material, can be more advantageous than the new CFRP "black" constructions: the aluminum is better understood and can be crafted to almost exact precision, whereas CFRP is very hard to shape. The weight of the aircraft is important, but so is the precision of its dimensions. New methods and testing require a wide variety of material properties, even though weight is very important when choosing a material.
Additionally, there is a new research method, called thermography, that uses infrared light to examine computer-simulated damage to the material and structure of an aircraft and see how it holds up. It can be used to evaluate materials and the integrity of an aircraft's design. It is very accurate, and it will accelerate the development of materials because the test is much faster than traditional testing methods. It can also be used to predict the behavior of materials under stressful conditions that might cause them to fail while in use.
Examples
Aero Vodochody
Alcoa's Howmet division
Collins Aerospace, currently a subsidiary of Raytheon Technologies
GKN
Goodrich Aerostructures Group, currently a part of Collins Aerospace
Mitsubishi Heavy Industries Aerospace
Messier-Bugatti-Dowty
Indonesian Aerospace
Premium AEROTEC
Exelis Inc.
Groupe Latécoère
Spirit AeroSystems
Stelia Aerospace
Vought
References
Aerospace engineering | Aerostructure | [
"Engineering"
] | 959 | [
"Aerospace engineering"
] |
8,203,600 | https://en.wikipedia.org/wiki/Cotlar%E2%80%93Stein%20lemma | The Cotlar–Stein almost orthogonality lemma is a mathematical lemma in the field of functional analysis. It may be used to obtain information on the operator norm on an operator, acting from one Hilbert space into another, when the operator can be decomposed into almost orthogonal pieces.
The original version of this lemma (for self-adjoint and mutually commuting operators) was proved by Mischa Cotlar in 1955 and allowed him to conclude that the Hilbert transform is a continuous linear operator in $L^2$ without using the Fourier transform. A more general version was proved by Elias Stein.
Statement of the lemma
Let $E$ and $F$ be two Hilbert spaces. Consider a family of operators $T_j$, $j \geq 1$, with each $T_j$ a bounded linear operator from $E$ to $F$.
Denote
$$A = \sup_j \sum_k \|T_j^{*} T_k\|^{1/2}, \qquad B = \sup_j \sum_k \|T_j T_k^{*}\|^{1/2}.$$
The family of operators $T_j$, $j \geq 1$, is almost orthogonal if $A < \infty$ and $B < \infty$.
The Cotlar–Stein lemma states that if $T_j$, $j \geq 1$, are almost orthogonal, then the series $\sum_j T_j$ converges in the strong operator topology, and
$$\Big\|\sum_j T_j\Big\| \leq \sqrt{AB}.$$
Proof
If is a finite collection of bounded operators, then
So under the hypotheses of the lemma,
It follows that
and that
Hence, the partial sums
form a Cauchy sequence.
The sum is therefore absolutely convergent with the limit satisfying the stated inequality.
To prove the inequality above set
with |aij| ≤ 1 chosen so that
Then
Hence
Taking 2mth roots and letting m tend to ∞,
which immediately implies the inequality.
Generalization
The Cotlar-Stein lemma has been generalized, with sums being replaced by integrals. Let X be a locally compact space and μ a Borel measure on X. Let T(x) be a map from X into bounded operators from E to F which is uniformly bounded and continuous in the strong operator topology. If
are finite, then the function T(x)v is integrable for each v in E with
The result can be proven by replacing sums with integrals in the previous proof, or by utilizing Riemann sums to approximate the integrals.
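For concreteness, a standard way of writing the continuous almost-orthogonality conditions and the resulting bound is sketched below; the notation is assumed here rather than taken from the text.

```latex
% Continuous Cotlar--Stein bound (standard form; notation assumed).
\[
  A = \sup_{x \in X} \int_X \| T(x)^{*} T(y) \|^{1/2} \, d\mu(y),
  \qquad
  B = \sup_{x \in X} \int_X \| T(x) T(y)^{*} \|^{1/2} \, d\mu(y).
\]
If $A$ and $B$ are finite, then for every $v \in E$
\[
  \Big\| \int_X T(x) v \, d\mu(x) \Big\|_F \;\le\; \sqrt{AB}\;\|v\|_E .
\]
```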
Example
Here is an example of an orthogonal family of operators. Consider the infinite-dimensional matrices.
and also
Then for each , hence the series does not converge in the uniform operator topology.
Yet, since
and
for ,
the Cotlar–Stein almost orthogonality lemma tells us that
converges in the strong operator topology and is bounded by 1.
Notes
References
Hilbert spaces
Harmonic analysis
Operator theory
Inequalities
Theorems in functional analysis
Lemmas in analysis | Cotlar–Stein lemma | [
"Physics",
"Mathematics"
] | 482 | [
"Theorems in mathematical analysis",
"Quantum mechanics",
"Binary relations",
"Theorems in functional analysis",
"Mathematical relations",
"Inequalities (mathematics)",
"Lemmas in mathematical analysis",
"Mathematical problems",
"Hilbert spaces",
"Mathematical theorems",
"Lemmas"
] |
8,207,719 | https://en.wikipedia.org/wiki/Svetlana%20Alexievich | Svetlana Alexandrovna Alexievich (born 31 May 1948) is a Belarusian investigative journalist, essayist and oral historian who writes in Russian. She was awarded the 2015 Nobel Prize in Literature "for her polyphonic writings, a monument to suffering and courage in our time". She is the first writer from Belarus to receive the award.
Background
Born in the west Ukrainian town of Stanislav (Ivano-Frankivsk since 1962) to a Belarusian father and a Ukrainian mother, Svetlana Alexievich grew up in Belarus. After graduating from high school she worked as a reporter in several local newspapers. In 1972 she graduated from Belarusian State University and became a correspondent for the literary magazine Nyoman in Minsk (1976).
In a 2015 interview, she mentioned early influences: "I explored the world through people like Hanna Krall and Ryszard Kapuściński." During her career in journalism, Alexievich specialized in crafting narratives based on witness testimonies. In the process, she wrote artfully constructed oral histories of several dramatic events in Soviet history: the Second World War, Afghan War, dissolution of the Soviet Union, and the Chernobyl disaster.
In 1989 Alexievich's documentary book Zinky Boys, about the fallen soldiers who had returned in zinc coffins from the Soviet-Afghan War of 1979–1989, was the subject of controversy, and she was accused of "defamation" and "desecration of the soldiers' honor". Alexievich was tried a number of times between 1992 and 1996. After political persecution by the Lukashenko administration, she left Belarus in 2000. The International Cities of Refuge Network offered her sanctuary, and during the following decade she lived in Paris, Gothenburg and Berlin. In 2011, Alexievich moved back to Minsk.
Influences and legacy
Alexievich's books trace the emotional history of the Soviet and post-Soviet individual through carefully constructed collages of interviews. According to Russian writer and critic Dmitry Bykov, her books owe much to the ideas of Belarusian writer Ales Adamovich, who felt that the best way to describe the horrors of the 20th century was not by creating fiction but through recording the testimonies of witnesses. Belarusian poet Uladzimir Nyaklyayew called Adamovich "her literary godfather". He also named the documentary novel I'm From Fire Village () by Ales Adamovich, Janka Bryl and Uladzimir Kalesnik, about the villages burned by the German troops during the occupation of Belarus, as the main single book that has influenced Alexievich's attitude to literature. Alexievich has confirmed the influence of Adamovich and Belarusian writer Vasil Bykaŭ, among others. She regards Varlam Shalamov as the best writer of the 20th century.
Her most notable works in English translation include a collection of first-hand accounts from the war in Afghanistan (Zinky Boys: Soviet Voices from a Forgotten War) and an oral history of the Chernobyl disaster (Chernobyl Prayer / Voices from Chernobyl). Alexievich describes the theme of her works this way:
Works
Her first book, War's Unwomanly Face, came out in 1985. It was repeatedly reprinted and sold more than two million copies. The book was finished in 1983 and published (in short edition) in Oktyabr, a Soviet monthly literary magazine, in February 1984. In 1985, the book was published by several publishers, and the number of printed copies reached 2,000,000 in the next five years. This non-fiction oral history book is made up of monologues of women in the war speaking about the aspects of World War II that had never been related before. Another book, The Last Witnesses: the Book of Unchildlike Stories, describes personal memories of children during wartime. The war seen through women's and children's eyes revealed a new world of feelings.
In 1992, Alexievich published "Boys in Zinc". The course of the Soviet-Afghan War (1979–1989) is told through emotive personal testimony from unnamed participants of the war; from nurses to commissioned officers and pilots, mothers and widows. Each provides an excerpt of the Soviet-Afghan War which was disguised in the face of criticism first as political support, then intervention, and finally humanitarian aid to the Afghan people. Alexievich writes at the beginning of the book:
Alexievich was not embedded with the Red Army due to her reputation in the Soviet Union; instead, she travelled to Kabul on her own prerogative during the war and gathered many accounts from veterans returning from Afghanistan. In "Boys in Zinc", Alexievich calls herself 'a historian of the untraceable' and 'strive[s] desperately (from book to book) to do one thing - reduce history to the human being.' She brings brutally honest accounts of the war to lay at the feet of the Soviet people but claims no heroism for herself: 'I went [to watch them assemble pieces of boys blown up by an anti-tank mine] and there was nothing heroic about it because I fainted there. Perhaps it was from the heat, perhaps from the shock. I want to be honest.' The monologues which make up the book are honest (if edited for clarity) reproductions of the oral histories Alexievich collected, including those who perhaps did not understand her purpose: 'What's your book for? Who's it for? None of us who came back from there will like it anyway. How can you possibly tell people how it was? The dead camels and dead men lying in a single pool of blood, with their blood mingled together. Who wants that?' Alexievich was brought to trial in Minsk between 1992 and 1996, accused of distorting and falsifying the testimony of Afghan veterans and their mothers who were 'offended [...] that their boys were portrayed exclusively as soulless killer-robots, pillagers, drug addicts and rapists...' The trial, while apparently defending the honour of the army and veterans, is widely seen as an attempt to preserve old ideology in post-communist Belarus. The Belarus League for Human Rights claims that in the early 1990s, multiple cases were directed against democratically inclined intelligentsia with politically motivated verdicts.
In 1993, she published Enchanted by Death, a book about attempted and completed suicides due to the downfall of the Soviet Union. Many people felt inseparable from the Communist ideology and were unable to accept the new order surely and the newly interpreted history.
Her books were not published by Belarusian state-owned publishing houses after 1993, while private publishers in Belarus have only published two of her books: Chernobyl Prayer in 1999 and Second-hand Time in 2013, both translated into Belarusian. As a result, Alexievich has been better known in the rest of world than in Belarus.
She has been described as the first journalist to receive the Nobel Prize in Literature. She herself rejects the notion that she is a journalist, and, in fact, Alexievich's chosen genre is sometimes called "documentary literature": an artistic rendering of real events, with a degree of poetic license. In her own words:
On 26 October 2019, Alexievich was elected chairman of the Belarusian PEN Center.
Political activism
During the 2020 Belarusian protests Alexievich became a member of the Coordination Council of Sviatlana Tsikhanouskaya, the leader of the Belarusian democratic movement and main opposition candidate against Lukashenko.
On 20 August, Alexander Konyuk, the Prosecutor-General of Belarus, initiated criminal proceedings against the members of the Coordination Council under Article 361 of the Belarusian Criminal Code, on the grounds of attempting to seize state power and harming national security.
On 26 August, Alexievich was questioned by Belarusian authorities about her involvement in the council.
On 9 September 2020, Alexievich alerted the press that "men in black masks" were trying to enter her apartment in central Minsk. "I have no friends and companions left in the Coordinating Council. All are in prison or have been forcibly sent into exile," she wrote in a statement. "First they kidnapped the country; now it's the turn of the best among us. But hundreds more will replace those who have been torn from our ranks. It is not the Coordinating Council that has rebelled. It is the country." Diplomats from Lithuania, Poland, the Czech Republic, Romania, Slovakia, and Sweden began to keep a round-the-clock watch on Alexievich's home to prevent her abduction by security services.
On 28 September 2020, Alexievich left Belarus for Germany, promising to return depending on political conditions in Belarus. Prior to her departure, she was the last member of the Coordination Council who was not in exile or under arrest.
In August 2021, her book The Last Witnesses was excluded from the school curriculum in Belarus and her name was removed from the curriculum, presumably because of her political activity.
In her first public statement after the announcement of her Nobel Prize in 2015, Alexievich condemned Russia's annexation of Crimea in 2014. Following the 2022 Russian invasion of Ukraine, she commented that "providing a territory for an aggressor country is nothing but complicity in a crime" in relation to Belarusian involvement in the invasion.
Awards and honours
Alexievich has received many awards, including:
Saint Euphrosyne of Polotsk Medal (Медаль имени Святой Евфросиньи Полоцкой)
1984 — Order of the Badge of Honour (USSR)
1984 — Nikolay Ostrovskiy literary award of the Union of Soviet Writers
1984 — Oktyabr Magazine Prize
1985 — Konstantin Fedin Literary Award (Литературная премия имени Константина Федина) of the Union of Soviet Writers
1986 — Lenin Komsomol Prize — for the book War's Unwomanly Face (У войны не женское лицо)
1987 — Literaturnaya Gazeta Prize
1996 — Tucholsky-Preis (Swedish PEN)
1997 — Prize
1997 — (Russia)
1997 — Andrei Sinyavsky Prize of Novaya Gazeta
1998 — Leipzig Book Award for European Understanding
1998 — Friedrich-Ebert-Stiftung-Preis
1999 — Herder Prize
2005 — National Book Critics Circle Award, Voices from Chernobyl
2007 — Oxfam Novib/PEN Award
2011 — Ryszard Kapuściński Award (Poland)
2011 — Angelus Award (Poland)
2013 — Peace Prize of the German Book Trade
2013 — Prix Médicis essai, La Fin de l'homme rouge ou le temps du désenchantement (for her book Secondhand Time)
2014 — Officer of the Order of the Arts and Letters (France)
2015 — Nobel Prize in Literature
2017 — Arthur Ross Book Award Bronze Medal given by the Council on Foreign Relations for her book Secondhand Time
2017 — Golden Plate Award from the American Academy of Achievement.
2018 — Belarusian Democratic Republic 100th Jubilee Medal
2020 — Sakharov Prize for Freedom of Thought by the European Parliament (one of the named representatives of the democratic opposition in Belarus)
2021 — Sonning Prize
2021 — Order of Merit of the Federal Republic of Germany (Commander's Cross)
Alexievich is a member of the advisory committee of the Lettre Ulysses Award. She gave the inaugural Anna Politkovskaya Memorial Lecture at the British Library on 9 October 2019. The lecture is an international platform to amplify the voices of women journalists and human rights defenders working in war and conflict zones.
Publications
У войны не женское лицо (U voyny ne zhenskoe litso), Minsk: Mastatskaya litaratura, 1985.
The Unwomanly Face of War, (extracts), from Always a Woman: Stories by Soviet Women Writers, Raduga Publishers, 1987.
War's Unwomanly Face, Moscow: Progress Publishers, 1988, .
The Unwomanly Face of War: An Oral History of Women in World War II, Random House, 2017, .
Последние свидетели: сто недетских колыбельных (Poslednie svideteli: sto nedetskikh kolybelnykh), Moscow: Molodaya Gvardiya, 1985
Last Witnesses: An Oral History of the Children of World War II. Random House, 2019 , translated by Richard Pevear and Larissa Volokhonsky.
Zinky Boys Цинковые мальчики (Tsinkovye malchiki), Moscow: Molodaya Gvardiya, 1991.
(US) Zinky Boys: Soviet Voices from the Afghanistan War. W W Norton 1992 (), translated by Julia and Robin Whitby.
(UK) Boys in Zinc. Penguin Modern Classics 2016 , translated by Andrew Bromfield.
Зачарованные смертью (Zacharovannye Smertyu, Enchanted by Death) (Belarusian: 1993, Russian: 1994)
Чернобыльская молитва (Chernobylskaya molitva), Moscow: Ostozhye, 1997. .
(US) Voices from Chernobyl: The Oral History of a Nuclear Disaster. Dalkey Archive Press 2005 (), translated by Keith Gessen.
(UK) Chernobyl Prayer: A Chronicle of the Future. Penguin Modern Classics 2016 (), translated by Anna Gunin and Arch Tait. New translation of the revised edition published in 2013.
Время секонд хэнд (Vremya sekond khend), Moscow: Vremia, 2013.
Secondhand Time: The Last of the Soviets. Random House 2016 (), translated by Bela Shayevich.
References
External links
Svetlana Alexievich's website - Contains biography, bibliography and excerpts.
Biography at the international literature festival berlin
including the Nobel Lecture 7 December 2015 On the Battle Lost
Interviews
"The Guardian, A Life In..." , Interview by Luke Harding, April 2016
"A Conversation with Svetlana Alexievich", Dalkey Archive Press
Between the public and the private: Svetlana Aleksievich interviews Ales' Adamovich Canadian Slavonic Papers/ Revue Canadienne des Slavistes
Excerpts
Selections from Voices From Chernobyl in The Paris Review, 2015
Articles about Svetlana Alexievich
"The Truth in Many Voices" Timothy Snyder, NYRB, October 2015
"The Memory Keeper" Masha Gessen, The New Yorker, October 2015.
"From Russia with Love" Bookforum, August 2016.
A conspiracy of ignorance and obedience, The Telegraph, 2015
Svetlana Alexievich: Belarusian Language Is Rural And Literary Unripe , Belarus Digest, June 2013
Belarusian Nobel laureate Sviatlana Alieksijevič hit by a smear campaign Belarus Digest, July 2017
Academic articles about Svetlana Alexievich's works
Escrita, biografia e sensibilidade: o discurso da memória soviética de Svetlana Aleksiévitch como um problema historiográfico João Camilo Portal
Mothers, father(s), daughter: Svetlana Aleksievich and The Unwomanly Face of War Angela Brintlinger
"No other proof": Svetlana Aleksievich in the tradition of Soviet war writing Daniel Bush
Mothers, prostitutes, and the collapse of the USSR: the representation of women in Svetlana Aleksievich's Zinky Boys Jeffrey W. Jones
Svetlana Aleksievich's Voices from Chernobyl: between an oral history and a death lament Anna Karpusheva
The polyphonic performance of testimony in Svetlana Aleksievich's Voices from Utopia Johanna Lindbladh
A new literary genre. Trauma and the individual perspective in Svetlana Aleksievich's Chernobyl'skaia molitva Irina Marchesini
Svetlana Aleksievich's changing narrative of the Soviet–Afghan War in Zinky Boys Holly Myers
Other
Lukashenko's comment on Alexievich (1′12″ video, in Russian, no subtitles)
Svetlana Alexievich at Goodreads
Svetlana Alexievich Quotes With Pictures at Rugusavay.com
List of Works
1948 births
Living people
21st-century women writers
Belarusian Nobel laureates
Belarusian people of Ukrainian descent
Belarusian women journalists
Belarusian essayists
Nobel laureates in Literature
Oxfam Novib/PEN Award winners
Russian-language writers
Soviet journalists
Women Nobel laureates
People associated with the Chernobyl disaster
Commanders Crosses of the Order of Merit of the Federal Republic of Germany
Recipients of the Lenin Komsomol Prize
Herder Prize recipients
Prix Médicis essai winners
20th-century Belarusian writers
21st-century Belarusian writers
20th-century Belarusian women writers
21st-century Belarusian women writers
National Book Critics Circle Award winners | Svetlana Alexievich | [
"Technology"
] | 3,590 | [
"Women Nobel laureates",
"Women in science and technology"
] |
10,587,844 | https://en.wikipedia.org/wiki/Oregrounds%20iron | Oregrounds iron was a grade of iron that was regarded as the best grade available in 18th century England. The term was derived from the small Swedish city of Öregrund, the port from which the bar iron was shipped. It was produced using the Walloon process.
Oregrounds iron is the equivalent of the Swedish vallonjärn, which literally translates as Walloon iron. The Swedish name derives from the iron being produced by the Walloon version of the finery forge process, the Walloon process as opposed to the German method, which was more common in Sweden. Actually, the term is more specialised, as all the Swedish Walloon forges made iron from ore ultimately derived from the Dannemora mine. It was made in about 20 forges mainly in Uppland.
Many of the ironworks were founded by Louis de Geer and other Dutch entrepreneurs who set up ironworks in Sweden in the 1610s and 1620s, with blast furnaces and finery forges. Most of the early forgemen were also from Wallonia.
Origins in Wallonia
The technique was developed in Wallonia in present-day Belgium during the Middle Ages. The Walloon method consisted of making pig iron in a blast furnace, followed by refining it in a finery forge. The process was devised in the Liège region, and spread into France and thence from the Pays de Bray to England before the end of the 15th century. Louis de Geer took it to Roslagen in Sweden in the early 17th century, where he employed Walloon ironmakers. Iron made there by this method was known in England as oregrounds iron.
Quality, uses and marketing
Swedish law required bars of iron to have the forge's mark stamped into it for quality control reasons. In Britain, the iron was known by these 'marks', and the quality of each brand was well-known to the buyers in London, Sheffield, Birmingham and elsewhere. It was divided into two grades:
'First oregrounds' came from Österby ('double bullet'), Leufsta (now Lövsta - hoop L), and Åkerby (PL crown). Later Gimo joined them.
'Second oregrounds' came from the other forges, including Forsmark, Harg, Vattholma, and Ullfors.
Its special property was its purity. The manganese content of the Dannemora ore caused impurities, which would otherwise have remained in the iron, to react preferentially with the manganese and to be carried off into the slag. This level of purity meant that the iron was particularly suitable for conversion to steel by being re-carburized using the cementation process. As a result, oregrounds iron was an indispensable raw material for metal manufactures, particularly the Sheffield cutlery industry. Substantial quantities were also (until about 1808) bought for use by the British Navy.
This and other uses absorbed substantially the whole output of the industry. The trade in oregrounds iron was controlled from the 1730s to the 1850s by a cartel of merchants, of whom the longest enduring members were the Sykes family of Hull. Other participants were resident in (or controlling imports through) London and Bristol. These merchants advanced money to Swedish exporting houses, which in turn advanced it to the ironmasters, thus buying up the output of the forges several years in advance.
References
K. C. Barraclough, Steelmaking before Bessemer: I Blister Steel (Metals Society, London, 1985).
K. C. Barraclough, 'Swedish iron and Sheffield steel' History of Technology 12 (1990), 1-39 - originally published in Swedish in A Attman et al., Forsmark och vallonjärnet [Forsmark and Walloon iron] (Sweden 1987)
P. W. King, 'The Cartel in Oregrounds Iron' Journal of Industrial History 6(1) (2003), 25-48.
K-G. Hildebrand, Swedish iron in the seventeenth and eighteenth centuries: export industry before industrialization (Stockholm 1992).
Notes
Metallurgy
Ferrous alloys
Uppland
Goods manufactured in Sweden
Economic history of Sweden
Iron | Oregrounds iron | [
"Chemistry",
"Materials_science",
"Engineering"
] | 864 | [
"Ferrous alloys",
"Metallurgy",
"Materials science",
"Alloys",
"nan"
] |
10,590,990 | https://en.wikipedia.org/wiki/Ecological%20release | Ecological release refers to a population increase or population explosion that occurs when a species is freed from limiting factors in its environment. Sometimes this may occur when a plant or animal species is introduced, for example, to an island or to a new territory or environment other than its native habitat. When this happens, the new arrivals may find themselves suddenly free from the competitors, diseases, or predatory species, etc. in their previous environment, allowing their population numbers to increase beyond their previous limitations. Another common example of ecological release can occur if a disease or a competitor or a keystone species, such as a top predator, is removed from a community or ecosystem. Classical examples of this latter dynamics include population explosions of sea urchins in California's offshore kelp beds, for example, when human hunters began to kill too many sea otters, and/or sudden population explosions of jackrabbits if hunters or ranchers kill too many coyotes.
The foreign species either flourishes into a local population or dies out. Not all released species will become invasive; most released species that don't immediately die out tend to find a small niche in the local ecosystem.
Ecological release also occurs when a species expands its niche within its own habitat or into a new habitat.
Origin
The term ecological release first appeared in the scientific literature in 1972 in the American Zoologist journal discussing the increased diversity of diet and habitat preferences adopted by a sea snail species introduced without competition in the isolated ecosystem of Easter Island. One of the first studies that linked niche shifts to the presence and absence of competitors was by Lack and Southern where habitat broadness of song birds was positively correlated to the absence of a related species.
Common example
Invasive species are an excellent example of successful ecological release because low levels of biodiversity, an abundance of resources, and particular life history traits allow their numbers to increase dramatically. Additionally, there are few predators for these species.
Causes and mechanisms
Cascade effect
When a keystone species, such as a top predator, is removed from a community or ecosystem, an ecological cascade effect can occur through which a series of secondary extinctions take place. Keystone predators are responsible for the control of prey densities, and their removal can result in an increase in one or a number of predators, consumers, or competitors elsewhere in the food web. Several prey or competitor species can consequently suffer a population decline and potentially be extirpated; the result of this would be a decrease in community diversity. Without the keystone species, prey populations can grow indefinitely and will, ultimately, be limited by resources such as food and shelter. Due to these secondary extinctions, a niche is left unfilled: this allows a new species to invade and exploit the resources that are no longer being used by other species.
Human causes ecological release
Ecological release by human means, intentional or unintentional, has had drastic effects on ecosystems worldwide. The most extreme examples of invasive species include cane toads in Australia, kudzu in the Southeast United States, and beavers in Tierra del Fuego. But ecological release can also be more subtle, less drastic and easily overlooked, such as mustangs and dandelions in North America, musk oxen in Svalbard, dromedaries in Australia, or peaches in Georgia.
See also
Mesopredator release hypothesis
Trophic cascade
Fishing down the food web
References
Biogeography
Behavioral ecology
Conservation biology | Ecological release | [
"Biology"
] | 684 | [
"Behavior",
"Biogeography",
"Behavioral ecology",
"Behavioural sciences",
"Ethology",
"Conservation biology"
] |
123,450 | https://en.wikipedia.org/wiki/Philosophy%20of%20physics | In philosophy, the philosophy of physics deals with conceptual and interpretational issues in physics, many of which overlap with research done by certain kinds of theoretical physicists. Historically, philosophers of physics have engaged with questions such as the nature of space, time, matter and the laws that govern their interactions, as well as the epistemological and ontological basis of the theories used by practicing physicists. The discipline draws upon insights from various areas of philosophy, including metaphysics, epistemology, and philosophy of science, while also engaging with the latest developments in theoretical and experimental physics.
Contemporary work focuses on issues at the foundations of the three pillars of modern physics:
Quantum mechanics: Interpretations of quantum theory, including the nature of quantum states, the measurement problem, and the role of observers. Implications of entanglement, nonlocality, and the quantum-classical relationship are also explored.
Relativity: Conceptual foundations of special and general relativity, including the nature of spacetime, simultaneity, causality, and determinism. Compatibility with quantum mechanics, gravitational singularities, and philosophical implications of cosmology are also investigated.
Statistical mechanics: Relationship between microscopic and macroscopic descriptions, interpretation of probability, origin of irreversibility and the arrow of time. Foundations of thermodynamics, role of information theory in understanding entropy, and implications for explanation and reduction in physics.
Other areas of focus include the nature of physical laws, symmetries, and conservation principles; the role of mathematics; and philosophical implications of emerging fields like quantum gravity, quantum information, and complex systems. Philosophers of physics have argued that conceptual analysis clarifies foundations, interprets implications, and guides theory development in physics.
Philosophy of space and time
The existence and nature of space and time (or space-time) are central topics in the philosophy of physics. Issues include (1) whether space and time are fundamental or emergent, and (2) how space and time are operationally different from one another.
Time
In classical mechanics, time is taken to be a fundamental quantity (that is, a quantity which cannot be defined in terms of other quantities). However, certain theories such as loop quantum gravity claim that spacetime is emergent. As Carlo Rovelli, one of the founders of loop quantum gravity, has said: "No more fields on spacetime: just fields on fields". Time is defined via measurement—by its standard time interval. Currently, the standard time interval (called "conventional second", or simply "second") is defined as 9,192,631,770 oscillations of a hyperfine transition in the caesium-133 atom (ISO 31-1). What time is and how it works follows from the above definition. Time then can be combined mathematically with the fundamental quantities of space and mass to define concepts such as velocity, momentum, energy, and fields.
Both Isaac Newton and Galileo Galilei, as well as most people up until the 20th century, thought that time was the same for everyone everywhere. The modern conception of time is based on Albert Einstein's theory of relativity and Hermann Minkowski's spacetime, in which rates of time run differently in different inertial frames of reference, and space and time are merged into spacetime. Einstein's general relativity as well as the redshift of the light from receding distant galaxies indicate that the entire Universe and possibly space-time itself began about 13.8 billion years ago in the Big Bang. Einstein's theory of special relativity mostly (though not universally) made theories of time where there is something metaphysically special about the present seem much less plausible, as the reference-frame-dependence of time seems to not allow the idea of a privileged present moment.
Space
Space is one of the few fundamental quantities in physics, meaning that it cannot be defined via other quantities because there is nothing more fundamental known at present. Thus, similar to the definition of other fundamental quantities (like time and mass), space is defined via measurement. Currently, the standard space interval, called a standard metre or simply metre, is defined as the distance traveled by light in a vacuum during a time interval of 1/299792458 of a second (exact).
In classical physics, space is a three-dimensional Euclidean space where any position can be described using three coordinates and parameterised by time. Special and general relativity use four-dimensional spacetime rather than three-dimensional space; and currently there are many speculative theories which use more than four spatial dimensions.
Philosophy of quantum mechanics
Quantum mechanics is a large focus of contemporary philosophy of physics, specifically concerning the correct interpretation of quantum mechanics. Very broadly, much of the philosophical work that is done in quantum theory is trying to make sense of superposition states: the property that particles seem to not just be in one determinate position at one time, but are somewhere 'here', and also 'there' at the same time. Such a radical view turns many common sense metaphysical ideas on their head. Much of contemporary philosophy of quantum mechanics aims to make sense of what the very empirically successful formalism of quantum mechanics tells us about the physical world.
Uncertainty principle
The uncertainty principle is a mathematical relation asserting an upper limit to the accuracy of the simultaneous measurement of any pair of conjugate variables, e.g. position and momentum. In the formalism of operator notation, this limit is the evaluation of the commutator of the variables' corresponding operators.
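In symbols, the canonical commutator and the resulting position–momentum bound (a standard statement, given here for concreteness) read:

```latex
% Canonical commutation relation and the position--momentum uncertainty bound.
\[
  [\hat{x}, \hat{p}] = i\hbar
  \quad\Longrightarrow\quad
  \Delta x \, \Delta p \;\ge\; \frac{\hbar}{2},
\]
% and, for general observables $A$ and $B$ (Robertson's inequality),
\[
  \Delta A \, \Delta B \;\ge\; \tfrac{1}{2}\,\bigl|\langle [\hat{A}, \hat{B}] \rangle\bigr| .
\]
```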
The uncertainty principle arose as an answer to the question: How does one measure the location of an electron around a nucleus if an electron is a wave? When quantum mechanics was developed, it was seen to be a relation between the classical and quantum descriptions of a system using wave mechanics.
"Locality" and hidden variables
Bell's theorem is a term encompassing a number of closely related results in physics, all of which determine that quantum mechanics is incompatible with local hidden-variable theories given some basic assumptions about the nature of measurement. "Local" here refers to the principle of locality, the idea that a particle can only be influenced by its immediate surroundings, and that interactions mediated by physical fields cannot propagate faster than the speed of light. "Hidden variables" are putative properties of quantum particles that are not included in the theory but nevertheless affect the outcome of experiments. In the words of physicist John Stewart Bell, for whom this family of results is named, "If [a hidden-variable theory] is local it will not agree with quantum mechanics, and if it agrees with quantum mechanics it will not be local."
The term is broadly applied to a number of different derivations, the first of which was introduced by Bell in a 1964 paper titled "On the Einstein Podolsky Rosen Paradox". Bell's paper was a response to a 1935 thought experiment that Albert Einstein, Boris Podolsky and Nathan Rosen proposed, arguing that quantum physics is an "incomplete" theory. By 1935, it was already recognized that the predictions of quantum physics are probabilistic. Einstein, Podolsky and Rosen presented a scenario that involves preparing a pair of particles such that the quantum state of the pair is entangled, and then separating the particles to an arbitrarily large distance. The experimenter has a choice of possible measurements that can be performed on one of the particles. When they choose a measurement and obtain a result, the quantum state of the other particle apparently collapses instantaneously into a new state depending upon that result, no matter how far away the other particle is. This suggests that either the measurement of the first particle somehow also influenced the second particle faster than the speed of light, or that the entangled particles had some unmeasured property which pre-determined their final quantum states before they were separated. Therefore, assuming locality, quantum mechanics must be incomplete, as it cannot give a complete description of the particle's true physical characteristics. In other words, quantum particles, like electrons and photons, must carry some property or attributes not included in quantum theory, and the uncertainties in quantum theory's predictions would then be due to ignorance or unknowability of these properties, later termed "hidden variables".
Bell carried the analysis of quantum entanglement much further. He deduced that if measurements are performed independently on the two separated particles of an entangled pair, then the assumption that the outcomes depend upon hidden variables within each half implies a mathematical constraint on how the outcomes on the two measurements are correlated. This constraint would later be named the Bell inequality. Bell then showed that quantum physics predicts correlations that violate this inequality. Consequently, the only way that hidden variables could explain the predictions of quantum physics is if they are "nonlocal", which is to say that somehow the two particles can carry non-classical correlations no matter how widely they ever become separated.
Multiple variations on Bell's theorem were put forward in the following years, introducing other closely related conditions generally known as Bell (or "Bell-type") inequalities. The first rudimentary experiment designed to test Bell's theorem was performed in 1972 by John Clauser and Stuart Freedman. More advanced experiments, known collectively as Bell tests, have been performed many times since. To date, Bell tests have consistently found that physical systems obey quantum mechanics and violate Bell inequalities; which is to say that the results of these experiments are incompatible with any local hidden variable theory.
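As a concrete illustration, the following sketch (using the standard textbook analyzer angles, which are an assumption of this example and not taken from the text) checks numerically that the quantum correlations of a spin singlet violate the CHSH form of a Bell inequality, whose local-hidden-variable bound is 2:

```python
import math

def E(a: float, b: float) -> float:
    """Quantum correlation of spin measurements along angles a and b on a singlet pair."""
    return -math.cos(a - b)

# Standard CHSH angle choice (radians): a = 0, a' = pi/2, b = pi/4, b' = 3*pi/4.
a, a_prime = 0.0, math.pi / 2
b, b_prime = math.pi / 4, 3 * math.pi / 4

S = E(a, b) - E(a, b_prime) + E(a_prime, b) + E(a_prime, b_prime)
print(abs(S))   # ~2.828 = 2*sqrt(2), exceeding the local hidden-variable bound of 2
```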
The exact nature of the assumptions required to prove a Bell-type constraint on correlations has been debated by physicists and by philosophers. While the significance of Bell's theorem is not in doubt, its full implications for the interpretation of quantum mechanics remain unresolved.
Interpretations of quantum mechanics
In March 1927, working in Niels Bohr's institute, Werner Heisenberg formulated the principle of uncertainty thereby laying the foundation of what became known as the Copenhagen interpretation of quantum mechanics. Heisenberg had been studying the papers of Paul Dirac and Pascual Jordan. He discovered a problem with measurement of basic variables in the equations. His analysis showed that uncertainties, or imprecisions, always turned up if one tried to measure the position and the momentum of a particle at the same time. Heisenberg concluded that these uncertainties or imprecisions in the measurements were not the fault of the experimenter, but fundamental in nature and are inherent mathematical properties of operators in quantum mechanics arising from definitions of these operators.
The Copenhagen interpretation is somewhat loosely defined, as many physicists and philosophers of physics have advanced similar but not identical views of quantum mechanics. It is principally associated with Heisenberg and Bohr, despite their philosophical differences. Features common to Copenhagen-type interpretations include the idea that quantum mechanics is intrinsically indeterministic, with probabilities calculated using the Born rule, and the principle of complementarity, which states that objects have certain pairs of complementary properties that cannot all be observed or measured simultaneously. Moreover, the act of "observing" or "measuring" an object is irreversible, and no truth can be attributed to an object, except according to the results of its measurement. Copenhagen-type interpretations hold that quantum descriptions are objective, in that they are independent of any arbitrary factors in the physicist's mind.
The many-worlds interpretation of quantum mechanics by Hugh Everett III claims that the wave-function of a quantum system is telling us claims about the reality of that physical system. It denies wavefunction collapse, and claims that superposition states should be interpreted literally as describing the reality of many-worlds where objects are located, and not simply indicating the indeterminacy of those variables. This is sometimes argued as a corollary of scientific realism, which states that scientific theories aim to give us literally true descriptions of the world.
One issue for the Everett interpretation is the role that probability plays on this account. The Everettian account is completely deterministic, whereas probability seems to play an ineliminable role in quantum mechanics. Contemporary Everettians have argued that one can get an account of probability that follows the Born rule through certain decision-theoretic proofs, but there is as yet no consensus about whether any of these proofs are successful.
Physicist Roland Omnès noted that it is impossible to experimentally differentiate between Everett's view, which says that as the wave-function decoheres into distinct worlds, each of which exists equally, and the more traditional view that says that a decoherent wave-function leaves only one unique real result. Hence, the dispute between the two views represents a great "chasm". "Every characteristic of reality has reappeared in its reconstruction by our theoretical model; every feature except one: the uniqueness of facts."
Philosophy of thermal and statistical physics
The philosophy of thermal and statistical physics is concerned with the foundational issues and conceptual implications of thermodynamics and statistical mechanics. These branches of physics deal with the macroscopic behavior of systems comprising a large number of microscopic entities, such as particles, and the nature of laws that emerge from these systems like irreversibility and entropy. Interest of philosophers in statistical mechanics first arose from the observation of an apparent conflict between the time-reversal symmetry of fundamental physical laws and the irreversibility observed in thermodynamic processes, known as the arrow of time problem. Philosophers have sought to understand how the asymmetric behavior of macroscopic systems, such as the tendency of heat to flow from hot to cold bodies, can be reconciled with the time-symmetric laws governing the motion of individual particles.
Another key issue is the interpretation of probability in statistical mechanics, which is primarily concerned with the question of whether probabilities in statistical mechanics are epistemic, reflecting our lack of knowledge about the precise microstate of a system, or ontic, representing an objective feature of the physical world. The epistemic interpretation, also known as the subjective or Bayesian view, holds that probabilities in statistical mechanics are a measure of our ignorance about the exact state of a system. According to this view, we resort to probabilistic descriptions only due to the practical impossibility of knowing the precise properties of all its micro-constituents, like the positions and momenta of particles. As such, the probabilities are not objective features of the world but rather arise from our ignorance. In contrast, the ontic interpretation, also called the objective or frequentist view, asserts that probabilities in statistical mechanics are real, physical properties of the system itself. Proponents of this view argue that the probabilistic nature of statistical mechanics is not merely a reflection of our ignorance but an intrinsic feature of the physical world, and that even if we had complete knowledge of the microstate of a system, the macroscopic behavior would still be best described by probabilistic laws.
History
Aristotelian physics
Aristotelian physics viewed the universe as a sphere with a center. Matter, composed of the classical elements (earth, water, air, and fire), sought to move down towards the center of the universe, i.e. the center of the Earth, or up, away from it. Things in the aether, such as the Moon, the Sun, the planets, or the stars, circled the center of the universe. Movement is defined as change in place, i.e. space.
Newtonian physics
The implicit axioms of Aristotelian physics with respect to movement of matter in space were superseded in Newtonian physics by Newton's first law of motion: every body perseveres in its state of rest, or of uniform motion in a straight line, unless it is compelled to change that state by forces impressed upon it.
"Every body" includes the Moon and an apple, and all types of matter: air as well as water, stones, or even a flame. Nothing has a natural or inherent motion. Absolute space is three-dimensional Euclidean space, infinite and without a center; being "at rest" means being at the same place in absolute space over time. The topology and affine structure of space must permit movement in a straight line at a uniform velocity; thus both space and time must have definite, stable dimensions.
Leibniz
Gottfried Wilhelm Leibniz, 1646–1716, was a contemporary of Newton. He contributed a fair amount to the statics and dynamics emerging around him, often disagreeing with Descartes and Newton. He devised a new theory of motion (dynamics) based on kinetic energy and potential energy, which posited space as relative, whereas Newton was thoroughly convinced that space was absolute. An important example of Leibniz's mature physical thinking is his Specimen Dynamicum of 1695.
Until the discovery of subatomic particles and the quantum mechanics governing them, many of Leibniz's speculative ideas about aspects of nature not reducible to statics and dynamics made little sense.
He anticipated Albert Einstein by arguing, against Newton, that space, time and motion are relative, not absolute: "As for my own opinion, I have said more than once, that I hold space to be something merely relative, as time is, that I hold it to be an order of coexistences, as time is an order of successions."
See also
Anthropic principle
Arrow of time
Causality (physics)
Causal closure
Determinism
Digital physics
Mind-body dualism
Functional decomposition
Holism
Instrumentalism
Laws of thermodynamics
Modal realism
Monism
Physical ontology
Naturalism:
Metaphysical
Methodological
Operationalism
Phenomenology (particle physics)
Philosophy of:
Classical physics
Space and time
Thermodynamics and statistical mechanics
Motion
Physical
Bodies
Law
System
Physicalism
Physics
Aristotle
Physics envy
Quantum theory:
Bohr-Einstein debates
Einstein's thought experiments
EPR paradox
Interpretations of
Metaphysics
Mysticism
Reductionism
Relativity:
General
Special
Space
Absolute theory
Container space
Free space
Relational space
Relational theory
Spacetime
Supervenience
Symmetry in physics
Theophysics
Time in physics
References
Further reading
David Albert, 1994. Quantum Mechanics and Experience. Harvard Univ. Press.
John D. Barrow and Frank J. Tipler, 1986. The Cosmological Anthropic Principle. Oxford Univ. Press.
Beisbart, C. and S. Hartmann, eds., 2011. "Probabilities in Physics". Oxford Univ. Press.
John S. Bell, 2004 (1987), Speakable and Unspeakable in Quantum Mechanics. Cambridge Univ. Press.
David Bohm, 1980. Wholeness and the Implicate Order. Routledge.
Nick Bostrom, 2002. Anthropic Bias: Observation Selection Effects in Science and Philosophy. Routledge.
Thomas Brody, 1993, Ed. by Luis de la Peña and Peter E. Hodgson The Philosophy Behind Physics Springer
Harvey Brown, 2005. Physical Relativity. Space-time structure from a dynamical perspective. Oxford Univ. Press.
Butterfield, J., and John Earman, eds., 2007. Philosophy of Physics, Parts A and B. Elsevier.
Craig Callender and Nick Huggett, 2001. Physics Meets Philosophy at the Planck Scale. Cambridge Univ. Press.
David Deutsch, 1997. The Fabric of Reality. London: The Penguin Press.
Bernard d'Espagnat, 1989. Reality and the Physicist. Cambridge Univ. Press. Trans. of Une incertaine réalité; le monde quantique, la connaissance et la durée.
--------, 1995. Veiled Reality. Addison-Wesley.
--------, 2006. On Physics and Philosophy. Princeton Univ. Press.
Roland Omnès, 1994. The Interpretation of Quantum Mechanics. Princeton Univ. Press.
--------, 1999. Quantum Philosophy. Princeton Univ. Press.
Huw Price, 1996. Time's Arrow and Archimedes's Point. Oxford Univ. Press.
Lawrence Sklar, 1992. Philosophy of Physics. Westview Press. ,
Victor Stenger, 2000. Timeless Reality. Prometheus Books.
Carl Friedrich von Weizsäcker, 1980. The Unity of Nature. Farrar Straus & Giroux.
Werner Heisenberg, 1971. Physics and Beyond: Encounters and Conversations. Harper & Row (World Perspectives series), 1971.
William Berkson, 1974. Fields of Force. Routledge and Kegan Paul, London.
Encyclopædia Britannica, Philosophy of Physics, David Z. Albert
External links
Stanford Encyclopedia of Philosophy:
"Absolute and Relational Theories of Space and Motion"—Nick Huggett and Carl Hoefer
"Being and Becoming in Modern Physics"—Steven Savitt
"Boltzmann's Work in Statistical Physics"—Jos Uffink
"Conventionality of Simultaneity"—Allen Janis
"Early Philosophical Interpretations of General Relativity"—Thomas A. Ryckman
"Everett's Relative-State Formulation of Quantum Mechanics"—Jeffrey A. Barrett
"Experiments in Physics"—Allan Franklin
"Holism and Nonseparability in Physics"—Richard Healey
"Intertheory Relations in Physics"—Robert Batterman
"Naturalism"—David Papineau
"Philosophy of Statistical Mechanics"—Lawrence Sklar
"Physicalism"—Daniel Sojkal
"Quantum Mechanics"—Jenann Ismael
"Reichenbach's Common Cause Principle"—Frank Artzenius
"Structural Realism"—James Ladyman
"Structuralism in Physics"—Heinz-Juergen Schmidt
"Supertasks"—JB Manchak and Bryan Roberts
"Symmetry and Symmetry Breaking"—Katherine Brading and Elena Castellani
"Thermodynamic Asymmetry in Time"—Craig Callender
"Time"—by Ned Markosian
"Time Machines" —John Earman, Chris Wüthrich, and JB Manchak
"Uncertainty principle"—Jan Hilgevoord and Jos Uffink
"The Unity of Science"—Jordi Cat
Physics
Applied and interdisciplinary physics | Philosophy of physics | [
"Physics"
] | 4,467 | [
"Philosophy of physics",
"Applied and interdisciplinary physics"
] |
19,636,775 | https://en.wikipedia.org/wiki/Maximum%20cut | In a graph, a maximum cut is a cut whose size is at least the size of any other cut. That is, it is a partition of the graph's vertices into two complementary sets S and T, such that the number of edges between S and T is as large as possible. Finding such a cut is known as the max-cut problem.
The problem can be stated simply as follows. One wants a subset S of the vertex set such that the number of edges between S and the complementary subset is as large as possible. Equivalently, one wants a bipartite subgraph of the graph with as many edges as possible.
There is a more general version of the problem called weighted max-cut, where each edge is associated with a real number, its weight, and the objective is to maximize the total weight of the edges between S and its complement rather than the number of the edges. The weighted max-cut problem allowing both positive and negative weights can be trivially transformed into a weighted minimum cut problem by flipping the sign in all weights.
Lower bounds
Edwards obtained the following two lower bounds for Max-Cut on a graph $G$ with $n$ vertices and $m$ edges (in (a) $G$ is arbitrary, but in (b) it is connected):
(a) $\frac{m}{2} + \frac{\sqrt{8m+1}-1}{8}$
(b) $\frac{m}{2} + \frac{n-1}{4}$
Bound (b) is often called the Edwards-Erdős bound as Erdős conjectured it. Edwards proved the Edwards-Erdős bound using the probabilistic method; Crowston et al. proved the bound using linear algebra and analysis of pseudo-Boolean functions.
The proof of Crowston et al. allows us to extend the Edwards-Erdős bound to the Balanced Subgraph Problem (BSP) on signed graphs, i.e. graphs where each edge is assigned + or –. For a partition of the vertex set into subsets $V_1$ and $V_2$, an edge is balanced if either it is positive and both its endpoints are in the same subset, or it is negative and its endpoints are in different subsets. BSP aims at finding a partition with the maximum number of balanced edges. The Edwards-Erdős bound gives a lower bound on this maximum for every connected signed graph.
Bound (a) was improved for special classes of graphs, such as triangle-free graphs and graphs of given maximum degree.
Poljak and Turzik extended the Edwards-Erdős bound to weighted Max-Cut:
$\frac{w(G)}{2} + \frac{w(T_{\min})}{4},$ where $w(G)$ and $w(T_{\min})$ are the weights of $G$ and its minimum-weight spanning tree $T_{\min}$. Recently, Gutin and Yeo obtained a number of lower bounds for weighted Max-Cut extending the Poljak-Turzik bound for arbitrary weighted graphs and bounds for special classes of weighted graphs.
Computational complexity
The following decision problem related to maximum cuts has been studied widely in theoretical computer science:
Given a graph G and an integer k, determine whether there is a cut of size at least k in G.
This problem is known to be NP-complete. It is easy to see that the problem is in NP: a yes answer is easy to prove by presenting a large enough cut. The NP-completeness of the problem can be shown, for example, by a reduction from maximum 2-satisfiability (a restriction of the maximum satisfiability problem). The weighted version of the decision problem was one of Karp's 21 NP-complete problems; Karp showed the NP-completeness by a reduction from the partition problem.
The canonical optimization variant of the above decision problem is usually known as the Maximum-Cut Problem or Max-Cut and is defined as:
Given a graph G, find a maximum cut.
The optimization variant is known to be NP-Hard.
The opposite problem, that of finding a minimum cut, is known to be efficiently solvable via the Ford–Fulkerson algorithm.
Algorithms
Polynomial-time algorithms
As the Max-Cut Problem is NP-hard, no polynomial-time algorithms for Max-Cut in general graphs are known.
Planar graphs
However, in planar graphs, the Maximum-Cut Problem is dual to the route inspection problem (the problem of finding a shortest tour that visits each edge of a graph at least once), in the sense that the edges that do not belong to a maximum cut-set of a graph G are the duals of the edges that are doubled in an optimal inspection tour of the dual graph of G. The optimal inspection tour forms a self-intersecting curve that separates the plane into two subsets, the subset of points for which the winding number of the curve is even and the subset for which the winding number is odd; these two subsets form a cut that includes all of the edges whose duals appear an odd number of times in the tour. The route inspection problem may be solved in polynomial time, and this duality allows the maximum cut problem to also be solved in polynomial time for planar graphs. The Maximum-Bisection problem is known however to be NP-hard.
Approximation algorithms
The Max-Cut Problem is APX-hard, meaning that there is no polynomial-time approximation scheme (PTAS) that comes arbitrarily close to the optimal solution, unless P = NP. Thus, every known polynomial-time approximation algorithm achieves an approximation ratio strictly less than one.
There is a simple randomized 0.5-approximation algorithm: for each vertex flip a coin to decide to which half of the partition to assign it. In expectation, half of the edges are cut edges. This algorithm can be derandomized with the method of conditional probabilities; therefore there is a simple deterministic polynomial-time 0.5-approximation algorithm as well. One such algorithm starts with an arbitrary partition of the vertices of the given graph and repeatedly moves one vertex at a time from one side of the partition to the other, improving the solution at each step, until no more improvements of this type can be made. The number of iterations is at most $m$ because the algorithm improves the cut by at least one edge at each step. When the algorithm terminates, at least half of the edges incident to every vertex belong to the cut, for otherwise moving the vertex would improve the cut. Therefore, the cut includes at least $m/2$ edges.
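The local-search procedure just described is straightforward to implement. The following is a minimal sketch in Python; the graph representation and the function name are illustrative choices, not taken from any particular library:

```python
def local_search_max_cut(n, edges):
    """Deterministic 0.5-approximation for Max-Cut by single-vertex moves.

    n     -- number of vertices, labeled 0..n-1
    edges -- list of (u, v) pairs
    Returns a set S; the cut is (S, complement of S).
    """
    side = [0] * n                      # arbitrary starting partition
    adj = [[] for _ in range(n)]
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)

    improved = True
    while improved:
        improved = False
        for v in range(n):
            # Count edges at v that are cut vs. uncut under the current partition.
            cut = sum(1 for u in adj[v] if side[u] != side[v])
            uncut = len(adj[v]) - cut
            if uncut > cut:             # moving v strictly increases the cut size
                side[v] ^= 1
                improved = True
    return {v for v in range(n) if side[v] == 1}

# Example: a 4-cycle; here the procedure happens to find the maximum cut (all 4 edges).
S = local_search_max_cut(4, [(0, 1), (1, 2), (2, 3), (3, 0)])
```

Each improving move increases the number of cut edges by at least one, which is why the loop terminates after at most $m$ improvements.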
The polynomial-time approximation algorithm for Max-Cut with the best known approximation ratio is a method by Goemans and Williamson using semidefinite programming and randomized rounding that achieves an approximation ratio $\alpha \approx 0.878$, where $\alpha = \min_{0 \le \theta \le \pi} \frac{2}{\pi}\,\frac{\theta}{1-\cos\theta}.$
If the unique games conjecture is true, this is the best possible approximation ratio for maximum cut. Without such unproven assumptions, it has been proven to be NP-hard to approximate the max-cut value with an approximation ratio better than $\tfrac{16}{17} \approx 0.941$.
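A sketch of the Goemans–Williamson semidefinite relaxation and random-hyperplane rounding described above is shown below. It assumes the NumPy and cvxpy packages (with any SDP-capable solver installed); the function name and the eigendecomposition-based factorization are illustrative choices, not part of a standard library:

```python
import numpy as np
import cvxpy as cp

def goemans_williamson(W, trials=100, seed=0):
    """Approximate Max-Cut on a symmetric weight matrix W (zero diagonal)."""
    n = W.shape[0]
    X = cp.Variable((n, n), PSD=True)                 # X_ij ~ v_i . v_j
    objective = cp.Maximize(cp.sum(cp.multiply(W, 1 - X)) / 4)
    cp.Problem(objective, [cp.diag(X) == 1]).solve()

    # Factor X = V V^T so that row i of V is the unit vector for vertex i.
    vals, vecs = np.linalg.eigh(X.value)
    V = vecs * np.sqrt(np.clip(vals, 0, None))

    rng = np.random.default_rng(seed)
    best_cut, best_signs = -np.inf, None
    for _ in range(trials):
        r = rng.normal(size=n)                        # random hyperplane normal
        signs = np.where(V @ r >= 0, 1, -1)
        cut = np.sum(W * (1 - np.outer(signs, signs))) / 4
        if cut > best_cut:
            best_cut, best_signs = cut, signs
    return best_cut, best_signs
```

The cut value is computed as $\sum_{i<j} w_{ij}(1 - s_i s_j)/2$, written here as a sum over the full symmetric matrix divided by 4.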
There is an extended analysis of 10 heuristics for this problem, including an open-source implementation.
Parameterized algorithms and kernelization
While it is trivial to prove that the problem of finding a cut of size at least (the parameter) k is fixed-parameter tractable (FPT), it is much harder to show fixed-parameter tractability for the problem of deciding whether a graph G has a cut of size at least the Edwards-Erdős lower bound (see Lower bounds above) plus (the parameter) k. Crowston et al. proved that this problem is fixed-parameter tractable and admits a polynomial-size kernel. Crowston et al. extended the fixed-parameter tractability result to the Balanced Subgraph Problem (BSP, see Lower bounds above) and improved the kernel size (the improvement holds also for BSP). Etscheid and Mnich further improved both the fixed-parameter tractability result for BSP and the kernel size, measured in the number of vertices.
Applications
Machine learning
Treating its nodes as features and its edges as distances, the max cut algorithm divides a graph into two well-separated subsets. In other words, it can be naturally applied to perform binary classification. Compared to more common classification algorithms, it does not require a feature space, only the distances between elements.
Theoretical physics
In statistical physics and disordered systems, the Max Cut problem is equivalent to minimizing the Hamiltonian of a spin glass model, most simply the Ising model. For the Ising model on a graph G with only nearest-neighbor interactions, the Hamiltonian is
$H[s] = -\sum_{ij \in E(G)} J_{ij}\, s_i s_j.$
Here each vertex i of the graph is a spin site that can take a spin value $s_i \in \{-1, +1\}$. A spin configuration partitions $V(G)$ into two sets, those with spin up $V^+$ and those with spin down $V^-$. We denote by $\delta(V^+)$ the set of edges that connect the two sets. We can then rewrite the Hamiltonian as
$H[s] = -\sum_{ij \in E(G)} J_{ij} + 2\sum_{ij \in \delta(V^+)} J_{ij}.$
Minimizing this energy is equivalent to the min-cut problem with edge weights $J_{ij}$ or, by setting the graph weights as $w_{ij} = -J_{ij}$, the max-cut problem.
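The correspondence between the Ising energy and a cut can be checked numerically. The short sketch below (variable names are illustrative) evaluates the Hamiltonian written above for a spin configuration and the weight of the corresponding cut:

```python
def ising_energy(J, spins):
    """H[s] = -sum_{(i,j) in E} J_ij * s_i * s_j for an edge-weight dict J."""
    return -sum(Jij * spins[i] * spins[j] for (i, j), Jij in J.items())

def cut_weight(J, spins, sign=-1.0):
    """Weight of the cut induced by the spins, with weights w_ij = sign * J_ij."""
    return sum(sign * Jij for (i, j), Jij in J.items() if spins[i] != spins[j])

# Example: a triangle with antiferromagnetic couplings (J < 0).
J = {(0, 1): -1.0, (1, 2): -1.0, (0, 2): -1.0}
spins = {0: +1, 1: -1, 2: +1}
# Up to the configuration-independent constant -sum(J_ij), minimizing the
# energy maximizes the cut weight computed with w_ij = -J_ij.
print(ising_energy(J, spins), cut_weight(J, spins))
```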
Circuit design
The max cut problem has applications in VLSI design.
See also
Minimum cut
Minimum k-cut
Odd cycle transversal, equivalent to asking for the largest bipartite induced subgraph
Unfriendly partition, a related concept for infinite graphs
Notes
References
Maximum cut (optimisation version) is the problem ND14 in Appendix B (page 399).
Maximum cut (decision version) is the problem ND16 in Appendix A2.2.
Maximum bipartite subgraph (decision version) is the problem GT25 in Appendix A1.2.
External links
Pierluigi Crescenzi, Viggo Kann, Magnús Halldórsson, Marek Karpinski, Gerhard Woeginger (2000), "Maximum Cut", in "A compendium of NP optimization problems".
Andrea Casini, Nicola Rebagliati (2012), "A Python library for solving Max Cut"
Graph theory objects
Combinatorial optimization
NP-complete problems
Computational problems in graph theory | Maximum cut | [
"Mathematics"
] | 1,966 | [
"Computational problems in graph theory",
"Graph theory objects",
"Computational mathematics",
"Graph theory",
"Computational problems",
"Mathematical relations",
"Mathematical problems",
"NP-complete problems"
] |
19,638,111 | https://en.wikipedia.org/wiki/Next%20Generation%20Supersonic%20Transport | The Next Generation Supersonic Transport is a supersonic transport (SST) being developed by the Japanese space agency JAXA. Compared to Concorde, this new design is intended to carry three times as many passengers and fly at roughly the same speed (Mach 2). It also has twice the range. The goal is to achieve a ticket price comparable to that of subsonic business class. JAXA had expected to launch the plane by 2015. An 11.5-meter prototype was tested on October 10, 2005.
One of the most crucial factors in the commercial viability of a supersonic transport is the strength of the sonic boom it generates. The boom created by Concorde was powerful enough to prevent the aircraft from flying supersonically over land, which eliminated many possible passenger routes and contributed to the cancellation of Concorde's American rival, the Boeing 2707. Since the 1960s, a number of techniques have been developed that may reduce the effect (see the sonic boom article). On May 9, 2008, JAXA announced it would collaborate with NASA to conduct joint research on sonic boom modeling.
JAXA is also researching hypersonic transport (Mach 5.0+), with the goal of making such an aircraft cost-competitive with current commercial aircraft.
See also
Supersonic transport
Sonic boom
Notes
References
Supersonic transports | Next Generation Supersonic Transport | [
"Physics"
] | 269 | [
"Physical systems",
"Transport",
"Supersonic transports"
] |
19,638,350 | https://en.wikipedia.org/wiki/IDEF5 | IDEF5 (Integrated Definition for Ontology Description Capture Method) is a software engineering method to develop and maintain usable, accurate domain ontologies. This standard is part of the IDEF family of modeling languages in the field of software engineering.
Overview
In the field of computer science, ontologies are used to capture the concepts and objects in a specific domain, along with associated relationships and meanings. In addition, ontology capture helps coordinate projects by standardizing terminology and creates opportunities for information reuse. The IDEF5 Ontology Capture Method has been developed to reliably construct ontologies in a way that closely reflects human understanding of the specific domain.
In the IDEF5 method, an ontology is constructed by capturing the content of certain assertions about real-world objects, their properties, and their interrelationships and representing that content in an intuitive and natural form. The IDEF5 method has three main components:
A graphical language to support conceptual ontology analysis
A structured text language for detailed ontology characterization, and
A systematic procedure that provides guidelines for effective ontology capture.
Topics
Ontology
In IDEF5 the meaning of the term ontology is characterized to include a catalog of terms used in a domain, the rules governing how those terms can be combined to make valid statements about situations in that domain, and the “sanctioned inferences” that can be made when such statements are used in that domain. In every domain, there are phenomena that the humans in that domain discriminate as (conceptual or physical) objects, associations, and situations. Through various language mechanisms, one associates definite descriptors (e.g., names, noun phrases, etc.) to those phenomena.
Central concepts of ontology
The construction of ontologies for human engineered systems is the focus of the IDEF5. In the context of such systems, the nature of ontological knowledge involves several modifications to the more traditional conception. The first of these modifications has to do with the notion of a kind. Historically, a kind is an objective category of objects that are bound together by a common nature, a set of properties shared by all and only the members of the kind.
While there is an attempt to divide the world at its joints in the construction of enterprise ontologies, those divisions are not determined by the natures of things in the enterprise so much as the roles those things are to play in the enterprise from some perspective or other. Because those roles might be filled in any of a number of ways by objects that differ in various ways, and because legitimate perspectives on a domain can vary widely, it is too restrictive to require that the instances of each identifiable kind in an enterprise share a common nature, let alone that the properties constituting that nature be essential to their bearers. Consequently, enterprise ontologies require a more flexible notion of kind.
Ontology development process
Ontology development requires extensive iterations, discussions, reviews, and introspection. Knowledge extraction is usually a discovery process and requires considerable introspection. It requires a process that incorporates both significant expert involvement as well as the dynamics of a group effort. Given the open-ended nature of ontological analyses, it is not prudent to adopt a “cookbook” approach to ontology development. In brief, the IDEF5 ontology development process consists of the following five activities:
Organizing and Scoping: This activity involves establishing the purpose, viewpoint, and context for the ontology development project and assigning roles to the team members.
Data Collection: This activity involves acquiring the raw data needed for ontology development.
Data Analysis: This activity involves analyzing the data to facilitate ontology extraction.
Initial Ontology Development: This activity involves developing a preliminary ontology from the acquired data.
Ontology Refinement and Validation: This activity involves refining and validating the ontology to complete the development process.
Although the above activities are listed sequentially, there is a significant amount of overlap and iteration between the activities.
Ontological analysis
Ontological analysis is accomplished by examining the vocabulary that is used to discuss the characteristic objects and processes that compose the domain, developing rigorous definitions of the basic terms in that vocabulary, and characterizing the logical connections among those terms. The product of this analysis, an ontology, is a domain vocabulary complete with a set of precise definitions, or axioms, that constrain the meanings of the terms sufficiently to enable consistent interpretation of the data that use that vocabulary.
IDEF5 Building blocks
Definitions
Some of the key terms in IDEF5 and the basic IDEF5 Schematic Language symbols are defined below; a minimal data-structure sketch follows the list.
Kind Informally, a group of individuals that share some set of distinguished characteristics. More formally, kinds are properties typically expressed by common nouns such as ‘employee’, ‘machine’, and ‘lathe’.
Individual The most logically basic kind of real world object. Prominent examples include human persons, concrete physical objects, and certain abstract objects such as programs. Unlike objects of higher logical orders such as properties and relations, individuals essentially are not multiply instantiable. Individuals are also known as first-order objects.
Referent A construct in the IDEF5 elaboration language used to refer to a kind, object, property, relation, or process kind in another ontology or an IDEF model.
Relation An abstract, general association or connection that holds between two or more objects. Like properties, relations are multiply instantiable. The objects among which a relation holds in a particular instance are known as its arguments.
State A property, generally indicated by an adjective rather than a common noun, that is characteristic of objects of a certain kind at a certain point within a process. For example, water can be in frozen, liquid, or gaseous states.
Process A real world event or state of affairs involving one or more individuals over some (possibly instantaneous) interval of time. Typically, a process involves some sort of change in the properties of one or more of the individuals within the process. Because of the ambiguity in the term “process”, such an event is sometimes referred to as a process instance.
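The elements defined above can be illustrated with a small, purely hypothetical data-structure sketch in Python; the class and field names are illustrative and are not part of the IDEF5 specification or of any IDEF5 tool:

```python
from dataclasses import dataclass, field

@dataclass
class Kind:
    """A group of individuals sharing a set of distinguished characteristics."""
    name: str
    properties: set = field(default_factory=set)

@dataclass
class Individual:
    """A first-order object; an instance of one or more kinds."""
    name: str
    kinds: list
    state: str | None = None        # e.g. 'idle', 'running'

@dataclass
class Relation:
    """An association holding between two or more objects (its arguments)."""
    name: str
    arguments: tuple

# A toy fragment of a manufacturing ontology.
machine = Kind("machine")
lathe = Kind("lathe", properties={"spindle-speed"})
lathe_01 = Individual("lathe-01", kinds=[machine, lathe], state="idle")
part_of = Relation("part-of", arguments=("spindle", "lathe-01"))
```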
Diagram types
Various diagram types, or schematics, can be constructed in the IDEF5 Schematic Language. The purpose of these schematics, like that of any representation, is to represent information visually. Thus, semantic rules must be provided for interpreting every possible schematic. These rules are provided by outlining the rules for interpreting the most basic constructs of the language, then applying them recursively to more complex constructs. There are four primary schematic types derived from the basic IDEF5 Schematic Language which can be used to capture ontology information directly in a form that is intuitive to the domain expert.
Classification Schematics : Classification schematics provide mechanisms for humans to organize knowledge into logical taxonomies. Of particular merit are two types of classification: description subsumption and natural kind classification.
Composition Schematics : Composition schematics serve as mechanisms to represent graphically the "part-of" relation that is so common among components of an ontology.
Relation Schematics : Relation schematics allow ontology developers to visualize and understand relations among kinds in a domain, and can also be used to capture and display relations between first-order relations.
Object State Schematics : Because there is no clean division between information about kinds and states and information about processes, the IDEF5 schematic language enables modelers to express fairly detailed object-centered process information (i.e., information about kinds of objects and the various states they can be in relative to certain processes). Diagrams built from these constructs are known as Object-State Schematics.
See also
IDEF
IDEF6
Ontology
Ontology engineering
Ontology (computer science)
References
External links
Overview of IDEF5 at www.idef.com
IDEF5 Method Report from 1994.
Ontology (information science)
Data modeling
Systems analysis | IDEF5 | [
"Engineering"
] | 1,608 | [
"Data modeling",
"Data engineering"
] |
19,645,022 | https://en.wikipedia.org/wiki/Bulk%20electrolysis | Bulk electrolysis is also known as potentiostatic coulometry or controlled potential coulometry. The experiment is a form of coulometry which generally employs a three electrode system controlled by a potentiostat. In the experiment the working electrode is held at a constant potential (volts) and current (amps) is monitored over time (seconds). In a properly run experiment an analyte is quantitatively converted from its original oxidation state to a new oxidation state, either reduced or oxidized. As the substrate is consumed, the current also decreases, approaching zero when the conversion nears completion.
The results of a bulk electrolysis are visually displayed as the total coulombs passed (total electric charge) plotted against time in seconds, even though the experiment measures electric current (amps) over time. This is done to show that the experiment is approaching an expected total number of coulombs.
Fundamental relationships and applications
The sample mass, molecular mass, number of electrons in the electrode reaction, and number of electrons passed during the experiment are all related by Faraday's laws of electrolysis. It follows that, if three of the values are known, then the fourth can be calculated. The bulk electrolysis can also be useful for synthetic purposes if the product of the electrolysis can be isolated. This is most convenient when the product is neutral and can be isolated from the electrolyte solution through extraction or when the product plates out on the electrode or precipitates in another fashion. Even if the product can not be isolated, other analytical techniques can be performed on the solution including NMR, EPR, UV-Vis, FTIR, among other techniques depending on the specific situation. In specially designed cells the solution can be actively monitored during the experiment.
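As a small illustration of this relationship, the calculation below uses the Faraday constant to estimate the charge a complete bulk electrolysis should consume, and conversely the moles converted by a measured charge; the numerical values in the example are illustrative only:

```python
F = 96485.332  # Faraday constant, coulombs per mole of electrons

def charge_required(moles_analyte, n_electrons):
    """Total charge (coulombs) to fully convert the analyte."""
    return moles_analyte * n_electrons * F

def moles_converted(charge_coulombs, n_electrons):
    """Moles of analyte converted by a measured charge."""
    return charge_coulombs / (n_electrons * F)

# Example: 5 micromoles of a substrate undergoing a one-electron reduction.
q = charge_required(5e-6, 1)         # ~0.48 C
print(q, moles_converted(q, 1))      # recovers 5e-6 mol
```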
Cell design
In most three electrode experiments there are two isolated cells. One contains the auxiliary and working electrode, while the other contains the reference electrode. Strictly speaking, the reference electrode does not require a separate compartment. A quasi-reference electrode such as a silver/silver chloride wire electrode can be exposed directly to the analyte solution. In such situations there is concern that the analyte and trace redox products may interact with the reference electrode and either render it useless or increase drift. As a result, even these simple references are commonly sequestered in their own cells. The more complex references such as the standard hydrogen electrode, saturated calomel electrode, or silver chloride electrode (specific concentration) cannot be placed directly in the analyte solution for fear the electrode will fall apart or interact/react with the analyte.
A bulk electrolysis is best performed in a three part cell in which both the auxiliary electrode and reference electrode have their own cell which connects to the cell containing the working electrode. This isolates the undesired redox events taking place at the auxiliary electrode. During bulk electrolysis, the analyte undergoes a redox event at the working electrode. If the system were open, then it would be possible for the product of that reaction to diffuse back to the auxiliary electrode and undergo the inverse redox reaction. In addition to maintaining the proper current at the working electrode, the auxiliary electrode will experience extreme potentials, often oxidizing or reducing the solvent or electrolyte to balance the current. In voltammetry experiments, the currents (amps) are so small that it is not a problem to decompose a small amount of solvent or electrolyte. In contrast, a bulk electrolysis involves currents greater by several orders of magnitude. At the auxiliary electrode, this greater current would decompose a significant amount of the solution/electrolyte, probably boiling the solution in the process, all in an effort to balance the current. To mitigate this challenge the auxiliary cell will often contain a stoichiometric or greater amount of sacrificial reductant (ferrocene) or sacrificial oxidant (ferrocenium) to balance the overall redox reaction.
For ideal performance the auxiliary electrode should be similar in surface area to, as close as possible to, and evenly spaced relative to the working electrode. This is in an effort to prevent "hot spots". Hot spots are the result of current following the path of least resistance. This means much of the redox chemistry will occur at the points at either end of the shortest path between the working and auxiliary electrode. Heating associated with the resistance of the solution can occur in the area around these points, actually boiling the solution. The bubbling resulting from this isolated boiling of the solution can be confused with gas evolution.
Rates and kinetics
The rate of such reactions/experiments is not determined by the concentration of the solution, but rather the mass transfer of the substrate in the solution to the electrode surface. Rates will increase when the volume of the solution is decreased, the solution is stirred more rapidly, or the area of the working electrode is increased. Since mass transfer is so important the solution is stirred during a bulk electrolysis. However, this technique is generally not considered a hydrodynamic technique, since a laminar flow of solution against the electrode is neither the objective or outcome of the stirring.
Bulk electrolysis is occasionally cited in the literature as means to study electrochemical reaction rates. However, bulk electrolysis is generally a poor method to study electrochemical reaction rates since the rate of bulk electrolysis is generally governed by the specific cells ability to perform mass transfer. Rates slower than this mass transfer bottleneck are rarely of interest.
Efficiency and thermodynamics
Electrocatalytic analyses will often mention the current efficiency or faradaic efficiency of a given process determined by a bulk electrolysis experiment. For example, if one molecule of hydrogen results from every two electrons inserted into an acidic solution, then the faradaic efficiency would be 100%. This indicates that the electrons did not end up performing some other reaction. For example, the oxidation of water will often produce oxygen as well as hydrogen peroxide at the anode. Each of these products is related to its own faradaic efficiency which is tied to the experimental arrangement.
Nor is current efficiency the same as thermodynamic efficiency, since it never addresses how much energy (potential in volts) is carried by the electrons added or removed. The voltage efficiency determined by the reaction's overpotential is more directly related to the thermodynamics of the electrochemical reaction. In fact the extent to which a reaction goes to completion is related to how much greater the applied potential is than the reduction potential of interest. In the case where multiple reduction potentials are of interest, it is often difficult to set an electrolysis potential a "safe" distance (such as 200 mV) past a redox event. The result is incomplete conversion of the substrate, or else conversion of some of the substrate to the more reduced form. This factor must be considered when analyzing the current passed and when attempting to do further analysis/isolation/experiments with the substrate solution.
References
Electroanalytical methods | Bulk electrolysis | [
"Chemistry"
] | 1,422 | [
"Electroanalytical methods",
"Electroanalytical chemistry"
] |
19,645,922 | https://en.wikipedia.org/wiki/Palierne%20equation | The Palierne equation connects the dynamic modulus of emulsions with the dynamic modulus of the two phases, the size of the droplets and the interphase surface tension. The equation can also be used for suspensions of viscoelastic solid particles in viscoelastic fluids. The equation is named after French rheologist Jean-François Palierne, who proposed the equation in 1991.
For dilute emulsions, the Palierne equation expresses the dynamic modulus of the emulsion, $G^*(\omega)$, in terms of the dynamic modulus of the continuous phase (matrix), $G^*_m(\omega)$, the volume fraction of the disperse phase, $\phi$, and a complex interfacial function $H(\omega)$ that depends on the dynamic modulus of the disperse phase, $G^*_d(\omega)$, the surface tension between the phases, $\alpha$, and the radius of the droplets, $R$.
For a suspension of solid particles, $H(\omega)$ is given by a corresponding expression in the moduli of the two phases.
The Palierne equation is usually extended to finite volume concentrations of the disperse phase.
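Since the displayed equations are not reproduced here, the sketch below encodes the form of the Palierne model most commonly quoted in the rheology literature (the dilute result is the linearization of the extended one); the coefficients should be checked against the original reference before use, and the function names are illustrative:

```python
def palierne_H(G_m, G_d, alpha, R):
    """Commonly quoted interfacial function H for a droplet of radius R.

    G_m, G_d -- complex dynamic moduli of matrix and disperse phase at a
                given frequency; alpha -- interfacial tension; R -- radius.
    """
    s = alpha / R
    num = 4 * s * (2 * G_m + 5 * G_d) + (G_d - G_m) * (16 * G_m + 19 * G_d)
    den = 40 * s * (G_m + G_d) + (2 * G_d + 3 * G_m) * (16 * G_m + 19 * G_d)
    return num / den

def palierne_modulus(G_m, G_d, phi, alpha, R):
    """Emulsion modulus G* in the finite-concentration (extended) form."""
    H = palierne_H(G_m, G_d, alpha, R)
    return G_m * (1 + 3 * phi * H) / (1 - 2 * phi * H)

# Example: purely viscous matrix and droplets (G* = i*omega*eta) at 10 rad/s.
w = 10.0
G_star = palierne_modulus(G_m=1j * w * 1.0, G_d=1j * w * 0.1,
                          phi=0.1, alpha=5e-3, R=1e-6)
```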
References
Non-Newtonian fluids
Colloidal chemistry
Composite materials | Palierne equation | [
"Physics",
"Chemistry"
] | 201 | [
"Colloidal chemistry",
"Composite materials",
"Colloids",
"Surface science",
"Materials",
"Matter"
] |
19,648,225 | https://en.wikipedia.org/wiki/GREET%20Model | R&D GREET (Research and Development Greenhouse gases, Regulated Emissions, and Energy use in Technologies) is a full life cycle model developed by the Argonne National Laboratory (U.S. Department of Energy's Office of Energy Efficiency and Renewable Energy). It fully evaluates energy and emission impacts of advanced and new transportation fuels, the fuel cycle from well to wheel and the vehicle cycle through material recovery and vehicle disposal. It allows researchers and analysts to evaluate various vehicle and fuel combinations on a full fuel-cycle/vehicle-cycle basis.
The GREET model is specified in the Inflation Reduction Act of 2022 §45V as the methodology to calculate the life cycle greenhouse gas emissions "through the point of production (well-to-gate)" when determining the level of tax credit for clean Hydrogen production until a successor is approved by the Secretary of the Treasury. The final 45V regulations determined 45VH2-GREET to be a “successor model” and required its use for the purposes of the 45V tax credit.
The original implementation of the model was made using Excel spreadsheets while a graphical version has also been made using .NET.
Content
For a given vehicle and fuel system, R&D GREET separately calculates the following:
Consumption of total energy (energy in non-renewable and renewable sources), fossil fuels (petroleum, fossil natural gas, and coal together), petroleum, coal and natural gas;
Emissions of CO2-equivalent greenhouse gases - primarily carbon dioxide (CO2), methane (CH4), and nitrous oxide (N2O);
Emissions of six criteria pollutants: volatile organic compounds (VOCs), carbon monoxide (CO), nitrogen oxides (NOx), airborne particulate matter with sizes smaller than 10 micrometres (PM10), particulate matter with sizes smaller than 2.5 micrometres (PM2.5), and sulfur oxides (SOx).
Water consumption
R&D GREET includes more than 100 fuel production pathways and more than 70 vehicle/fuel systems.
External links
http://greet.anl.gov/
https://www.energy.gov/eere/greet
References
Vehicle emission controls
Emission standards
Transport and the environment
Standards of the United States | GREET Model | [
"Physics"
] | 451 | [
"Physical systems",
"Transport",
"Transport and the environment"
] |
2,589,751 | https://en.wikipedia.org/wiki/Fluorescence%20correlation%20spectroscopy | Fluorescence correlation spectroscopy (FCS) is a statistical analysis, via time correlation, of stationary fluctuations of the fluorescence intensity. Its theoretical underpinning originated from L. Onsager's regression hypothesis. The analysis provides kinetic parameters of the physical processes underlying the fluctuations. One of the interesting applications of this is an analysis of the concentration fluctuations of fluorescent particles (molecules) in solution. In this application, the fluorescence emitted from a very tiny space in solution containing a small number of fluorescent particles (molecules) is observed. The fluorescence intensity is fluctuating due to Brownian motion of the particles. In other words, the number of the particles in the sub-space defined by the optical system is randomly changing around the average number. The analysis gives the average number of fluorescent particles and average diffusion time, when the particle is passing through the space. Eventually, both the concentration and size of the particle (molecule) are determined. Both parameters are important in biochemical research, biophysics, and chemistry.
FCS is such a sensitive analytical tool because it observes a small number of molecules (nanomolar to picomolar concentrations) in a small volume (~1 μm3). In contrast to other methods (such as HPLC analysis) FCS has no physical separation process; instead, it achieves its spatial resolution through its optics. Furthermore, FCS enables observation of fluorescence-tagged molecules in the biochemical pathway in intact living cells. This opens a new area, "in situ or in vivo biochemistry": tracing the biochemical pathway in intact cells and organs.
Commonly, FCS is employed in the context of optical microscopy, in particular confocal microscopy or two-photon excitation microscopy. In these techniques light is focused on a sample and the measured fluorescence intensity fluctuations (due to diffusion, physical or chemical reactions, aggregation, etc.) are analyzed using the temporal autocorrelation. Because the measured property is essentially related to the magnitude and/or the amount of fluctuations, there is an optimum measurement regime at the level when individual species enter or exit the observation volume (or turn on and off in the volume). When too many entities are measured at the same time the overall fluctuations are small in comparison to the total signal and may not be resolvable – in the other direction, if the individual fluctuation-events are too sparse in time, one measurement may take prohibitively too long. FCS is in a way the fluorescent counterpart to dynamic light scattering, which uses coherent light scattering, instead of (incoherent) fluorescence.
When an appropriate model is known, FCS can be used to obtain quantitative information such as
diffusion coefficients
hydrodynamic radii
average concentrations
kinetic chemical reaction rates
singlet-triplet dynamics
Because fluorescent markers come in a variety of colors and can be specifically bound to a particular molecule (e.g. proteins, polymers, metal-complexes, etc.), it is possible to study the behavior of individual molecules (in rapid succession in composite solutions). With the development of sensitive detectors such as avalanche photodiodes the detection of the fluorescence signal coming from individual molecules in highly dilute samples has become practical. With this emerged the possibility to conduct FCS experiments in a wide variety of specimens, ranging from materials science to biology. The advent of engineered cells with genetically tagged proteins (like green fluorescent protein) has made FCS a common tool for studying molecular dynamics in living cells.
History
Signal-correlation techniques were first experimentally applied to fluorescence in 1972 by Magde, Elson, and Webb, who are therefore commonly credited as the inventors of FCS. The technique was further developed in a group of papers by these and other authors soon after, establishing the theoretical foundations and types of applications.
Around 1990, with the ability of detecting sufficiently small number of fluorescence particles, two issues emerged: A non-Gaussian distribution of the fluorescence intensity and the three-dimensional confocal Measurement Volume of a laser-microscopy system. The former led to an analysis of distributions and moments of the fluorescent signals for extracting molecular information, which eventually became a collection of methods known as Brightness Analyses. See Thompson (1991) for a review of that period.
Beginning in 1993, a number of improvements in the measurement techniques—notably using confocal microscopy, and then two-photon microscopy—to better define the measurement volume and reject background—greatly improved the signal-to-noise ratio and allowed single molecule sensitivity. Since then, there has been a renewed interest in FCS, and as of August 2007 there have been over 3,000 papers using FCS found in Web of Science. See Krichevsky and Bonnet for a review. In addition, there has been a flurry of activity extending FCS in various ways, for instance to laser scanning and spinning-disk confocal microscopy (from a stationary, single point measurement), in using cross-correlation (FCCS) between two fluorescent channels instead of autocorrelation, and in using Förster Resonance Energy Transfer (FRET) instead of fluorescence.
Typical setup
The typical FCS setup consists of a laser line (wavelengths ranging typically from 405–633 nm (cw), and from 690–1100 nm (pulsed)), which is reflected into a microscope objective by a dichroic mirror. The laser beam is focused in the sample, which contains fluorescent particles (molecules) in such high dilution, that only a few are within the focal spot (usually 1–100 molecules in one fL). When the particles cross the focal volume, they fluoresce. This light is collected by the same objective and, because it is red-shifted with respect to the excitation light it passes the dichroic mirror reaching a detector, typically a photomultiplier tube, an avalanche photodiode detector or a superconducting nanowire single-photon detector. The resulting electronic signal can be stored either directly as an intensity versus time trace to be analyzed at a later point, or computed to generate the autocorrelation directly (which requires special acquisition cards). The FCS curve by itself only represents a time-spectrum. Conclusions on physical phenomena have to be extracted from there with appropriate models. The parameters of interest are found after fitting the autocorrelation curve to modeled functional forms.
Measurement volume
The measurement volume is a convolution of illumination (excitation) and detection geometries, which result from the optical elements involved. The resulting volume is described mathematically by the point spread function (or PSF), it is essentially the image of a point source. The PSF is often described as an ellipsoid (with unsharp boundaries) of few hundred nanometers in focus diameter, and almost one micrometer along the optical axis. The shape varies significantly (and has a large impact on the resulting FCS curves) depending on the quality of the optical elements (it is crucial to avoid astigmatism and to check the real shape of the PSF on the instrument). In the case of confocal microscopy, and for small pinholes (around one Airy unit), the PSF is well approximated by Gaussians:
$\mathrm{PSF}(r,z) = I_0\, e^{-2r^2/\omega_{xy}^2}\, e^{-2z^2/\omega_z^2},$ where $I_0$ is the peak intensity, r and z are radial and axial position, and $\omega_{xy}$ and $\omega_z$ are the radial and axial radii. This Gaussian form is assumed in deriving the functional form of the autocorrelation.
Typically $\omega_{xy}$ is 200–300 nm, and $\omega_z$ is 2–6 times larger. One common way of calibrating the measurement volume parameters is to perform FCS on a species with known diffusion coefficient and concentration (see below). Diffusion coefficients for common fluorophores in water are given in a later section.
The Gaussian approximation works to varying degrees depending on the optical details, and corrections can sometimes be applied to offset the errors in approximation.
Autocorrelation function
The (temporal) autocorrelation function is the correlation of a time series with itself shifted by time $\tau$, as a function of $\tau$:
$G(\tau) = \frac{\langle \delta F(t)\,\delta F(t+\tau)\rangle}{\langle F(t)\rangle^2},$
where $\delta F(t) = F(t) - \langle F(t)\rangle$ is the deviation from the mean intensity. The normalization (denominator) here is the most commonly used for FCS, because then the correlation at $\tau = 0$, G(0), is related to the average number of particles in the measurement volume.
As an example, raw FCS data and its autocorrelation for freely diffusing Rhodamine 6G are shown in the figure to the right. The plot on top shows the fluorescent intensity versus time. The intensity fluctuates as Rhodamine 6G moves in and out of the focal volume. In the bottom plot is the autocorrelation on the same data. Information about the diffusion rate and concentration can be obtained using one of the models described below.
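The normalized autocorrelation defined above can be estimated directly from a recorded intensity trace. A minimal NumPy sketch is shown below; it uses a simple FFT-based estimator rather than the multi-tau scheme used by hardware correlators, and the function name is illustrative:

```python
import numpy as np

def fcs_autocorrelation(intensity):
    """Estimate G(tau) = <dF(t) dF(t+tau)> / <F>^2 from an intensity trace."""
    F = np.asarray(intensity, dtype=float)
    dF = F - F.mean()
    n = len(F)
    # Zero-padded FFT correlation of the fluctuations with themselves.
    spectrum = np.fft.rfft(dF, 2 * n)
    corr = np.fft.irfft(spectrum * np.conj(spectrum))[:n]
    counts = np.arange(n, 0, -1)           # overlapping samples at each lag
    return corr / counts / F.mean() ** 2   # G(tau), with tau in sample units

# Example with synthetic, uncorrelated shot-noise-like data.
trace = np.random.poisson(5.0, size=100_000)
G = fcs_autocorrelation(trace)
```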
For a Gaussian illumination profile , the autocorrelation function is given by the general master formula
where the vector denotes the stochastic displacement in space of a fluorophore after time .
The expression is valid if the average number of fluorophores in the focal volume is low and if dark states, etc., of the fluorophore can be ignored. In particular, no assumption was made on the type of diffusive motion under investigation. The formula allows for an interpretation of as (i) a return probability for small beam parameters and (ii) the moment-generating function of if are varied.
Interpreting the autocorrelation function
To extract quantities of interest, the autocorrelation data can be fitted, typically using a nonlinear least squares algorithm. The fit's functional form depends on the type of dynamics (and the optical geometry in question).
Normal diffusion
The fluorescent particles used in FCS are small and thus experience thermal motions in solution. The simplest FCS experiment is thus normal 3D diffusion, for which the autocorrelation is:
$G(\tau) = G(0)\left(1 + \frac{\tau}{\tau_D}\right)^{-1} \left(1 + a^{-2}\frac{\tau}{\tau_D}\right)^{-1/2},$
where $a = \omega_z/\omega_{xy}$ is the ratio of axial to radial radii of the measurement volume, and $\tau_D$ is the characteristic residence time. This form was derived assuming a Gaussian measurement volume. Typically, the fit would have three free parameters—G(0), $a$, and $\tau_D$—from which the diffusion coefficient and fluorophore concentration can be obtained.
With the normalization used in the previous section, G(0) gives the mean number of diffusers in the volume <N>, or equivalently—with knowledge of the observation volume size—the mean concentration:
$G(0) = \frac{1}{\langle N\rangle} = \frac{1}{V_{\mathrm{eff}}\,\langle C\rangle},$
where the effective volume is found from integrating the Gaussian form of the measurement volume and is given by:
$V_{\mathrm{eff}} = \pi^{3/2}\,\omega_{xy}^2\,\omega_z.$
$\tau_D$ gives the diffusion coefficient:
$D = \frac{\omega_{xy}^2}{4\tau_D}.$
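Given a measured curve, the normal-diffusion model above can be fitted with standard least-squares tools to recover $\tau_D$ and G(0), and from them D and the concentration. A sketch using SciPy follows; the beam-waist values are placeholders that must come from calibration:

```python
import numpy as np
from scipy.optimize import curve_fit

def g_diffusion(tau, G0, tau_D, a):
    """3D normal-diffusion FCS model."""
    return G0 / ((1 + tau / tau_D) * np.sqrt(1 + tau / (a**2 * tau_D)))

def fit_fcs(tau, G, w_xy=0.25e-6, w_z=1.25e-6):
    """Fit G(tau); w_xy, w_z are calibrated beam radii in meters (placeholders)."""
    (G0, tau_D, a), _ = curve_fit(g_diffusion, tau, G,
                                  p0=(G[0], 1e-4, w_z / w_xy))
    N_mean = 1.0 / G0                        # mean number of particles in V_eff
    V_eff = np.pi**1.5 * w_xy**2 * w_z       # effective volume, m^3
    conc = N_mean / (V_eff * 6.022e23)       # concentration, mol per m^3
    D = w_xy**2 / (4 * tau_D)                # diffusion coefficient, m^2/s
    return D, conc, (G0, tau_D, a)
```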
Anomalous diffusion
If the diffusing particles are hindered by obstacles or pushed by a force (molecular motors, flow, etc.) the dynamics is often not sufficiently well-described by the normal diffusion model, where the mean squared displacement (MSD) grows linearly with time. Instead the diffusion may be better described as anomalous diffusion, where the temporal dependence of the MSD is non-linear as in the power-law:
$\mathrm{MSD} = 6\,D_a\, t^{\alpha},$ where $D_a$ is an anomalous diffusion coefficient. "Anomalous diffusion" commonly refers only to this very generic model, and not the many other possibilities that might be described as anomalous. Also, a power law is, in a strict sense, the expected form only for a narrow range of rigorously defined systems, for instance when the distribution of obstacles is fractal. Nonetheless a power law can be a useful approximation for a wider range of systems.
The FCS autocorrelation function for anomalous diffusion is:
$G(\tau) = G(0)\left(1 + \left(\frac{\tau}{\tau_D}\right)^{\alpha}\right)^{-1} \left(1 + a^{-2}\left(\frac{\tau}{\tau_D}\right)^{\alpha}\right)^{-1/2},$
where the anomalous exponent $\alpha$ is the same as above, and becomes a free parameter in the fitting.
Using FCS, the anomalous exponent has been shown to be an indication of the degree of molecular crowding (it is less than one and smaller for greater degrees of crowding).
Polydisperse diffusion
If there are diffusing particles with different sizes (diffusion coefficients), it is common to fit to a function that is the sum of single component forms:
where the sum is over the different sizes of particle, indexed by i, with weights that are related to the quantum yield and concentration of each type. This introduces new parameters, which makes the fitting more difficult as a higher-dimensional space must be searched. Nonlinear least squares fitting typically becomes unstable with even a small number of components. A more robust fitting scheme, especially useful for polydisperse samples, is the Maximum Entropy Method.
Diffusion with flow
With diffusion together with a uniform flow with velocity $v$ in the lateral direction, the autocorrelation is:
$G(\tau) = G_{\mathrm{diff}}(\tau)\,\exp\!\left[-\left(\frac{\tau}{\tau_v}\right)^{2} \Big/ \left(1 + \frac{\tau}{\tau_D}\right)\right],$
where $\tau_v = \omega_{xy}/v$ is the average residence time if there is only a flow (no diffusion).
Chemical relaxation
A wide range of possible FCS experiments involve chemical reactions that continually fluctuate from equilibrium because of thermal motions (and then "relax"). In contrast to diffusion, which is also a relaxation process, the fluctuations cause changes between states of different energies. One very simple system showing chemical relaxation would be a stationary binding site in the measurement volume, where particles only produce signal when bound (e.g. by FRET, or if the diffusion time is much faster than the sampling interval). In this case the autocorrelation is:
where
is the relaxation time and depends on the reaction kinetics (on and off rates), and:
is related to the equilibrium constant K.
Most systems with chemical relaxation also show measurable diffusion as well, and the autocorrelation function will depend on the details of the system. If the diffusion and chemical reaction are decoupled, the combined autocorrelation is the product of the chemical and diffusive autocorrelations.
Triplet state correction
The autocorrelations above assume that the fluctuations are not due to changes in the fluorescent properties of the particles. However, for the majority of (bio)organic fluorophores—e.g. green fluorescent protein, rhodamine, Cy3 and Alexa Fluor dyes—some fraction of illuminated particles are excited to a triplet state (or other non-radiative decaying states) and then do not emit photons for a characteristic relaxation time . Typically is on the order of microseconds, which is usually smaller than the dynamics of interest (e.g. ) but large enough to be measured. A multiplicative term is added to the autocorrelation to account for the triplet state. For normal diffusion:
$G(\tau) = G_{\mathrm{diff}}(\tau)\left(1 + \frac{F}{1-F}\, e^{-\tau/\tau_F}\right),$
where $F$ is the fraction of particles that have entered the triplet state and $\tau_F$ is the corresponding triplet state relaxation time. If the dynamics of interest are much slower than the triplet state relaxation, the short time component of the autocorrelation can simply be truncated and the triplet term is unnecessary.
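When the triplet term is needed, it simply multiplies the diffusion model. A short sketch, building on the diffusion model in the earlier fitting example (parameter names are illustrative):

```python
import numpy as np

def g_triplet_diffusion(tau, G0, tau_D, a, F_trip, tau_F):
    """Normal-diffusion FCS model with a multiplicative triplet-state term."""
    g_diff = G0 / ((1 + tau / tau_D) * np.sqrt(1 + tau / (a**2 * tau_D)))
    return g_diff * (1 + F_trip / (1 - F_trip) * np.exp(-tau / tau_F))
```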
Common fluorescent probes
The fluorescent species used in FCS is typically a biomolecule of interest that has been tagged with a fluorophore (using immunohistochemistry for instance), or is a naked fluorophore that is used to probe some environment of interest (e.g. the cytoskeleton of a cell). The following table gives diffusion coefficients of some common fluorophores in water at room temperature, and their excitation wavelengths.
Variations
FCS almost always refers to the single point, single channel, temporal autocorrelation measurement, although the term "fluorescence correlation spectroscopy" out of its historical scientific context implies no such restriction. FCS has been extended in a number of variations by different researchers, with each extension generating another name (usually an acronym).
Spot variation fluorescence correlation spectroscopy (svFCS)
Whereas FCS is a point measurement providing diffusion time at a given observation volume, svFCS is a technique where the observation spot is varied in order to measure diffusion times at different spot sizes. The relationship between the diffusion time and the spot area is linear and can be plotted in order to decipher the major contribution of confinement. The resulting curve is called the diffusion law.
This technique is used in biology to study the plasma membrane organization of living cells.
The y-axis intercept of the diffusion law, $\tau_0$, characterizes the mode of diffusion: in case of free Brownian diffusion, $\tau_0 \approx 0$; in case of confinement by a meshwork of barriers, $\tau_0 < 0$; whereas in case of trapping in isolated domains, $\tau_0 > 0$.
svFCS studies on living cells and simulation papers
Sampling-Volume-Controlled Fluorescence Correlation Spectroscopy (SVC-FCS):
z-scan FCS
FCS with Nano-apertures: breaking the diffraction barrier
STED-FCS:
Fluorescence cross-correlation spectroscopy (FCCS)
FCS is sometimes used to study molecular interactions using differences in diffusion times (e.g. the product of an association reaction will be larger and thus have larger diffusion times than the reactants individually); however, FCS is relatively insensitive to molecular mass as can be seen from the following equation relating molecular mass to the diffusion time of globular particles (e.g. proteins):
$\tau_D \propto \eta\, M^{1/3}$ (via the Stokes–Einstein relation, since the hydrodynamic radius of a globular particle scales as the cube root of its mass), where $\eta$ is the viscosity of the sample and $M$ is the molecular mass of the fluorescent species. In practice, the diffusion times need to be sufficiently different—a factor of at least 1.6—which means the molecular masses must differ by a factor of 4. Dual color fluorescence cross-correlation spectroscopy (FCCS) measures interactions by cross-correlating two or more fluorescent channels (one channel for each reactant), which distinguishes interactions more sensitively than FCS, particularly when the mass change in the reaction is small.
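The weak mass dependence can be made concrete with a one-line check: if the diffusion time scales roughly with the cube root of the molecular mass, then resolving a factor of 1.6 in diffusion time requires roughly a four-fold change in mass:

```python
# tau_D ~ M**(1/3) for globular particles, so the mass ratio needed to change
# the diffusion time by a factor of 1.6 is about 1.6**3 ~ 4.1.
mass_ratio = 1.6 ** 3
print(mass_ratio)  # 4.096
```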
Brightness analysis methods
This set of methods include number and brightness (N&B), photon counting histogram (PCH), fluorescence intensity distribution analysis (FIDA), and Cumulant Analysis. and Spatial Intensity Distribution Analysis. Combination of multiple methods is also reported.
Fluorescence cross correlation spectroscopy overcomes the weak dependence of diffusion rate on molecular mass by looking at multicolor coincidence. What about homo-interactions? The solution lies in brightness analysis. These methods use the heterogeneity in the intensity distribution of fluorescence to measure the molecular brightness of different species in a sample. Since dimers will contain twice the number of fluorescent labels as monomers, their molecular brightness will be approximately double that of monomers. As a result, the relative brightness is a sensitive measure of oligomerization. The average molecular brightness ($\epsilon$) is related to the variance ($\sigma^2$) and the average intensity ($\langle I\rangle$) as follows:
$\epsilon = \frac{\sigma^2 - \langle I\rangle}{\langle I\rangle} = \sum_i f_i\,\epsilon_i.$
Here $f_i$ and $\epsilon_i$ are the fractional intensity and molecular brightness, respectively, of species $i$.
The brightness analysis method might be employed to study the interactions of biomolecules upon binding a non-fluorescent reactant to a fluorescent one. The complex formation causes a change in brightness intensity due to steric shielding, charge transfer, photoisomerization rate, or a combination of these phenomena enabling distinguishing the reactant from the product.
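A minimal numerical version of this brightness analysis, estimating the apparent molecular brightness and particle number from the mean and variance of an intensity trace, might look like the sketch below; these simple moment estimators ignore detector corrections such as dead time and afterpulsing, and the function name is illustrative:

```python
import numpy as np

def number_and_brightness(intensity):
    """Apparent brightness and particle number from an intensity trace (N&B)."""
    I = np.asarray(intensity, dtype=float)
    mean, var = I.mean(), I.var()
    brightness = (var - mean) / mean      # counts per particle per sampling time
    number = mean**2 / (var - mean)       # apparent number of particles
    return brightness, number
```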
FRET-FCS
Another FCS based approach to studying molecular interactions uses fluorescence resonance energy transfer (FRET) instead of fluorescence, and is called FRET-FCS. With FRET, there are two types of probes, as with FCCS; however, there is only one channel and light is only detected when the two probes are very close—close enough to ensure an interaction. The FRET signal is weaker than with fluorescence, but has the advantage that there is only signal during a reaction (aside from autofluorescence).
Scanning FCS
In Scanning fluorescence correlation spectroscopy (sFCS) the measurement volume is moved across the sample in a defined way. The introduction of scanning is motivated by its ability to alleviate or remove several distinct problems often encountered in standard FCS, and thus, to extend the range of applicability of fluorescence correlation methods in biological systems.
Some variations of FCS are only applicable to serial scanning laser microscopes. Image Correlation Spectroscopy and its variations all were implemented on a scanning confocal or scanning two photon microscope, but transfer to other microscopes, like a spinning disk confocal microscope. Raster ICS (RICS), and position sensitive FCS (PSFCS) incorporate the time delay between parts of the image scan into the analysis. Also, low-dimensional scans (e.g. a circular ring)—only possible on a scanning system—can access time scales between single point and full image measurements. Scanning path has also been made to adaptively follow particles.
Spinning disk FCS and spatial mapping
Any of the image correlation spectroscopy methods can also be performed on a spinning disk confocal microscope, which in practice can obtain faster imaging speeds compared to a laser scanning confocal microscope. This approach has recently been applied to diffusion in a spatially varying complex environment, producing a pixel resolution map of a diffusion coefficient. The spatial mapping of diffusion with FCS has subsequently been extended to the TIRF system. Spatial mapping of dynamics using correlation techniques had been applied before, but only at sparse points or at coarse resolution.
Image correlation spectroscopy (ICS)
When the motion is slow (in biology, for example, diffusion in a membrane), getting adequate statistics from a single-point FCS experiment may take a prohibitively long time. More data can be obtained by performing the experiment in multiple spatial points in parallel, using a laser scanning confocal microscope. This approach has been called Image Correlation Spectroscopy (ICS). The measurements can then be averaged together.
Another variation of ICS performs a spatial autocorrelation on images, which gives information about the concentration of particles. The correlation is then averaged in time. While camera white noise does not autocorrelate over time, it does over space - this creates a white noise amplitude in the spatial autocorrelation function which must be accounted for when fitting the autocorrelation amplitude in order to find the concentration of fluorescent molecules.
A natural extension of the temporal and spatial correlation versions is spatio-temporal ICS (STICS). In STICS there is no explicit averaging in space or time (only the averaging inherent in correlation). In systems with non-isotropic motion (e.g. directed flow, asymmetric diffusion), STICS can extract the directional information. A variation that is closely related to STICS (by the Fourier transform) is k-space Image Correlation Spectroscopy (kICS).
There are cross-correlation versions of ICS as well, which can yield the concentration, distribution and dynamics of co-localized fluorescent molecules. Molecules are considered co-localized when individual fluorescence contributions are indistinguishable due to overlapping point-spread functions of fluorescence intensities.
Particle image correlation spectroscopy (PICS)
PICS is a powerful analysis tool that resolves correlations on the nanometer length and millisecond timescale. Adapted from methods of spatio-temporal image correlation spectroscopy, it exploits the high positional accuracy of single-particle tracking. While conventional tracking methods break down if multiple particle trajectories intersect, this method works in principle for arbitrarily large molecule densities and dynamical parameters (e.g. diffusion coefficients, velocities) as long as individual molecules can be identified. It is computationally cheap and robust and allows one to identify and quantify motions (e.g. diffusion, active transport, confined diffusion) within an ensemble of particles, without any a priori knowledge about the dynamics.
A particle image cross-correlation spectroscopy (PICCS) extension is available for biological processes that involve multiple interaction partners, as can observed by two-color microscopy.
FCS Super-resolution Optical Fluctuation Imaging (fcsSOFI)
Super-resolution optical fluctuation imaging (SOFI) is a super-resolution technique that achieves spatial resolutions below the diffraction limit by post-processing analysis with correlation equations, similar to FCS. While original reports of SOFI used fluctuations from stationary, blinking of fluorophores, FCS has been combined with SOFI where fluctuations are produced from diffusing probes to produce super-resolution spatial maps of diffusion coefficients. This has been applied to understand diffusion and spatial properties of porous and confined materials. This includes agarose and temperature-responsive PNIPAM hydrogels, liquid crystals, and phase-separated polymers and RNA/protein condensates.
Total internal reflection FCS
Total internal reflection fluorescence (TIRF) is a microscopy approach that is only sensitive to a thin layer near the surface of a coverslip, which greatly minimizes background fluorescence. FCS has been extended to that type of microscope, and is called TIR-FCS. Because the fluorescence intensity in TIRF falls off exponentially with distance from the coverslip (instead of as a Gaussian with a confocal), the autocorrelation function is different.
FCS imaging using Light sheet fluorescence microscopy
Light sheet fluorescence microscopy, or selective plane illumination microscopy (SPIM), uses illumination perpendicular to the direction of observation, provided by a thin sheet of (laser) light. Under certain conditions, this illumination principle can be combined with fluorescence correlation spectroscopy to allow spatially resolved imaging of the mobility and interactions of fluorescent particles such as GFP-labelled proteins inside living biological samples.
Other fluorescent dynamical approaches
There are two main non-correlation alternatives to FCS that are widely used to study the dynamics of fluorescent species.
Fluorescence recovery after photobleaching (FRAP)
In FRAP, a region is briefly exposed to intense light, irreversibly photobleaching the fluorophores there, and the fluorescence recovery due to diffusion of nearby (non-bleached) fluorophores is imaged. A primary advantage of FRAP over FCS is the ease of interpreting the qualitative experiments common in cell biology. Differences between cell lines, between regions of a cell, or before and after application of a drug can often be characterized by simple inspection of movies. FCS experiments require more processing and are more sensitive to potentially confounding influences such as rotational diffusion, vibrations, photobleaching, dependence on illumination and fluorescence color, and inadequate statistics. It is much easier to change the measurement volume in FRAP, which allows greater control; in practice, the volumes are typically larger than in FCS. While FRAP experiments are typically more qualitative, some researchers study FRAP quantitatively, including binding dynamics. A disadvantage of FRAP in cell biology is the free-radical perturbation of the cell caused by the photobleaching. It is also less versatile, as it cannot measure concentration, rotational diffusion, or co-localization. FRAP also requires a significantly higher concentration of fluorophores than FCS.
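For a quantitative flavour of such analyses, the sketch below fits a normalized recovery curve with a single-exponential model using SciPy. The model and all parameter names are simplifying assumptions for illustration only; published quantitative FRAP analyses use more detailed models that account for the bleach geometry and binding.

```python
import numpy as np
from scipy.optimize import curve_fit

def frap_recovery(t, mobile_fraction, tau, bleach_depth):
    """Single-exponential recovery of normalized intensity after the bleach."""
    return bleach_depth + mobile_fraction * (1.0 - bleach_depth) * (1.0 - np.exp(-t / tau))

# Synthetic, normalized recovery curve (pre-bleach intensity = 1).
rng = np.random.default_rng(1)
t = np.linspace(0.0, 60.0, 200)                      # seconds after the bleach
data = frap_recovery(t, 0.8, 8.0, 0.2) + rng.normal(scale=0.02, size=t.size)

popt, _ = curve_fit(frap_recovery, t, data, p0=[0.5, 5.0, 0.1])
print("mobile fraction ~ %.2f, recovery time ~ %.1f s" % (popt[0], popt[1]))
```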
Particle tracking
In particle tracking, the trajectories of a set of particles are measured, typically by applying particle tracking algorithms to movies. Particle tracking has the advantage that all the dynamical information is maintained in the measurement, unlike FCS where correlation averages the dynamics to a single smooth curve. The advantage is apparent in systems showing complex diffusion, where directly computing the mean squared displacement allows straightforward comparison to normal or power law diffusion. To apply particle tracking, the particles have to be distinguishable and thus at lower concentration than required of FCS. Also, particle tracking is more sensitive to noise, which can sometimes affect the results unpredictably.
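A minimal sketch of the mean-squared-displacement calculation mentioned above is shown below; the trajectory is a synthetic Brownian walk, and the quarter-length cap on the lag time is only a common rule of thumb, not a requirement.

```python
import numpy as np

def mean_squared_displacement(track, max_lag=None):
    """Time-averaged MSD of one trajectory sampled at equal time intervals.

    track: (T, d) array of positions; returns msd[lag-1] = <|r(t+lag) - r(t)|^2>.
    """
    track = np.asarray(track, dtype=float)
    max_lag = max_lag or len(track) // 4   # keep lags short so the averages stay good
    msd = np.empty(max_lag)
    for lag in range(1, max_lag + 1):
        displacements = track[lag:] - track[:-lag]
        msd[lag - 1] = np.mean(np.sum(displacements ** 2, axis=1))
    return msd

# Illustrative use: a synthetic 2D Brownian walk (unit steps per frame).
rng = np.random.default_rng(2)
track = np.cumsum(rng.normal(size=(10_000, 2)), axis=0)
msd = mean_squared_displacement(track)
lags = np.arange(1, len(msd) + 1)
# log-log slope ~ 1 for normal diffusion, != 1 for anomalous (power-law) diffusion
alpha, _ = np.polyfit(np.log(lags), np.log(msd), 1)
print(f"fitted MSD exponent: {alpha:.2f}")
```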
Auto-fluorescence correlation spectroscopy
Recent advances in ultraviolet nanophotonics have enabled single-molecule studies of label-free proteins, which are excited with deep-ultraviolet light so that their dynamic processes can be followed.
Two- and three-photon FCS excitation
Two-photon or three-photon excitation FCS offers several advantages, both in spatial resolution and in minimizing photodamage and photobleaching in organic and/or biological samples.
See also
Confocal microscopy
Diffusion coefficient
Dynamic light scattering
Fluorescence cross-correlation spectroscopy (FCCS)
Förster resonance energy transfer (FRET)
References
Further reading
Rigler R. and Widengren J. (1990). Ultrasensitive detection of single molecules by fluorescence correlation spectroscopy, BioScience (Ed. Klinge & Owman) p. 180
External links
FCS Classroom
Stowers Institute FCS Tutorial
Cell Migration Consortium FCS Tutorial
Fluorescence Correlation Spectroscopy (FCS) (Becker & Hickl GmbH, web page)
Physical chemistry
Spectroscopy
Fluorescence techniques
Microscopy | Fluorescence correlation spectroscopy | [
"Physics",
"Chemistry",
"Biology"
] | 5,802 | [
"Applied and interdisciplinary physics",
"Molecular physics",
"Spectrum (physical sciences)",
"Instrumental analysis",
"Spectroscopy",
"nan",
"Microscopy",
"Physical chemistry",
"Fluorescence techniques"
] |
2,590,921 | https://en.wikipedia.org/wiki/Animal%20engine | An animal engine is a machine powered by an animal. Horses, donkeys, oxen, dogs, and humans have all been used in this way. An unusual example of an animal engine was recorded at Portland, Victoria in 1866. A kangaroo had been tamed and trained to work a treadmill which drove various items of machinery.
See also
Experiment (horse powered boat)
Gin gang
Horse mill
Horse engine
Persian well
Treadwheel
Turnspit dog
Books
Animal Powered Machines, J. Kenneth Major. Shire Album 128 - Shire Publications 1985.
References
Grinding mills
Machinery | Animal engine | [
"Physics",
"Technology",
"Engineering"
] | 112 | [
"Physical systems",
"Machines",
"Machinery",
"Mechanical engineering"
] |
2,592,262 | https://en.wikipedia.org/wiki/Protein%20subcellular%20localization%20prediction | Protein subcellular localization prediction (or just protein localization prediction) involves the prediction of where a protein resides in a cell, its subcellular localization.
In general, prediction tools take as input information about a protein, such as a protein sequence of amino acids, and produce a predicted location within the cell as output, such as the nucleus, Endoplasmic reticulum, Golgi apparatus, extracellular space, or other organelles. The aim is to build tools that can accurately predict the outcome of protein targeting in cells.
Prediction of protein subcellular localization is an important component of bioinformatics based prediction of protein function and genome annotation, and it can aid the identification of drug targets.
Background
Experimentally determining the subcellular localization of a protein can be a laborious and time-consuming task. Immunolabeling, or tagging (such as with a green fluorescent protein) and viewing the localization with a fluorescence microscope, is often used. A high-throughput alternative is to use prediction.
Through the development of new approaches in computer science, coupled with an increased dataset of proteins of known localization, computational tools can now provide fast and accurate localization predictions for many organisms. This has resulted in subcellular localization prediction becoming one of the challenges being successfully aided by bioinformatics, and machine learning.
Many prediction methods now exceed the accuracy of some high-throughput laboratory methods for the identification of protein subcellular localization. Particularly, some predictors have been developed that can be used to deal with proteins that may simultaneously exist, or move between, two or more different subcellular locations. Experimental validation is typically required to confirm the predicted localizations.
Tools
In 1999 PSORT was the first published program to predict subcellular localization. Subsequent tools and websites have been released using techniques such as artificial neural networks, support vector machines and protein motifs. Predictors can be specialized for proteins in different organisms: some are specialized for eukaryotic proteins, some for human proteins, and some for plant proteins. Methods for the prediction of bacterial localization, and their accuracy, have been reviewed. In 2021, SCLpred-MEM, a membrane protein prediction tool powered by artificial neural networks, was published. SCLpred-EMS is another tool powered by artificial neural networks that classifies proteins into the endomembrane system and secretory pathway (EMS) versus all others. Similarly, Light-Attention uses machine learning methods to predict ten different common subcellular locations.
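As a schematic illustration of how a simple sequence-based predictor can be built (this is not the algorithm of any of the tools named above; the sequences, labels and model choice are purely illustrative), amino-acid composition features can be combined with a support vector machine using scikit-learn:

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def composition(sequence):
    """20-dimensional amino-acid composition (fraction of each residue)."""
    sequence = sequence.upper()
    counts = np.array([sequence.count(aa) for aa in AMINO_ACIDS], dtype=float)
    return counts / max(len(sequence), 1)

# Toy training set: real predictors are trained on thousands of proteins with
# experimentally determined localization; these sequences and labels are made up.
sequences = [
    "MKKLLPTAAAGLLLLAAQPAMA" * 3,            # hypothetical "secreted-like" sequence
    "MPRRKRSSPAKNKENGQ" * 4,                 # hypothetical "nuclear-like" sequence
    "MLSRAVCGTSRQLAPALGYLG" * 3,             # hypothetical "mitochondrial-like" sequence
    "MAEGEITTFTALTEKFNLPPGNYKKPKLLY" * 2,    # hypothetical "cytoplasmic-like" sequence
]
labels = ["secreted", "nuclear", "mitochondrial", "cytoplasmic"]

X = np.array([composition(s) for s in sequences])
model = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
model.fit(X, labels)
print(model.predict([composition("MKKLLPTAAAGLLLLAAQPAMAKDEL")]))
```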
The first model to generalize protein subcellular localization prediction to all cell lines does so by leveraging images of subcellular landmark stains (i.e., nuclear, plasma membrane, and endoplasmic reticulum markers) across multiple cell lines. Coupling multimodal data of landmark stains with a pre-trained protein language model, the Prediction of Unseen Proteins' Subcellular Localization (PUPS) model is capable of generative subcellular localization prediction for any protein in any cell line, given the protein's amino acid sequence and reference stains of the cell line.
The development of protein subcellular location prediction has been summarized in two comprehensive review articles. Recent tools and an experience report can be found in a recent paper by Meinken and Min (2012).
Application
Knowledge of the subcellular localization of a protein can significantly improve target identification during the drug discovery process. For example, secreted proteins and plasma membrane proteins are easily accessible by drug molecules due to their localization in the extracellular space or on the cell surface.
Bacterial cell surface and secreted proteins are also of interest for their potential as vaccine candidates or as diagnostic targets. Aberrant subcellular localization of proteins has been observed in the cells of several diseases, such as cancer and Alzheimer's disease. Secreted proteins from some archaea that can survive in unusual environments have industrially important applications.
By using prediction a high number of proteins can be assessed in order to find candidates that are trafficked to the desired location.
Databases
The results of subcellular localization prediction can be stored in databases. Examples include the multi-species database Compartments, FunSecKB2, a fungal database; PlantSecKB, a plant database; MetazSecKB, an animal and human database; and ProtSecKB, a protist database.
References
Further reading
Biochemistry detection methods
Protein methods
Cell biology
Computational science
Bioinformatics software
Protein targeting | Protein subcellular localization prediction | [
"Chemistry",
"Mathematics",
"Biology"
] | 898 | [
"Biochemistry methods",
"Cell biology",
"Bioinformatics software",
"Applied mathematics",
"Protein methods",
"Protein targeting",
"Protein biochemistry",
"Chemical tests",
"Computational science",
"Bioinformatics",
"Cellular processes",
"Biochemistry detection methods"
] |
2,592,537 | https://en.wikipedia.org/wiki/Beta%20scission | Beta scission is an important reaction in the chemistry of thermal cracking of hydrocarbons and the formation of free radicals. Free radicals are formed upon splitting the carbon-carbon bond. Free radicals are extremely reactive and short-lived. When a free radical in a polymer chain undergoes a beta scission, the free radical breaks two carbons away from the charged carbon producing an olefin (ethylene) and a primary free radical, which has two fewer carbon atoms.
In organic synthesis, beta scission can be used to direct multistep radical transformations. For example, beta-scission of a weak C-S bond was used to favor one of two equilibrating radicals in metal free conversion of phenols to aromatic esters and acids via C-O transposition.
References
Reaction mechanisms | Beta scission | [
"Chemistry"
] | 164 | [
"Reaction mechanisms",
"Chemical kinetics",
"Physical organic chemistry"
] |
2,594,101 | https://en.wikipedia.org/wiki/Langmuir%20%28unit%29 | The langmuir (symbol: L) is a unit of exposure (or dosage) to a surface (e.g. of a crystal) and is used in ultra-high vacuum (UHV) surface physics to study the adsorption of gases. It is a practical unit, and is not dimensionally homogeneous, and so is used only in this field. It is named after American physicist Irving Langmuir.
Definition
The langmuir is defined by multiplying the pressure of the gas by the time of exposure. One langmuir corresponds to an exposure of 10−6 Torr during one second. For example, exposing a surface to a gas pressure of 10−8 Torr for 100 seconds corresponds to 1 L.
Similarly, keeping the pressure of oxygen gas at 2.5·10−6 Torr for 40 seconds will give a dose of 100 L.
Conversion
Since different combinations of pressure and exposure time can give the same number of langmuirs (see Definition), it can be difficult to convert between langmuir (L) and exposure expressed as pressure × time (Torr·s), and vice versa. The following relation can be used to convert between the two:

$a \times 10^{-n}\,[\mathrm{Torr}] \;\times\; b \times 10^{\,n-6}\,[\mathrm{s}] \;=\; (a \cdot b)\,[\mathrm{L}]$

Here, a and b are any two numbers whose product equals the desired langmuir value, and n is an integer allowing different magnitudes of pressure or exposure time to be used in the conversion. The units are given in the [square brackets]. Using the prior example, for a dose of 100 L a pressure of 2.5 × 10−6 Torr can be applied for 40 seconds; thus a = 2.5, b = 40, and n = 6. However, the same dose could also be obtained with 8 × 10−8 Torr for 1250 seconds; here a = 8, b = 12.5, and n = 8. In both scenarios a · b = 100.
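A minimal helper for this conversion, assuming only the definition 1 L = 10−6 Torr·s, might look like the following (function names are illustrative); it reproduces the example doses above.

```python
def exposure_in_langmuir(pressure_torr, time_s):
    """Exposure in langmuir, using the definition 1 L = 1e-6 Torr*s."""
    return pressure_torr * time_s / 1e-6

def time_for_dose(dose_langmuir, pressure_torr):
    """Exposure time in seconds needed to reach a dose at constant pressure."""
    return dose_langmuir * 1e-6 / pressure_torr

print(exposure_in_langmuir(1e-8, 100))     # 1.0 L   (example above)
print(exposure_in_langmuir(2.5e-6, 40))    # 100.0 L
print(exposure_in_langmuir(8e-8, 1250))    # 100.0 L, same dose at lower pressure
print(time_for_dose(100, 2.5e-6))          # 40.0 s
```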
Derivation
Exposure of a surface in surface physics is a type of fluence, that is the integral of number flux (J_N) with respect to exposed time (t) to give a number of particles per unit area (Φ):

$\Phi = \int J_N \, \mathrm{d}t$

The number flux for an ideal gas, that is the number of gas molecules passing through (in a single direction) a surface of unit area in unit time, can be derived from kinetic theory:

$J_N = \tfrac{1}{4} C \langle v \rangle$

where C is the number density of the gas, and $\langle v \rangle$ is the mean speed of the molecules (not the root-mean-square speed, although the two are related). The number density of an ideal gas depends on the thermodynamic temperature (T) and the pressure (p):

$C = \frac{p}{k_\mathrm{B} T}$

where $k_\mathrm{B}$ is the Boltzmann constant. The mean speed of the gas molecules can also be derived from kinetic theory:

$\langle v \rangle = \sqrt{\frac{8 k_\mathrm{B} T}{\pi m}}$

where m is the mass of a gas molecule. Hence

$J_N = \frac{p}{\sqrt{2 \pi m k_\mathrm{B} T}}$
The proportionality between number flux and pressure is only strictly valid for a given temperature and a given molecular mass of adsorbing gas. However, the dependence is only on the square roots of m and T. Gas adsorption experiments typically operate around ambient temperature with light gases, and so the langmuir remains useful as a practical unit.
Usage
Assuming that every gas molecule hitting the surface sticks to it (that is, the sticking coefficient is 1), one langmuir (1 L) leads to a coverage of about one monolayer of adsorbed gas molecules on the surface. In general, the sticking coefficient varies depending on the reactivity of the surface and the molecules, so that the langmuir gives a lower limit of the time needed to completely cover a surface.
This also illustrates why ultra-high vacuum (UHV) must be used to study solid-state surfaces, nanostructures or even single molecules. The typical time to perform physical experiments on sample surfaces is in the range of one to several hours. In order to keep the surface free of contaminations, the pressure of the residual gas in a UHV chamber should be below 10−10 Torr.
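A short numerical check of these statements, assuming nitrogen at 300 K and a typical surface site density of about 10^15 cm^-2 (an assumed round value), evaluates the kinetic-theory flux from the Derivation section and the resulting monolayer formation time:

```python
import math

K_B = 1.380649e-23       # Boltzmann constant, J/K
AMU = 1.66053907e-27     # atomic mass unit, kg
TORR_TO_PA = 133.322     # Pa per Torr

def impingement_flux(pressure_torr, temperature_k, molar_mass_amu):
    """Kinetic-theory number flux J_N = p / sqrt(2*pi*m*k_B*T), in m^-2 s^-1."""
    p = pressure_torr * TORR_TO_PA
    m = molar_mass_amu * AMU
    return p / math.sqrt(2.0 * math.pi * m * K_B * temperature_k)

surface_sites = 1e19     # assumed typical site density, ~1e15 cm^-2

# Nitrogen at 1e-6 Torr and 300 K: roughly one monolayer per langmuir.
flux = impingement_flux(1e-6, 300.0, 28.0)
print(f"flux: {flux:.2e} molecules m^-2 s^-1")
print(f"monolayer time (sticking coefficient 1): {surface_sites / flux:.1f} s")

# At 1e-10 Torr the same estimate gives hours of clean-surface time.
flux_uhv = impingement_flux(1e-10, 300.0, 28.0)
print(f"monolayer time at 1e-10 Torr: {surface_sites / flux_uhv / 3600:.1f} h")
```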
References
.
Gases
Units of amount of substance | Langmuir (unit) | [
"Physics",
"Chemistry",
"Materials_science",
"Mathematics"
] | 776 | [
"Units of amount of substance",
"Matter",
"Units of measurement",
"Planes (geometry)",
"Thin film deposition",
"Quantity",
"Coatings",
"Phases of matter",
"Vacuum",
"Surface science",
"Thin films",
"Condensed matter physics",
"Statistical mechanics",
"Solid state engineering",
"Gases"
] |
2,594,468 | https://en.wikipedia.org/wiki/Frank%20Close | Francis Edwin Close (born 24 July 1945) is a particle physicist who is Emeritus Professor of Physics at the University of Oxford and a Fellow of Exeter College, Oxford.
Education
Close was a pupil at King's School, Peterborough (then a grammar school), where he was taught Latin by John Dexter, brother of author Colin Dexter. He took a BSc in physics at St Andrews University graduating in 1967, before researching for a DPhil in theoretical physics at Magdalen College, Oxford, under the supervision of Richard Dalitz, which he was awarded in 1970. He is an atheist.
Career
In addition to his scientific research, he is known for his lectures and writings making science intelligible to a wider audience and promoting physics outreach.
From Oxford he went to Stanford University in California for two years as a Postdoctoral Fellow on the Stanford Linear Accelerator Center. In 1973 he went to the Daresbury Laboratory in Cheshire and then to CERN in Switzerland from 1973 to 1975. He joined the Rutherford Appleton Laboratory in Oxfordshire in 1975 as a research physicist and was latterly head of Theoretical Physics Division from 1991. He headed the communication and public education activities at CERN from 1997 to 2000. From 2001, he was professor of theoretical physics at Oxford. He was a visiting professor at the University of Birmingham from 1996 to 2002.
Close lists his recreations as writing, singing, travel, squash and Real tennis, and he is a member of Harwell Squash Club.
Honours and awards
He became a Fellow of the Institute of Physics (FInstP) in 1991.
The Institute of Physics awarded him its 1996 Kelvin Medal and Prize, which is given "for outstanding contributions to the public understanding of physics".
From 1993 to 1999, he was vice-president of the British Association for the Advancement of Science.
He was appointed an OBE in 2000.
Since 2003, he has been Chairman of the British team (BPhO) in the International Physics Olympiad, based at the University of Leicester.
In 2013, he was awarded the Royal Society Michael Faraday Prize.
He became a Fellow of the Royal Society (FRS) in 2021.
Christmas lectures
His Royal Institution Christmas Lectures in 1993, entitled The Cosmic Onion, gave their name to one of his books. He was a member on the council of the Royal Institution from 1997 to 1999. From 2000 to 2003 he gave public lectures as professor of astronomy at Gresham College, London.
Publications
In his book, Lucifer's Legacy: The Meaning of Asymmetry, Close wrote: "Fundamental physical science involves observing how the universe functions and trying to find regularities that can be encoded into laws. To test if these are right, we do experiments. We hope that the experiments won't always work out, because it is when our ideas fail that we extend our experience. The art of research is to ask the right questions and discover where your understanding breaks down."
His 2010 book Neutrino discusses the tiny, difficult-to-detect particle emitted from radioactive transitions and generated by stars. Also discussed are the contributions of John Bahcall, Ray Davis, Bruno Pontecorvo, and others who made a scientific understanding of this fundamental building block of the universe.
In The Infinity Puzzle: Quantum Field Theory and the Hunt for an Orderly Universe (2013), Close focuses on the discovery of the mass mechanism, the so-called Higgs-mechanism.
In his 2019 book, Trinity: The Treachery and Pursuit of the Most Dangerous Spy in History, Close recounts the life and the espionage of Klaus Fuchs who passed atomic secrets to the Soviets during the race for development of the nuclear bomb. He concludes that "it was primarily Fuchs who enabled the Soviets to catch up with Americans".
Other books include Particle Physics: A Very Short Introduction, Antimatter, and Nothing.
See also
Gresham Professor of Astronomy
Works
(Published in the US as Apocalypse When?)
References
External links
Frank Close at st-andrews.ac.uk
Frank Close at Exeter College
Interview in The Guardian, 1 June 2004
Radio 4 Museum of Curiosity 5 March 2008
Frank Close's page, Conville and Walsh literary agents
Scientific publications of Frank Close on INSPIRE-HEP
Jodcast Interview with Professor Frank Close on the life, research and disappearance of Bruno Pontecorvo
Contributor to discussion on Eclipses for BBC Radio 4 programme In Our Time
Video clips
1945 births
Alumni of Magdalen College, Oxford
Alumni of the University of St Andrews
English atheists
British physicists
Theoretical physicists
Particle physicists
Quantum physicists
People associated with CERN
Fellows of Exeter College, Oxford
Living people
Officers of the Order of the British Empire
People educated at The King's School, Peterborough
People from Peterborough
Academics of Gresham College | Frank Close | [
"Physics"
] | 948 | [
"Theoretical physicists",
"Theoretical physics",
"Quantum physicists",
"Quantum mechanics",
"Particle physics",
"Particle physicists"
] |
20,821,833 | https://en.wikipedia.org/wiki/Aldgate%20Pump | Aldgate Pump is a historic former water pump located at the junction where Aldgate High Street meets Fenchurch Street and Leadenhall Street in the City of London. The pump is considered to be the symbolic start point of the East End of London.
The pump is also notable for its long and sometimes dark history, along with its significant cultural references.
Design
Aldgate Pump is a Grade II listed structure. The metal wolf head on the pump's spout is supposed to signify the last wolf shot in the City of London.
Historic photographs show that the pump was surmounted by an ornate wrought iron lantern. During the 20th century this was removed, but was recreated by the Bottega Prata workshop in Bologna, Italy, during its restoration by the Heritage of London Trust, unveiled in September 2019. The pump can no longer be used to draw water, but a drainage grating is still in place.
History
As a well, it was mentioned during the reign of King John in the early 13th century.
A structure is shown on Braun and Hogenberg's map of 1574, and shown as St Michael's Well on the Agas map of 1633. John Stow recalled the execution of the Bailiff of Romford on a gibbet 'near the well within Aldgate'. This execution seems to have been carried out on the dubious basis that he was involved in Kett's Rebellion of 1549.
Served by one of London's many underground streams, the water was praised for being "bright, sparkling, and cool, and of an agreeable taste". These qualities were later found to be derived from decaying organic matter from adjoining graveyards, and the leaching of calcium from the bones of the dead in many new cemeteries in north London through which the stream ran from Hampstead. On its relocation in 1876, the New River Company changed the supplies to mains water.
Fenchurch Street railway station was built in 1841 upon the site of Aldgate Pump Court.
As the City of London developed, it is thought to have been taken down and moved a short distance to the west, to its current location in 1876, as a result of road widening.
East End
The line of the former eastern walls and gates of the City are taken as the usual start point of the East End, but the pump lies just inside the site of the former Aldgate.
The pump is a suitable symbolic start point for several reasons:
The removal of the gate and associated walls in the late 18th century gave the pump added significance.
The social importance of pumps as meeting places
The pump marks the start of the originally Roman A11 road, later known as the Great Essex Road. Distances to locations in the Tower division of Middlesex, Essex and East Anglia were measured from here.
Cultural references
Phrases
East of Aldgate Pump is a term used to apply to the East End or East London as a whole, as in the old slur "East of Aldgate Pump, people cared for nothing but drink, vice and crime".
It is also used in two phrases which seem to hark back to the epidemic:
As Cockney Rhyming Slang; Aldgate Pump, or just Aldgate for short, rhymes with “get (or take) the hump”, i.e. to be annoyed.
A draft on Aldgate Pump refers to a harmful, worthless or fraudulent financial transaction, such as a bouncing cheque. The pun is on a draught (or draft) of water and a draft of money.
There's a pump up Aldgate, mate. Pump that! was an East End phrase directed at rent collectors believed to be pressing tenants unreasonably hard.
Music, TV and literature
Charles Dickens refers to the pump in The Uncommercial Traveller, published in 1860: "My day's no-business beckoning me to the East End of London. I had turned my face to that point of the metropolitan compass…and had got past Aldgate Pump."
Aldgate Pump was also the name of a song, written by G. W. Hunt for the lion comique Arthur Lloyd in 1869. In the song, the raconteur is abandoned by the girl "I met near Aldgate Pump".
References
1876 establishments in England
Infrastructure completed in 1876
Grade II listed buildings in the City of London
Tourist attractions in the City of London
East End of London
Pumps | Aldgate Pump | [
"Physics",
"Chemistry"
] | 904 | [
"Physical systems",
"Hydraulics",
"Turbomachinery",
"Pumps"
] |
20,825,692 | https://en.wikipedia.org/wiki/Energy%20flux | Energy flux is the rate of transfer of energy through a surface. The quantity is defined in two different ways, depending on the context:
Total rate of energy transfer (not per unit area); SI units: W = J⋅s−1.
Specific rate of energy transfer (total normalized per unit area); SI units: W⋅m−2 = J⋅m−2⋅s−1:
This is a vector quantity, its components being determined in terms of the normal (perpendicular) direction to the surface of measurement.
This is sometimes called energy flux density, to distinguish it from the first definition.
Radiative flux, heat flux, and sound energy flux density (also sound intensity) are specific cases of this meaning.
See also
Energy flow (ecology)
Flux
Irradiance
Poynting vector
Stress–energy tensor
Energy current
References
Physical quantities
Vector calculus | Energy flux | [
"Physics",
"Mathematics"
] | 175 | [
"Physical phenomena",
"Quantity",
"Physical quantities",
"Physical properties"
] |
20,826,071 | https://en.wikipedia.org/wiki/Volumetric%20flux | In fluid dynamics, the volumetric flux is the rate of volume flow across a unit area (m3·s−1·m−2), and has dimensions of distance/time (volume/(time*area)) - equivalent to mean velocity. The density of a particular property in a fluid's volume, multiplied with the volumetric flux of the fluid, thus defines the advective flux of that property. The volumetric flux through a porous medium is called superficial velocity and it is often modelled using Darcy's law.
Volumetric flux is not to be confused with volumetric flow rate, which is the volume of fluid that passes through a given surface per unit of time (as opposed to a unit surface).
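As a minimal illustration of Darcy's law for the superficial velocity (the permeability and pressure gradient below are assumed order-of-magnitude values, not data for any particular medium):

```python
def superficial_velocity(permeability_m2, viscosity_pa_s, pressure_gradient_pa_per_m):
    """Darcy's law: volumetric flux (superficial velocity) q = -(k / mu) * dP/dx."""
    return -(permeability_m2 / viscosity_pa_s) * pressure_gradient_pa_per_m

# Water (mu ~ 1e-3 Pa*s) in a sand-like medium (k ~ 1e-11 m^2, an assumed value),
# with pressure dropping by 1e4 Pa per metre along the flow direction:
q = superficial_velocity(1e-11, 1e-3, -1e4)
print(f"superficial velocity: {q:.1e} m/s")   # ~1e-4 m/s
```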
References
Physical quantities
Vector calculus
Fluid dynamics | Volumetric flux | [
"Physics",
"Chemistry",
"Mathematics",
"Engineering"
] | 157 | [
"Physical phenomena",
"Physical quantities",
"Chemical engineering",
"Quantity",
"Piping",
"Physical properties",
"Fluid dynamics stubs",
"Fluid dynamics"
] |
20,826,274 | https://en.wikipedia.org/wiki/Radiative%20flux | Radiative flux, also known as radiative flux density or radiation flux (or sometimes power flux density), is the amount of power radiated through a given area, in the form of photons or other elementary particles, typically measured in W/m2. It is used in astronomy to determine the magnitude and spectral class of a star and in meteorology to determine the intensity of the convection in the planetary boundary layer. Radiative flux also acts as a generalization of heat flux, which is equal to the radiative flux when restricted to the infrared spectrum.
When radiative flux is incident on a surface, it is often called irradiance. Flux emitted from a surface may be called radiant exitance or radiant emittance. The ratio of irradiance reflected to the irradiance received by a surface is called albedo.
Geophysics
Shortwave
In geophysics, shortwave flux is a result of specular and diffuse reflection of incident shortwave radiation by the underlying surface. This shortwave radiation, as solar radiation, can have a profound impact on certain biophysical processes of vegetation, such as canopy photosynthesis and land surface energy budgets, by being absorbed into the soil and canopies. As it is the main energy source of most weather phenomena, the solar shortwave radiation is used extensively in numerical weather prediction.
Longwave
Longwave flux is a product of both downwelling infrared energy as well as emission by the underlying surface. The cooling associated with the divergence of longwave radiation is necessary for creating and sustaining lasting inversion layers close to the surface during polar night. Longwave radiation flux divergence also plays a role in the formation of fog.
SI radiometry units
See also
Spectral flux density
Flux
References
Physical quantities
Vector calculus | Radiative flux | [
"Physics",
"Mathematics"
] | 357 | [
"Physical phenomena",
"Quantity",
"Physical properties",
"Physical quantities"
] |
6,224,278 | https://en.wikipedia.org/wiki/National%20Institute%20of%20Aeronautics%20and%20Space | The National Institute of Aeronautics and Space (, LAPAN) was the Indonesian government's space agency. It was established on 27 November 1963, by former Indonesian president Sukarno, after one year's existence of a previous, informal space agency organization. LAPAN is responsible for long-term civilian and military aerospace research.
For over two decades, LAPAN managed satellites, including the domestically developed small scientific-technology satellite LAPAN-TUBsat and the Palapa series of telecommunication satellites, which were built by Hughes (now Boeing Satellite Systems) and launched from the US on Delta rockets, or from French Guiana using Ariane 4 and Ariane 5 rockets. LAPAN has also developed sounding rockets and has been trying to develop small orbital space launchers. The LAPAN A1, in 2007, and LAPAN A2, in 2015, satellites were launched by India.
With the enactment of Presidential Decree No. 33/2021 on 5 May 2021, LAPAN was due to be disbanded along with other government research agencies such as the Agency of Assessment and Application of Technology (Indonesian: Badan Pengkajian dan Penerapan Teknologi, BPPT), the National Nuclear Energy Agency (Indonesian: Badan Tenaga Nuklir Nasional, BATAN), and the Indonesian Institute of Sciences (Indonesian: Lembaga Ilmu Pengetahuan Indonesia, LIPI). All of those agencies were merged into the newly formed National Research and Innovation Agency (Indonesian: Badan Riset dan Inovasi Nasional, BRIN). As of September 2021, the disbandment process was still in progress and expected to be finished on 1 January 2022.
On 1 September 2021, LAPAN was finally dissolved as an independent agency and transformed into the space and aeronautics research organization of BRIN, signaling the beginning of the institutional integration of the former LAPAN into BRIN.
History
On 31 May 1962, Indonesia commenced aeronautics exploration when the Aeronautics Committee was established by the Indonesian prime minister, Djuanda, who was also the head of Indonesian Aeronautics. The secretary of Indonesian Aeronautics, RJ Salatun, was also involved in the establishment.
On 22 September 1962, the Initial Scientific and Military Rocket Project (known in Indonesia as Proyek Roket Ilmiah dan Militer Awal or PRIMA) was formed by an affiliation of AURI (Indonesian Air Force) with ITB (Bandung Institute of Technology). The outcome of the project was the launching of two "Kartika I" ("star")–series rockets and their telemetric ordnances.
After two informal projects, the National Institute of Aeronautics and Space (LAPAN) was established in 1963 by Presidential Decree 236.
Programs
For more than 20 years, LAPAN has done research on rocketry, remote sensing, satellites, and space sciences.
Satellites
Palapa A1 and A2
The first program was the launching of the Palapa A1 (launched 7 August 1976) and A2 (launched 3 October 1977) satellites. These satellites were almost identical to Canada's Anik and Western Union's Westars. The Indonesian satellites belonged to the government-owned company Perumtel, but they were made in the United States.
LAPAN satellites
The development of microsatellites has become an opportunity for LAPAN. The development of such satellites requires only a limited budget and facilities, compared to the development of large satellites. Meanwhile, the capability to develop micro-satellites will prepare LAPAN to implement a future space program with measurable economic impact, and therefore contribute to the country's sustainable development effort.
LAPAN-A1
The LAPAN-A1, or LAPAN-TUBsat, is designed to develop knowledge, skill, and experience with micro-satellite technology development, in cooperation with Technische Universität Berlin, Germany, where the satellite was manufactured. The Indonesian spacecraft is based on the German DLR-Tubsat, but includes a new star sensor and features a new 45 × 45 × 27 cm structure. The satellite payload is a commercial off-the-shelf video camera with a 1000 mm lens, resulting in a nadir resolution of 5 m and nadir swath of 3.5 km from an altitude of 650 km. In addition, the satellite carries a video camera with a 50 mm lens, resulting in a 200 m resolution video image with swath of 80 km at the nadir. The uplink and downlink for telemetry, tracking, and command (TTC) is done in the UHF band and downlink for video is done in S-band analog. On 10 January 2007, the satellite was successfully launched from Sriharikota, India, as an auxiliary payload with India's Cartosat-2, in the ISRO's Polar Satellite Launch Vehicle (PSLV) C7, to a Sun-synchronous orbit of 635 km, with an inclination of 97.60° and a period of 99.039 minutes. The longitude shift per orbit is about 24.828° with a ground track velocity of 6.744 km/s with an angular velocity of 3.635 deg/s, and a circular velocity of 7.542 km/s. LAPAN Tubsat performed technological experiments, earth observation, and attitude control experiments.
LAPAN-A2
The mission of LAPAN-A2, or LAPAN-ORARI, is Earth observation using an RGB camera; maritime traffic monitoring using an automatic identification system (AIS), which can provide a registered ship's name and flag, type, tonnage, current route, and departure and arrival ports; and amateur radio communication (text and voice; ORARI is the Indonesian Amateur Radio Organization). The satellite will be launched, as a secondary payload of India's ASTROSAT mission, into a circular orbit of 650 km with an inclination of 8 degrees. The purpose of the project is to develop the capability to design, assemble, integrate, and test (AIT) micro-satellites. The satellite was successfully launched on 28 September 2015 using India's ISRO Polar Satellite Launch Vehicle (PSLV) and will pass over Indonesia every 97 minutes, or 14 times a day.
LAPAN-A3
LAPAN-A3, or LAPAN-IPB, will perform experimental remote sensing. In addition to that, the satellite will support a global AIS mission and amateur radio communication. The satellite payload is a four-band push broom multi-spectral imaging camera (Landsat band: B, G, R, NIR), which will give a resolution of 18 m and coverage of 120 km from 650 km altitude. The satellite was launched in June 2016.
International cooperation
In 2008 Indonesia signed an agreement with the National Space Agency of Ukraine (NSAU) that will allow Indonesia access to rocket and satellite technologies.
Spaceport development plan
Biak Spaceport plan (2006)
Since 2006, Indonesia and Russia have been discussing the possibility of launching a satellite from Biak island using air launch technology. LAPAN and the Russian Federal Space Agency (RKA) have worked on a government-to-government space cooperation agreement in order to enable such activities in Indonesia. The plan is for an Antonov An-124 aircraft to deliver a Polyot space launch vehicle to the new Indonesian spaceport on Biak island (West Papua province). This spaceport is well suited to commercial launches, as it sits almost exactly on the equator (the nearer the equator the greater the initial velocity imparted to the launched craft, making higher velocity or heavier payloads possible). In the spaceport, the launch vehicle will be fueled, and the satellites will be loaded on it. The Antonov An-124 would then fly to 10 km altitude above the ocean east of Biak island to jettison the launch vehicle. In 2012, discussions resumed. The main stumbling block is Russian concerns over compliance with the terms of the Missile Technology Control Regime; Russia is a co-signatory, Indonesia is not. In 2019, LAPAN officially confirmed plans for building the Biak spaceport, with first flights expected in 2024.
Enggano Launchpad plan (2011)
In 2011, LAPAN planned to build a satellite launchpad at Enggano Island, Bengkulu province, located in the westernmost part of Indonesia, on the Indian Ocean. There are three possible locations, two in Kioyo Natural Park and one in Gunung Nanua Bird Park. The most strategic site for this launchpad is inside Nanua Bird Park, a place called Tanjung Laboko, which is 20 meters above sea level and far from residential areas. The satellite launch pad itself sits on only one hectare of ground, but the safety zone covers 200 hectares. The cost to be disbursed is Rp.40 trillion (around $4.5 billion). The location can handle the assembly of the rockets and launch preparations for satellites of up to 3.8 tonnes. The Bengkulu Natural Resources Conservation Agency has expressed concerns about the plan, because both parks are habitats for a number of bird species native to Enggano Island. The Bengkulu provincial government refused to consider those concerns.
Morotai Spaceport plan (2012)
After studying the surrounding environment at three potential spaceport island sites (Enggano-Bengkulu, Morotai-North Maluku, and Biak-Papua), LAPAN (21/11) announced Morotai Island as a future spaceport site. Planning started in December 2012. The launch site's completion is expected for 2025. In 2013, LAPAN planned to launch an RX-550 experimental satellite launcher from a location in Morotai to be decided. This island was selected according to the following criteria:
Morotai Island's location near the equator, which makes the launch more economical.
The island's having seven runways, one of them 2,400 meters, easily extended to 3,000 meters.
The ease of building on Morotai, which is not densely populated, and consequently little potential for social conflict with native inhabitants.
Morotai Island's east side facing the Pacific Ocean directly, reducing downrange risks to other island populations.
Field installations
Ground stations
Remote-sensing satellite ground station
The Stasiun Bumi Satelit Penginderaan Jauh (Remote Sensing Satellite Earth Station) is located at Parepare, South Sulawesi; it has been in operation since 1993. Its main functions include receiving and recording data from earth observation satellites such as Landsat, SPOT, ERS-1, JERS-1, Terra/Aqua MODIS, and NPP.
Weather satellite ground stations
These ground stations are located at Pekayon, Jakarta, and Biak; since 1982 they have been receiving, recording, and processing data from NOAA, MetOp, and Himawari weather satellites 24 times a day.
Rocket launch site
LAPAN manages Stasiun Peluncuran Roket (Rocket Launching Station) located at Pameungpeuk Beach in the Garut Regency on West Java (). Starting in 1963, the facility was built through cooperation between Indonesia and Japan, as the station was designed by Hideo Itokawa, with the aim of supporting high atmospheric research using Kappa-8 rockets. This installation comprises a Motor Assembly building, a Launch Control Center, a Meteorological Sounding System building, a Rocket Motor Storage hangar, and a dormitory.
Radar
Koto Tabang Equator Atmospheric Radar
The Radar Atmosfer Khatulistiwa Koto Tabang is a radar facility located at Koto Tabang, West Sumatra. It commenced operations in 2001. This facility is used for atmospheric dynamics research, especially areas concerning global climate change, such as El Niño and La Niña climate anomalies.
Laboratory
Remote Sensing Technology and Data Laboratory
The Remote Sensing Technology and Data Laboratory is located at Pekayon, in Jakarta. Its functions include data acquisition systems development, satellite payload imager systems development, satellite ground station system development, preliminary satellite imagery image processing—such as making geometric, radiometric, and atmospheric corrections.
Remote Sensing Applications Laboratory
The Remote Sensing Applications Laboratory at Pekayon, Jakarta, works with remote sensing satellite data applications for Land Resource, Coastal-Marine Resources, Environment Monitoring, and Disaster Mitigation.
Rocket Motor Laboratory
The Laboratorium Motor Roket (Rocket Motor Laboratory) is located at Tarogong, West Java. It designs and produces rocket propulsion systems.
Propellant Laboratory
The Laboratorium Bahan Baku Propelan (Combustion Propellant Laboratory) researches propellants such as oxidizer Ammonium perchlorate and Hydroxyl-terminated polybutadiene.
Satellite Technology Laboratory
The Satellite Technology Laboratory is located at Bogor, West Java. Its functions include research, development, and engineering of the satellite payload, the satellite bus, and facilities of the ground segment.
Aviation Technology Laboratory
The Aviation Technology Laboratory is located at Rumpin, West Java. Its functions include research, development, and engineering of aerodynamics, flight mechanics technology, propulsion technology, avionics technology, and aerostructure.
Observatories
In 2020, Indonesia joined other nations in the hunt for habitable-zone exoplanets, after completion of new astronomical observatory center at Kupang Regency in East Nusa Tenggara province.
Equatorial Atmosphere Observatory
The Equatorial Atmosphere Observatory of LAPAN is located at Koto Tabang, West Sumatera. It researches:
High-resolution observations of wind vectors that will make it possible to study the detailed structure of the equatorial atmosphere, which is related to the growth and decay of cumulus convection;
From long-term continuous observations, relationships between atmospheric waves and global atmospheric circulation;
By conducting observations from near the surface to the ionosphere, it will be possible to reveal dynamical couplings between the equatorial atmosphere and ionosphere;
Based on these results, transports of atmospheric constituents such as ozone and greenhouse gases, and the variations of the Earth's atmosphere that lead to climatic change such as El-Nino and La-Nina.
Solar Radiation Observatory
The Stasiun Pengamat Radiasi Matahari (Solar Radiation Observation Station) observes ultraviolet radiation of the sun. Operations began in 1992. These facilities were developed by Eko Instrument, of Japan, and are located at Bandung and Pontianak.
Aerospace Observatory
For decades, Indonesian astronomy depended on the Bosscha Observatory in Lembang, West Java, which was built in 1928 by the Dutch and which, at that time, had one of the largest telescopes in the southern hemisphere.
At present, the aerospace observatories of LAPAN are located at Pontianak-West Kalimantan, Pontianak-North Sulawesi, Kupang-East Nusa Tenggara, and Watukosek-East Java, and make observations relevant to climatology, meteorology, the sun, and Earth's magnetic field.
National Observatory (Obnas)
The new observatory construction project on Mount Timau in Kupang Regency, East Nusa Tenggara, which began functioning in 2020, is the biggest observatory in Southeast Asia. The observatory is built with the cooperation of the Bandung Institute of Technology (ITB), Nusa Cendana University (UNdana). It is designated as the National Observatory (Obnas), and has a telescope.
The area around Obnas is developed as a national park, with the aim of attracting tourists. The aim of the observatory is to:
develop Indonesian space science to a high degree
economically strengthen the surrounding region, to allow for equitable distribution of inter-regional development, especially in Eastern Indonesia.
Obnas is one of LAPAN's key strategic objectives, along with mastery of rocket technology, building a launch site, growing its National Remote Sensing Data Bank (BDPJN) and National Earth Monitoring System (SPBN), and overall technological development.
Rockets
LAPAN rockets are classified "RX" (Roket Eksperimental) followed by the diameter in millimeters. For example, the RX-100 has a diameter of .
LAPAN's current workhorse rocket propulsion system consists of four stages, namely three RX-420 stages and one RX-320 stage. The RX-420 is planned to be used as a rocket booster for the planned Roket Pengorbit Satelit (RPS, Orbital Satellite Rocket) to fly in 2014.
In 2008, there were optimistic hopes that this rocket, known as the Satellite Launch Vehicle (SLV), would first be launched in Indonesia by 2012, or possibly by 2010 if extra funds became available from the favourable economic situation of 2007–08. In fact, the LAPAN budget for 2007 and 2008 was Rp 200 billion (approximately US$20 million). Budgetary issues surrounding the international credit crisis of 2008–2009 placed many Indonesian technical projects in jeopardy, most especially the completion of the RX-420 and the associated micro-satellite program to world-class standards ahead of the project schedule, as well as the opportunity to work together with international institutions. LAPAN hopes to partner with Indian aerospace institutions for education in satellite-related sciences.
On November 11, 2010, a LAPAN spokesman said that the RX-550 rocket would undergo a static test in December 2010 and a flight test in 2012. The rocket would consist of four stages, and would be part of an RPS-01 rocket to put a satellite in orbit. Previously, the Polar LAPAN-TUBsat (LAPAN-A1) satellite had been successfully placed in orbit and is still functioning well. The aim is to have home-made rockets and satellites.
Beginning in 2005, LAPAN rejuvenated Indonesian expertise in rocket-based weapons systems, in cooperation with the Armed Forces of Indonesia (TNI). In April 2008, the TNI began a new missile research program, alongside LAPAN. Prior to this, eight projects were sponsored by the TNI in Malacca monitoring, using LAPAN-TUBsat, the theft of timber and alleged encroachment on Indonesian territorial waters in the 2009 escalation over Malaysia's claims to the huge gas fields off Ambalat-island.
RX-100
The RX-100 serves to test rocket payload subsystems. It has a diameter of , a length of , and a mass of . It carries enough solid-composite propellant to last 2.5 seconds, which allows for a flight time of 70 seconds, at a maximum speed of Mach 1, at an altitude of , for a range of . The rocket carries a GPS, altimeter, gyroscope, 3-axis accelerometer, CPU, and battery.
RX-150 / 120
The two-stage rocket booster RX-150-120 is supported by the Indonesian Army (TNI-AD) and PT Pindad. With a range of , it was successfully launched from a moving vehicle (Pindad Panser) on March 31, 2009.
R-Han 122
The R-Han 122 rocket is a surface-to-surface missile with a range of up to at Mach 1.8. As of March 28, 2012, fifty R-Han 122s have been successfully launched. The rockets are the result of six years work by LAPAN. By 2014, at least 500 R-Han 122 rockets will be part of the army arsenal.
RX-250
Between 1987 and 2005, LAPAN RX-250 rockets have been regularly launched.
RX-320
LAPAN successfully launched two -diameter RX-320 rockets on 30 May and 2 July 2008 at Pameungpeuk, West Java.
Space launchers
RPS-420 (Pengorbitan-1)
The RPS-420 (Pengorbitan-1) is a micro-satellite orbital launch vehicle, similar to Lambda from Japan, but with lighter, modern materials and modern avionics. It is launched unguided at a 70-degree angle of inclination with a four-stage solid rocket motor launcher.
It has a diameter of , a length of , a lift-off mass of .
It uses solid composite propellant, for a firing time of 13 seconds, yielding a thrust of 9.6 tons, for a flight duration of 205 seconds at a maximum velocity of Mach 4.5. Its range is at an altitude of . Its payload consists of diagnostics, GPS, altimeter, gyro, 3-axis accelerometer, CPU, and battery. The RX-420 was entirely built using local materials.
LAPAN carried out a stationary test on the RX-420 on 23 December 2008 in Tarogong, West Java. The RX-420 had its first test flight at the launching station Cilauteureun, Pameungpeuk District, Garut regency, West Java. The LAPAN RX-420 is the test bed for an entirely indigenously developed satellite launch vehicle. The RX-420 is suitable for launching micro-satellites of or less and nano-satellites of or less in co-development with Technische Universität Berlin.
The rocket launching plan was extended in 2010 by launching combined RX-420-420s, and in 2011 for combined RX-420-420 – 320, and SOB 420.
RPS-420/520 (Pengorbitan-2)
In the planning stage are the RX-420 with multiple customizable configuration boosters and the planned RX-520, which is predicted to be able to launch a greater than payload into orbit. This large rocket is intended to be fueled by high-pressure liquid hydrogen peroxide, and various hydrocarbons are under evaluation. The addition of RX-420 boosters to the RX-520 should increase lifting capacity to over , although if too expensive, the proven Russian Soyuz and Energiya rockets will likely be employed.
The RX-520 consists of one RX-420 and two RX-420 as a stage-1 booster, one RX-420 as stage 2, one RX-420 as stage 3, and as a payload launcher, one RX-320 as stage 4.
RX-550
In 2013, LAPAN launched an RX-550 experimental satellite launcher from a point in Morotai.
LAPAN Library
In June 2009, LAPAN put online its extensive library of over 8000 titles on aeronautics and astronautics. This is the largest dedicated aerospace library in ASEAN and it was hoped it would bring Indonesian and ASEAN talent into the LAPAN program, especially those disadvantaged by location. It was unclear how much content would be available freely to the public.
Komurindo
Komurindo or Kompetisi Muatan Roket Indonesia is the Indonesia Payload Rocket Competition. The competition was established by LAPAN, the education ministry, and some universities, to enhance rocket research by the universities. The third competition was held in late June 2011 in Pandansimo Beach of Bantul, Yogyakarta.
Aircraft
LAPAN XT-400
LSU-02
LSU-03
LAPAN Fighter Experiment (LFX)
Logo
End of LAPAN
On 1 September 2021, LAPAN became the Space and Aeronautics Research Organization of the National Research and Innovation Agency (BRIN), signaling the beginning of the institutional integration of the former LAPAN into BRIN.
See also
List of government space agencies
List of rocket launch sites
Pratiwi Sudarmono
References
External links
LAPAN remote Sensing
https://web.archive.org/web/20090815080000/http://www.lapan.go.id/lombaRUM2009/index.php
LAPAN Satellite Technology
State Ministry of Research and Technology, Indonesia (RISTEK)
Government agencies of Indonesia
Space agencies
Science and technology in Indonesia
Space program of Indonesia
Aerospace
Defense companies of Indonesia
1963 establishments in Indonesia
2021 disestablishments in Indonesia | National Institute of Aeronautics and Space | [
"Physics"
] | 4,889 | [
"Spacetime",
"Space",
"Aerospace"
] |
6,224,402 | https://en.wikipedia.org/wiki/Vilma%20Esp%C3%ADn | Vilma Lucila Espín Guillois (7 April 1930 – 18 June 2007) was a Cuban revolutionary, feminist, and chemical engineer. She helped supply and organize the 26th of July Movement as an underground spy, and took an active role in many branches of the Cuban government from the conclusion of the revolution to her death. Espín helped found the Federation of Cuban Women and promoted equal rights for Cuban women in all spheres of life.
As the wife of Raúl Castro and the sister-in-law of Fidel Castro, she was essentially the First Lady of Cuba for about 45 years.
Early life and education
Vilma Espín Guillois was born on 7 April 1930, in Santiago de Cuba. She was the daughter of a wealthy Cuban lawyer, José Espín, and his wife Margarita Guillois. She had four siblings: Nilsa, Iván, Sonia and José. Espín attended Academia Pérez-Peña for primary school and studied ballet and singing at the Asociación Pro-Arte Cubano during the 1940s. In the 1950s, she studied chemical engineering at Universidad de Oriente, Santiago de Cuba (one of the first women in Cuba to study this subject). While attending Universidad de Oriente, Santiago de Cuba, she played volleyball, tennis, and was a soprano in the University Choir. In university, Espín met her mentor Frank País in a university group called Oriente Revolutionary Action (ARO), which was responsible for the assault on the Moncada barracks. After graduating, her father encouraged her to attend MIT in Cambridge, Massachusetts, to complete her post-graduate studies in the hopes that visiting America would dissuade her from becoming involved in socialist activity. When she finally acquiesced, her brief academic career at MIT left her with even more animosity toward the United States, as she officially joined the 26th of July Movement on her way back to Cuba through Mexico. Espín completed only one semester at MIT.
Role in the Cuban revolution
Returning home, she became more involved with the opposition to the dictator Fulgencio Batista. A meeting with revolutionary leader Frank País led her to become a leader of the revolutionary movement in Oriente province. Espín met the Castro brothers who had relocated to Mexico after their failed armed attack on the Moncada Barracks in July 1953 and release from prison in 1955. Espin acted as a messenger between the Julio 26 Movement in Mexico and Pais back in Cuba. She then went on to assist the revolutionaries in the Sierra Maestra mountains after the 26th of July Movement's return to Cuba on the Granma yacht in November 1956.
Espín's ability to speak both Spanish and English allowed her to represent the revolutionary movement on an international scale. Pepín Bosch, an executive of the Bacardi Corporation, arranged a meeting between CIA Inspector General Lyman Kirkpatrick and representatives of the 26th of July Movement in 1957. Espín, as both a revolutionary leader and the daughter of a Bacardi executive, told Kirkpatrick that the revolutionaries only wanted "what you Americans have: clean politics and a clean police system." She also acted as an interpreter for an interview between New York Times reporter Herbert Matthews and Fidel Castro in 1957, which served the dual purpose of spreading news of the revolution and assuring Cubans and the international community that Batista's claims of Castro's death were false.
Role in the Federation of Cuban Women
Vilma Espín was an outspoken supporter of gender equality in Cuba, but distinctly separated herself and the goals of the Federation of Cuban Women from traditional feminism, insisting that the organization's advocacy was 'feminine', not 'feminist'. Her involvement in the revolution helped transform the role of women in Cuba and in 1960, Espín became the president of the Federation of Cuban Women, and remained in that position until her death in 2007. The organization's primary goals were educating women, giving them the necessary skills to seek gainful employment, and above all encouraging them to participate in politics and support the revolutionary government. In 1960, when sugar mills and cane fields were under attack across Cuba shortly before the Bay of Pigs invasion, the Federation of Cuban Women created the Emergency Medical Response Brigades to mobilize women against counter-revolution. The Cuban government and the Federation encouraged women to join the labor force, even going so far as to pass the Cuban Family Code in 1975, a law mandating that men must help with household chores and childcare to lighten the workload for working mothers.
Role in the Cuban government
Espín served as a member of the Central Committee of the Cuban Communist Party from 1965 to 1989. She also held many other roles in the Cuban government, including chair of the Commission for Social Prevention from 1967 to 1971, director of Industrial Development in the Ministry of Food in 1969, president of the Institute of Childcare in 1971, and member of the Cuban Council of State in 1976. In addition to her roles within Cuba, Espín also served as Cuba's representative at the United Nations General Assembly.
Espín took on the role of Cuba's First Lady for 45 years, initially taking on the role as the sister-in-law to Fidel Castro, who was divorced at the time he came to power. She officially became the First Lady in 2006 when her husband, Raúl Castro, became president. Additionally, she was granted the title of "Secretary of State" in the Government of Cuba.
Espín headed the Cuban Delegation to the Congress of the International Federation of Democratic Women in Chile in September 1959. She also headed the Cuban delegations to subsequent Conferences on Women, praising them as "invaluable to women in developing countries."
Family
Espín was married to Raúl Castro, the former First Secretary of the Communist Party of Cuba, who is the brother to former First Secretary Fidel Castro. Their wedding took place in 1959, only weeks after the 26th of July Movement had successfully overthrown dictator Fulgencio Batista. She had four children (Deborah, Mariela, Nilsa, and Alejandro Castro Espín) and eight grandchildren. Her daughter, Mariela Castro, currently heads the Cuban National Center for Sex Education, and her son, Alejandro Castro Espín, is a Colonel in the Ministry of Interior.
Death and legacy
Espín died in Havana at 4:14 p.m. EDT on 18 June 2007, following a long illness. An official mourning period was declared from 8 p.m. on 18 June until 10 p.m. on 19 June. A funeral ceremony was held at the Karl Marx Theatre in Havana the day after her death. Thousands of Cubans paid their respects in a receiving line at the Plaza of the Revolution in Havana. Raúl Castro was in the receiving line, but Fidel Castro was not present. The Cuban government released a statement praising her as "one of the most relevant fighters for women's emancipation in our country and in the world." Her body was cremated, and her remains rest in the Frank País Mausoleum, Municipio II Frente, in the province of Santiago de Cuba, Cuba. The Vilma Espín elementary school was opened in Havana in April 2013. Espín founded the Frente Continental de Mujeres Contra la Intervención (Continental Women's Front Against Intervention, FCMCI) and the Regional Center of the International Democratic Federation of Women for the Americas and Caribbean.
Notes
References
External links
Biographies of Spouses of Heads of State and Government of the Americas
Short Biography of Vilma Espin at Cuba.dk
http://cubahistory.org/en/corruption-a-coups/attack-on-moncada-barracks.html
Los Angeles Times
Obituary: Vilma Espín Guillois, The Guardian
1930 births
2007 deaths
Communist Party of Cuba politicians
Cuban people of French descent
Cuban revolutionaries
Cuban guerrillas
20th-century Cuban women politicians
20th-century Cuban politicians
Fidel Castro family
Government ministers of Cuba
Recipients of the Lenin Peace Prize
Massachusetts Institute of Technology alumni
People from Santiago de Cuba
People of the Cuban Revolution
Chemical engineers
Socialist feminists
Women in war in the Caribbean
Women in war 1945–1999
Female revolutionaries
21st-century Cuban women politicians
21st-century Cuban politicians
Women's International Democratic Federation people | Vilma Espín | [
"Chemistry",
"Engineering"
] | 1,663 | [
"Chemical engineering",
"Chemical engineers"
] |
6,226,425 | https://en.wikipedia.org/wiki/Sommerfeld%20radiation%20condition | In applied mathematics and theoretical physics, the Sommerfeld radiation condition is a concept from the theory of differential equations and scattering theory used for choosing a particular solution to the Helmholtz equation. It was introduced by Arnold Sommerfeld in 1912
and is closely related to the limiting absorption principle (1905) and the limiting amplitude principle (1948).
The boundary condition established by the principle essentially chooses a solution of a wave equation that radiates only outward from known sources; instead of allowing arbitrary inbound waves propagating in from infinity, it excludes them.
The theorem underpinned by the condition only holds true in three spatial dimensions. In two dimensions it breaks down because the power of a wave does not fall off as one over the radius squared; in four and more spatial dimensions, the power falls off much faster with distance.
Formulation
Arnold Sommerfeld defined the condition of radiation for a scalar field satisfying the Helmholtz equation as
"the sources must be sources, not sinks of energy. The energy which is radiated from the sources must scatter to infinity; no energy may be radiated from infinity into ... the field."
Mathematically, consider the inhomogeneous Helmholtz equation

$(\nabla^2 + k^2)\, u = -f \quad \text{in } \mathbb{R}^n,$

where $n$ is the dimension of the space, $f$ is a given function with compact support representing a bounded source of energy, and $k > 0$ is a constant, called the wavenumber. A solution $u$ to this equation is called radiating if it satisfies the Sommerfeld radiation condition

$\lim_{|x| \to \infty} |x|^{\frac{n-1}{2}} \left( \frac{\partial}{\partial |x|} - ik \right) u(x) = 0$

uniformly in all directions $\hat{x} = x/|x|$

(above, $i$ is the imaginary unit and $|x|$ is the Euclidean norm). Here it is assumed that the time-harmonic field is $e^{-i\omega t} u$. If the time-harmonic field is instead $e^{i\omega t} u$, one should replace $-ik$ with $+ik$ in the Sommerfeld radiation condition.
The Sommerfeld radiation condition is used to solve the Helmholtz equation uniquely. For example, consider the problem of radiation due to a point source in three dimensions, so the function $f$ in the Helmholtz equation is $f(x) = \delta(x)$, where $\delta$ is the Dirac delta function. This problem has an infinite number of solutions, for example, any function of the form

$u = c\, u_+ + (1 - c)\, u_-,$

where $c$ is a constant, and

$u_\pm(x) = \frac{e^{\pm ik|x|}}{4\pi |x|}.$

Of all these solutions, only $u_+$ satisfies the Sommerfeld radiation condition and corresponds to a field radiating from the source. The other solutions are unphysical. For example, $u_-$ can be interpreted as energy coming from infinity and sinking at the source.
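The point-source example can be checked numerically. The following is a minimal sketch (Python with NumPy, with an assumed wavenumber k = 2 chosen only for illustration) that evaluates the radiation-condition expression $|x|^{(n-1)/2}(\partial/\partial|x| - ik)u$ at increasing radii for $u_+$ and $u_-$:

```python
import numpy as np

k = 2.0  # assumed wavenumber for illustration

def u_plus(r):
    # outgoing point-source solution in three dimensions
    return np.exp(1j * k * r) / (4 * np.pi * r)

def u_minus(r):
    # incoming solution (unphysical sink at the origin)
    return np.exp(-1j * k * r) / (4 * np.pi * r)

def sommerfeld_residual(u, r, h=1e-6):
    # |x|^((n-1)/2) * (d/dr - ik) u evaluated at radius r, with n = 3
    du_dr = (u(r + h) - u(r - h)) / (2 * h)
    return r * (du_dr - 1j * k * u(r))

for r in (10.0, 100.0, 1000.0):
    print(r, abs(sommerfeld_residual(u_plus, r)), abs(sommerfeld_residual(u_minus, r)))
```

As the radius grows, the residual for $u_+$ tends to zero while the residual for $u_-$ approaches a nonzero constant, which is exactly what the condition distinguishes.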
See also
Limiting absorption principle
Limiting amplitude principle
Nonradiation condition
Notes
References
External links
Radiation
Boundary conditions | Sommerfeld radiation condition | [
"Physics",
"Chemistry"
] | 496 | [
"Transport phenomena",
"Waves",
"Physical phenomena",
"Radiation"
] |
6,226,587 | https://en.wikipedia.org/wiki/Necklace%20%28combinatorics%29 | In combinatorics, a k-ary necklace of length n is an equivalence class of n-character strings over an alphabet of size k, taking all rotations as equivalent. It represents a structure with n circularly connected beads which have k available colors.
A k-ary bracelet, also referred to as a turnover (or free) necklace, is a necklace such that strings may also be equivalent under reflection. That is, given two strings, if each is the reverse of the other, they belong to the same equivalence class. For this reason, a necklace might also be called a fixed necklace to distinguish it from a turnover necklace.
Formally, one may represent a necklace as an orbit of the cyclic group acting on n-character strings over an alphabet of size k, and a bracelet as an orbit of the dihedral group. One can count these orbits, and thus necklaces and bracelets, using Pólya's enumeration theorem.
Equivalence classes
Number of necklaces
There are

$N_k(n) = \frac{1}{n} \sum_{d \mid n} \varphi(d)\, k^{n/d}$

different k-ary necklaces of length n, where $\varphi$ is Euler's totient function.
When the beads are restricted to a particular color multiset $(n_1, \dots, n_k)$, where $n_i$ is the number of beads of color $i$, there are

$\frac{1}{n} \sum_{d \mid \gcd(n_1, \dots, n_k)} \varphi(d) \binom{n/d}{n_1/d,\ \dots,\ n_k/d}$

different necklaces made of all the beads of the multiset.

Here $n = n_1 + \cdots + n_k$ and $\binom{n}{n_1, \dots, n_k}$ is the multinomial coefficient.
These two formulas follow directly from Pólya's enumeration theorem applied to the action of the cyclic group $C_n$ acting on the set of all functions $f\colon \{1, \dots, n\} \to \{1, \dots, k\}$.
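As an illustration, here is a minimal Python sketch (not from the article; parameters kept small) that evaluates the totient-sum formula and checks it against a brute-force enumeration of rotation classes:

```python
from math import gcd

def euler_phi(m):
    # Euler's totient function by direct counting (fine for small m)
    return sum(1 for i in range(1, m + 1) if gcd(i, m) == 1)

def necklaces(k, n):
    # N_k(n) = (1/n) * sum over d | n of phi(d) * k^(n/d)
    return sum(euler_phi(d) * k ** (n // d) for d in range(1, n + 1) if n % d == 0) // n

def necklaces_brute_force(k, n):
    # count rotation classes of all k^n strings directly
    seen, count = set(), 0
    for m in range(k ** n):
        s = tuple((m // k ** i) % k for i in range(n))
        if s not in seen:
            count += 1
            seen.update(s[i:] + s[:i] for i in range(n))
    return count

assert necklaces(2, 6) == necklaces_brute_force(2, 6) == 14
assert necklaces(3, 4) == necklaces_brute_force(3, 4) == 24
```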
If all k colors must be used, the count is

$L_k(n) = \frac{k!}{n} \sum_{d \mid n} \varphi(d) \left\{ {n/d \atop k} \right\},$

where $\left\{ {m \atop k} \right\}$ are the Stirling numbers of the second kind.

$N_k(n)$ and $L_k(n)$ are related via the binomial coefficients:

$N_k(n) = \sum_{j=1}^{k} \binom{k}{j} L_j(n)$

and

$L_k(n) = \sum_{j=1}^{k} (-1)^{k-j} \binom{k}{j} N_j(n).$
Number of bracelets
The number of different k-ary bracelets of length n is

$B_k(n) = \begin{cases} \tfrac{1}{2} N_k(n) + \tfrac{1}{4}(k+1)\, k^{n/2} & \text{if } n \text{ is even} \\ \tfrac{1}{2} N_k(n) + \tfrac{1}{2}\, k^{(n+1)/2} & \text{if } n \text{ is odd,} \end{cases}$

where Nk(n) is the number of k-ary necklaces of length n. This follows from Pólya's method applied to the action of the dihedral group $D_n$.
Case of distinct beads
For a given set of n beads, all distinct, the number of distinct necklaces made from these beads, counting rotated necklaces as the same, is n!/n = (n − 1)!. This is because the beads can be linearly ordered in n! ways, and the n circular shifts of such an ordering all give the same necklace. Similarly, the number of distinct bracelets, counting rotated and reflected bracelets as the same, is (n − 1)!/2, for n ≥ 3.
If the beads are not all distinct, having repeated colors, then there are fewer necklaces (and bracelets). The above necklace-counting polynomials give the number of necklaces made from all possible multisets of beads. Pólya's pattern inventory polynomial refines the counting polynomial, using a variable for each bead color, so that the coefficient of each monomial counts the number of necklaces on a given multiset of beads.
Aperiodic necklaces
An aperiodic necklace of length n is a rotation equivalence class having size n, i.e., no two distinct rotations of a necklace from such a class are equal.
According to Moreau's necklace-counting function, there are

$M_k(n) = \frac{1}{n} \sum_{d \mid n} \mu(d)\, k^{n/d}$

different k-ary aperiodic necklaces of length n, where μ is the Möbius function. The two necklace-counting functions are related by $N_k(n) = \sum_{d \mid n} M_k(d)$, where the sum is over all divisors of n, which is equivalent by Möbius inversion to $M_k(n) = \sum_{d \mid n} \mu\!\left(\tfrac{n}{d}\right) N_k(d)$.
Each aperiodic necklace contains a single Lyndon word so that Lyndon words form representatives of aperiodic necklaces.
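A small Python sketch (again an illustration, not from the article) evaluates Moreau's function and checks the divisor-sum relation between the two counting functions:

```python
def mobius(m):
    # Möbius function via trial factorization (fine for small m)
    result, p = 1, 2
    while p * p <= m:
        if m % p == 0:
            m //= p
            if m % p == 0:
                return 0          # squared prime factor
            result = -result
        p += 1
    return -result if m > 1 else result

def aperiodic_necklaces(k, n):
    # Moreau's function: M_k(n) = (1/n) * sum over d | n of mu(d) * k^(n/d)
    return sum(mobius(d) * k ** (n // d) for d in range(1, n + 1) if n % d == 0) // n

def necklaces(k, n):
    # consistency check of the relation N_k(n) = sum over d | n of M_k(d)
    return sum(aperiodic_necklaces(k, d) for d in range(1, n + 1) if n % d == 0)

assert aperiodic_necklaces(2, 6) == 9    # the 9 binary Lyndon words of length 6
assert necklaces(2, 6) == 14
```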
See also
Lyndon word
Inversion (discrete mathematics)
Necklace problem
Necklace splitting problem
Permutation
Proofs of Fermat's little theorem#Proof by counting necklaces
Forte number, a representation of binary bracelets of length 12 used in atonal music.
References
External links
Combinatorics on words
Enumerative combinatorics | Necklace (combinatorics) | [
"Mathematics"
] | 766 | [
"Combinatorics on words",
"Enumerative combinatorics",
"Combinatorics"
] |
6,227,150 | https://en.wikipedia.org/wiki/HEC-RAS | HEC-RAS is simulation software used in computational fluid dynamics – specifically, to model the hydraulics of water flow through natural rivers and other channels.
The program was developed by the United States Army Corps of Engineers in order to manage the rivers, harbors, and other public works under their jurisdiction; it has found wide acceptance by many others since its public release in 1995.
The Hydrologic Engineering Center (HEC) in Davis, California, developed the River Analysis System (RAS) to aid hydraulic engineers in channel flow analysis and floodplain determination. It includes numerous data entry capabilities, hydraulic analysis components, data storage and management capabilities, and graphing and reporting capabilities.
Functionality
The basic computational procedure of HEC-RAS for steady flow is based on the solution of the one-dimensional energy equation. Energy losses are evaluated by friction and contraction / expansion. The momentum equation may be used in situations where the water surface profile is rapidly varied. These situations include hydraulic jumps, hydraulics of bridges, and evaluating profiles at river confluences.
For unsteady flow, HEC-RAS solves the full, dynamic, 1-D Saint Venant Equation using an implicit, finite difference method. The unsteady flow equation solver was adapted from Dr. Robert L. Barkau's UNET package.
HEC-RAS is equipped to model a network of channels, a dendritic system or a single river reach. Certain simplifications must be made in order to model some complex flow situations using the HEC-RAS one-dimensional approach. It is capable of modeling subcritical, supercritical, and mixed flow regimes along with the effects of bridges, culverts, weirs, and structures.
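To give a flavor of the kind of one-dimensional energy balance such models solve between cross sections, here is a minimal standard-step sketch in Python for a rectangular channel. It is not HEC-RAS code, and all numerical values (discharge, width, roughness, slope, section spacing, downstream depth) are assumptions chosen for the example:

```python
# Minimal standard-step backwater sketch for a rectangular channel.
g = 9.81          # gravitational acceleration, m/s^2
Q = 20.0          # discharge, m^3/s
b = 10.0          # channel width, m
n = 0.03          # Manning roughness coefficient
S0 = 0.001        # bed slope
dx = 100.0        # distance between cross sections, m

def section(y):
    # hydraulic properties of a rectangular cross section at depth y
    A = b * y                          # flow area
    R = A / (b + 2 * y)                # hydraulic radius
    V = Q / A                          # mean velocity
    Sf = (n * V / R ** (2 / 3)) ** 2   # friction slope (Manning, SI units)
    return V, Sf

def step_upstream(y_down):
    # solve the 1-D energy equation for the upstream depth by bisection,
    # restricting the search to subcritical depths
    V1, Sf1 = section(y_down)
    E1 = y_down + V1 ** 2 / (2 * g)
    lo = (Q ** 2 / (g * b ** 2)) ** (1 / 3)   # critical depth
    hi = 10.0
    for _ in range(60):
        y2 = 0.5 * (lo + hi)
        V2, Sf2 = section(y2)
        E2 = y2 + V2 ** 2 / (2 * g)
        # energy balance: E2 + S0*dx = E1 + (average friction slope) * dx
        resid = (E2 + S0 * dx) - (E1 + 0.5 * (Sf1 + Sf2) * dx)
        if resid > 0:
            hi = y2
        else:
            lo = y2
    return y2

y = 2.5   # assumed downstream control depth, m (subcritical backwater)
for i in range(5):
    y = step_upstream(y)
    print(f"{(i + 1) * dx:5.0f} m upstream: depth = {y:.3f} m")
```

Each call applies the energy equation with an averaged friction slope between two cross sections; HEC-RAS performs an analogous computation, but with surveyed irregular geometry and additional loss terms.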
Version 5.0.7 as of March 2019 supports Windows 7, 8, 8.1, and 10 64-bit only. Version 6.0 and newer support 64-bit Windows 7-11, and version 6.1 is available for Linux.
Applications
HEC-RAS is a computer program for modeling water flowing through systems of open channels and computing water surface profiles. HEC-RAS finds particular commercial application in floodplain management and flood insurance studies to evaluate floodway encroachments. Some of the additional uses are: bridge and culvert design and analysis, levee studies, and channel modification studies. It can be used for dam breach analysis, though other modeling methods are presently more widely accepted for this purpose.
Advantages
HEC-RAS has notable merits: its support by the US Army Corps of Engineers, ongoing enhancements, and its acceptance by many government agencies and private firms. It is in the public domain and peer-reviewed, and available to download free of charge from HEC's web site. Various private companies are registered as official "vendors" and offer consulting services and add-on software. Some also distribute the software in countries that are not permitted to access US Army web sites. However, the direct download from HEC includes extensive documentation, and scientists and engineers versed in hydraulic analysis should have little difficulty utilizing the software.
Disadvantages
Users may encounter numerical instability problems during unsteady analyses, especially in steep and/or highly dynamic rivers and streams. Careful model setup often makes it possible to overcome such instability issues in river problems. Numerical stability concerns are an intrinsic property of finite difference numerical solution schemes.
Version history
The first version of HEC-RAS was released in 1995. HEC-RAS 1.0 solved the same numerical equations as the 1968 HEC-2 program.
Prior to the 2016 update to Version 5.0, the program was one-dimensional, meaning that there is no direct modeling of the hydraulic effect of cross section shape changes, bends, and other two- and three-dimensional aspects of flow. The release of Version 5.0 introduced two-dimensional modeling of flow as well as sediment transfer modeling capabilities.
GeoHECRAS
GeoHECRAS is a 2D/3D visualization and editing data wrapper for the HEC-RAS software and is used for flood control and flood mitigation engineering studies, including production of Federal Emergency Management Agency flood hazard maps and other river engineering studies.
Features related to HEC-RAS include:
Undo and redo HEC-RAS editing
Multiple document interface (MDI) of HEC-RAS projects
Use of AutoCAD and MicroStation CAD drawings and terrain surfaces
Use of GIS databases
Automated cross section generation
Automated production of floodplain maps
Design and analysis of roadway crossings (bridge and culvert)
Adaptive 2D mesh generation
WMS
WMS (Watershed Modeling System) is hydrology software that provides pre- and post-processing tools for use with HEC-RAS. The development of WMS by Aquaveo was funded primarily by the United States Army Corps of Engineers.
Features related to HEC-RAS include:
Using feature objects (centerline, cross section lines) and a TIN to develop the geometry of a HEC-RAS model.
Editing, merging, and creating cross sections in a database for use with HEC-RAS and other hydraulic models.
Delineating flood plains from water surface elevation data. Water surface elevations can be computed by HEC-RAS, defined interactively, or imported from a file.
Linking multiple simulations of HEC-1 to HEC-RAS to determine the uncertainty in modeling parameters on a delineated flood plain. Curve Number and Precipitation can be stochastically varied among HEC-1 parameters and Manning's n value for HEC-RAS.
See also
Hydraulic engineering
References
External links
HEC-RAS home page at the US Army Corps of Engineers, Hydrologic Engineering Center
An output video of a flood analysis done with HEC-RAS and visualization in ArcGIS
GeoHECRAS Homepage at CivilGEO
https://www.hec.usace.army.mil/software/hec-ras/download.aspx
Hydraulic engineering
Scientific simulation software
Hydrology software | HEC-RAS | [
"Physics",
"Engineering",
"Environmental_science"
] | 1,222 | [
"Hydrology",
"Hydrology software",
"Physical systems",
"Hydraulics",
"Civil engineering",
"Hydraulic engineering"
] |
6,229,024 | https://en.wikipedia.org/wiki/Civic%20amenity%20site | A civic amenity site (CA site) or household waste recycling centre (HWRC) (both terms are used in the United Kingdom) is a facility where the public can dispose of household waste and also often containing recycling points. Civic amenity sites are run by the local authority in a given area. Collection points for recyclable waste such as green waste, metals, glass and other waste types (including WVO) are available. Items that cannot be collected by local waste collection schemes such as bulky waste are also accepted.
In the United Kingdom, civic amenity sites are informally called "tips" or "dumps".
In continental Europe, there are usually several types of collection sites:
sorted waste container stands: a group of containers of the most common recyclable household waste, such as plastics, paper, glass, metal cans, liquid packaging board, electrotechnical waste, recyclable clothing and so on. Such stands should be freely accessible by walking. They are often found near bus or tram stops, city squares, village commons, shops etc. A city or a country can have any colour convention to distinguish containers by type of waste.
waste collection courtyards: except for the mentioned household waste, they are specialized for large waste from citizens: furniture, construction waste, compostable gardening waste – or special types of waste (chemical or other hazardous waste etc.). The waste is usually delivered by cars, vans or trucks and the station has an overseeing staff and opening hours, but services are free of charge. Smaller towns have one such site, cities can have more such courtyards in various neighbourhoods.
waste purchase stations: especially for metal scrap (iron and other metals), but also for paper, glass etc. Such stations have been in existence longer than modern disposal stations. Coexistence of paid and free systems of collection can result in homeless, socially marginalized or poor people picking waste from the free containers to sell at the waste purchase station.
See also
Transfer station (waste management)
References
Waste collection
Waste treatment technology | Civic amenity site | [
"Chemistry",
"Engineering"
] | 418 | [
"Water treatment",
"Waste treatment technology",
"Environmental engineering"
] |
6,230,869 | https://en.wikipedia.org/wiki/Signal%20patch | A protein signal patch contains information to send a given protein to the indicated location in the cell. It is made up of amino acid residues that are distant to one another in the primary sequence, but come close to each other in the tertiary structure of the folded protein (see red patch in the diagram). Signal patches, unlike some signal sequences, are not cleaved from the mature protein after sorting. They are very difficult to predict. Nuclear localization signals are often signal patches although signal sequences also exist. They are found on proteins destined for the nucleus and enable their selective transport from the cytosol into the nucleus through the nuclear pore complexes.
See also
protein targeting
signal peptide
Protein targeting | Signal patch | [
"Biology"
] | 141 | [
"Protein targeting",
"Cellular processes"
] |
6,231,089 | https://en.wikipedia.org/wiki/Planar%20Doppler%20velocimetry | Planar Doppler Velocimetry (PDV), also referred to as Doppler Global Velocimetry (DGV), determines flow velocity across a plane by measuring the Doppler shift in frequency of light scattered by particles contained in the flow. The Doppler shift, Δfd, is related to the fluid velocity.
The relatively small frequency shift (order 1 GHz) is discriminated using an atomic or molecular vapor filter. This approach is conceptually similar to what is now known as Filtered Rayleigh Scattering (Miles and Lempert, 1990).
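The basic Doppler-shift relation used in PDV can be written as $\Delta f_d = (\hat{o} - \hat{l}) \cdot \mathbf{v} / \lambda$, with $\hat{l}$ the laser propagation direction and $\hat{o}$ the observation direction. A minimal Python sketch, with wavelength, geometry, and velocity values assumed only to show the order of magnitude of the shift quoted above:

```python
import numpy as np

wavelength = 532e-9                      # frequency-doubled Nd:YAG, m
l_hat = np.array([1.0, 0.0, 0.0])        # laser propagation direction
o_hat = np.array([0.0, 1.0, 0.0])        # observation (collection) direction
v = np.array([300.0, 0.0, 0.0])          # particle velocity, m/s

delta_f = np.dot(o_hat - l_hat, v) / wavelength
print(f"Doppler shift: {delta_f / 1e9:.2f} GHz")   # about -0.56 GHz for these values
```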
Equipment
Up to now, a typical one-component PDV instrument utilizes a pulsed injection-seeded Nd:YAG laser, one or two scientific grade CCD cameras and a molecular iodine filter. The laser is used to illuminate a plane of the flow with narrow spectral linewidth light. The Doppler shifted scattered light is then split into two paths using a beamsplitter and imaged onto the camera(s). In this manner the absolute absorption of scattered light, as it passes through an iodine cell placed in one of the beam paths, is measured at every spatial location within the object plane. For scattering by relatively large (i.e. Mie scattering) particles, this absorption is a function of particle velocity alone. Accurate calibration and image mapping algorithms have been developed with the result that velocity accuracies of ~1–2 m/s are possible. More details concerning the history of PDV, the art of its application and recent advances can be found in comprehensive review articles by Elliott and Beutner (1999) and Samimy and Wernet (2000).
Strengths
PDV is well suited for high-speed flow measurements where concerns about particle seeding make PIV impractical. Although PDV requires particles to scatter light, individual particles do not need to be imaged thus allowing the use of much smaller seed particles and making the measurements less sensitive to particle seed density. For example, in some unheated supersonic flow facilities it is possible to use condensation of a vapor, such as water, acetone or ethanol, to produce seed particles in the flow. Particles formed using this method, known as product formation, have been estimated to be ~50 micrometres in diameter.
Unlike PIV, PDV requires only a single image of the flow field. This image may be taken over a long period (relative to characteristic time scales within the flow) to produce time-averaged images or, alternatively, using a single laser pulse (approximately 10 ns) to obtain a measurement of instantaneous flow velocities. The duration of a single laser pulse is at least an order of magnitude shorter than pulse separations used within PIV. This feature of PDV enables improved resolution of sharp velocity discontinuities such as shock waves.
In addition, PDV has an inherently higher resolution than PIV (where small image subregions, typically 16 × 16 pixels, are used to determine the velocity), and a velocity measurement may be obtained for each pixel within the flow image. However, particularly in the case of instantaneous measurement using PDV, some pixel binning is used to attenuate the deleterious effects of laser speckle and improve the signal-to-noise ratio.
Weaknesses
The main weakness of PDV is the complex optical set-up required to get accurate measurements. For each component of velocity, two images (signal and reference) are required, which typically necessitates two cameras. Obtaining all three components of velocity therefore requires the simultaneous use of up to six cameras, although recent work by Charrett et al. (2006) and Hawkes et al. (2004) has progressively reduced the number of cameras required from six to a single camera. In addition, the laser used for the measurements must have a narrow linewidth, which is typically achieved by injection seeding of the laser cavity. Even with seeding, the laser frequency can fluctuate with time and must be monitored. These requirements introduce additional complexity to the experimental set-up. PDV systems, although used in many laboratories, are not yet commercially available and can be quite expensive (equipment, data processing, experience, labor, etc.) if built from scratch.
References
Elliott, G. S. and Beutner, T. J., “Molecular filter based planar Doppler velocimetry,” Progress in Aerospace Sciences, Vol. 35, 799, 1999.
McKenzie, R.L., “Measurement capabilities of planar Doppler velocimetry using pulsed lasers,” Applied Optics, Vol. 35, 948, 1996.
Samimy, M., and Wernet, M.P., “Review of planar multiple-component velocimetry in high-speed flows,” AIAA Journal, Vol. 38, 553, 2000.
Thurow, B., Jiang, N., Lempert, W. and Samimy, M., “MHz Rate Planar Doppler Velocimetry in Supersonic Jets,” AIAA Journal, Vol. 43, 500, 2005.
Hawkes, G.S., Thorpe, S.J. and Ainsworth, R.W., “Development of a Three-Component Doppler Global Velocimetry System”, in Proceedings of the 17th Symposium of Measuring Techniques in Transonic and Supersonic Flow in Cascades and Turbomachines, Stockholm, Sweden (2004).
Charrett, T.O.H, Ford, H.D. and Tatam, R.P., “Single Camera 3D Planar Doppler Velocity Measurements using Imaging Fibre Bundles”, Journal of Physics, Conference Series, Vol. 45 (2006) 193-200.
Eddie Irani and L. Scott Miller, "Evaluation of a Basic Doppler Global Velocimetry System", SAE-951427, 1995
External links
http://www.psp-tsp.com/pdv/
Measurement | Planar Doppler velocimetry | [
"Physics",
"Mathematics"
] | 1,249 | [
"Quantity",
"Physical quantities",
"Measurement",
"Size"
] |
6,231,567 | https://en.wikipedia.org/wiki/Nucleotidase | A nucleotidase is a hydrolytic enzyme that catalyzes the hydrolysis of a nucleotide into a nucleoside and a phosphate.
A nucleotide + H2O = a nucleoside + phosphate
For example, it converts adenosine monophosphate to adenosine, and guanosine monophosphate to guanosine.
Nucleotidases have an important function in digestion in that they break down consumed nucleic acids.
They can be divided into two categories, based upon the end that is hydrolyzed:
5'-nucleotidase - NT5C, NT5C1A, NT5C1B, NT5C2, NT5C3
3'-nucleotidase - NT3
5'-Nucleotidases cleave off the phosphate from the 5' end of the sugar moiety. They can be classified into various kinds depending on their substrate preferences and subcellular localization. Membrane-bound 5'-nucleotidases display specificity toward adenosine monophosphates and are involved predominantly in the salvage of preformed nucleotides and in signal transduction cascades involving purinergic receptors. Soluble 5'-nucleotidases are all known to belong to the haloacid dehalogenase superfamily of enzymes, which are two domain proteins characterised by a modified Rossman fold as the core and variable cap or hood. The soluble forms are further subclassified based on the criterion mentioned above. mdN and cdN are mitochondrial and cytosolic 5'-3'-pyrimidine nucleotidases. cN-I is a cytosolic nucleotidase(cN) characterized by its affinity toward AMP as its substrate. cN-II is identified by its affinity toward either IMP or GMP or both. cN-III is a pyrimidine 5'-nucleotidase. A new class of nucleotidases called IMP-specific 5'-nucleotidase has been recently defined. 5'-Nucleotidases are involved in varied functions like cell–cell communication, nucleic acid repair, purine salvage pathway for the synthesis of nucleotides, signal transduction, membrane transport, etc.
References
Further reading
External links
EC 3.1.3
Chemical pathology | Nucleotidase | [
"Chemistry",
"Biology"
] | 501 | [
"Biochemistry",
"Chemical pathology"
] |
12,850,536 | https://en.wikipedia.org/wiki/Genevac | Genevac Ltd is a company which was founded in 1990 by Michael Cole. It used to specialize in the manufacture of vacuum pumps and centrifugal evaporators, but has since directed its attention to equipment designed for combinatorial chemistry. Following a series of mergers, it is currently a subsidiary of SP Industries.
References
External links
Official site
Manufacturing companies of the United Kingdom
Combinatorial chemistry | Genevac | [
"Chemistry",
"Materials_science",
"Mathematics",
"Engineering"
] | 82 | [
"Combinatorial chemistry",
"Materials science",
"Combinatorics"
] |
12,850,865 | https://en.wikipedia.org/wiki/American%20Ceramic%20Society | The American Ceramic Society (ACerS) is a nonprofit organization of professionals for the ceramics community, with a focus on scientific research, emerging technologies, and applications in which ceramic materials are an element. ACerS is located in Westerville, Ohio.
ACerS comprises more than 11,000 members from 75 countries, with membership including engineers, scientists, researchers, manufacturers, plant personnel, educators, students, and marketing and sales representatives.
Journals
The society publishes the following journals:
Journal of the American Ceramic Society (JACerS)
International Journal of Applied Ceramic Technology (ACT)
International Journal of Applied Glass Science (IJAGS)
International Journal of Ceramic Engineering & Science (IJCES)
History
Creation
ACerS was established on April 6, 1898, in Columbus, Ohio by members of the National Brick Manufacturer's Association.
At the dawn of the 20th century, amidst the rapid industrialization of the United States, the importance of ceramics in technological advancements was becoming increasingly apparent. In response to this need, ACerS was formed with the dedication to promoting scientific research, technical advancements, and the practical applications of ceramic materials. The previous year at the association's annual convention in Pittsburgh, Elmer E. Gorton of American Terra Cotta & Ceramic Co. presented a paper entitled “Experimental Work, Wise and Otherwise." This paper was significant for being the first presented at the convention with a scientific focus, and motivated the formation of a non-commercial society dedicated to the exchange of ideas and research on the science of ceramics. The initial meetings and conventions were centered around the ceramic and pottery industries, which were thriving in Ohio and neighboring states at the time. The American Ceramic Society was officially formed on February 6, 1899, at its first annual meeting, which was held in Columbus, Ohio.
In its early years, the Society's focus was primarily on the production of ceramics, addressing the challenges faced by manufacturers and researchers alike. This focus expanded over time to include diverse topics such as glass technology, refractories, and electronic materials.
Growth and expansion (1913–1946)
After its establishment in the field of ceramics science, ACerS underwent considerable growth in membership, publications, and influence. In 1918, the society began publishing the Journal of the American Ceramic Society, which remains one of the most respected journals in the ceramics field. The Journal served as a platform to disseminate knowledge and research findings to the broader scientific community.
During World War II, ACerS scientists contributed significantly to the war effort by supporting the development of advanced ceramic materials for military applications, including radar technology, armor plating, and heat-resistant components for aircraft and rockets. ACerS members played a crucial role in advancing the development of ceramic materials for defense, aerospace, and electronics applications. The society's involvement in these industries helped establish its reputation as a leader in ceramics research and development.
Postwar era and technological advancements (1947–2010)
Following World War II, the ceramics industry experienced rapid growth, propelled by technological advancements in materials science. As a result, ACerS expanded its scope to include new areas of research, such as electronic ceramics, advanced structural ceramics, and biomaterials. To accommodate these growing fields, ACerS established various divisions and technical interest groups to foster collaboration and knowledge exchange among members.
During this period, the society launched additional publications, such as the International Journal of Applied Ceramic Technology and the International Journal of Applied Glass Science, to cater to the diverse interests of its members. ACerS' commitment to research and innovation contributed to the development of materials that had a significant impact on various industries, including aerospace, automotive, and biomedical.
Modern era (2010–present)
ACerS has also been instrumental in establishing and supporting various technical divisions, aimed at promoting specialized research and collaboration within specific areas of ceramic science and engineering. These divisions include the Electronics, Glass and Optical Materials, Nuclear and Environmental Technologies, and Structural Clay divisions, among others.
Organization
ACerS is organized into the following twelve divisions:
Art, Archaeology and Conservation Science advances the scientific understanding of the materials found in ceramic art, and provides information that aids in the interpretation, reconstruction and preservation of traditional ceramic art and artifacts, as well as the techniques used in their creation for artistic purposes.
Basic Science is concerned with studying the chemistry and physics of ceramics.
Bioceramics is dedicated to stimulating the growth and activity of the Society, particularly in the areas of the science, engineering, and manufacturing of bioceramics, biocomposites, and biomaterials.
Cements centers on the development and manufacture of cements, limes, and plasters.
Electronics examines ceramic materials for use in electronic devices.
Energy Materials and Systems deals with the science and engineering of ceramic and glass materials and related technologies, as they apply to the harvesting, conversion, storage, transport and utilization of energy.
Engineering Ceramics deals with the use of ceramics and their composites as structural and mechanical components.
Glass & Optical Materials centers on the design, manufacture and use of glasses.
Manufacturing focuses on meeting the broader needs of today's manufacturers who produce or use ceramic and glass materials, including the entire supply chain. In addition to enhancing networking opportunities, it addresses new manufacturing processes and techniques, sustainability, and business and environmental issues.
Nuclear & Environmental Technology concentrates on the use of ceramics in nuclear energy production and medicine.
Refractory Ceramics explores ceramics for use in high temperature and other hostile environments.
Structural Clay Products is concerned with the manufacture of brick, pipe, and red-body tile.
Classes
Keramos
Keramos was founded by ACerS in 1902 as a professional fraternity of ceramic engineering. It has active chapters at University of Arizona,
University of Florida, Georgia Institute of Technology, University of Illinois at Urbana-Champaign, Iowa State University, Missouri University of Science and Technology, Rutgers University, New York State College of Ceramics, Ohio State University, Pennsylvania State University, Clemson University, and University of Washington.
National Institute of Ceramic Engineers
The National Institute of Ceramic Engineers (NICE) works with ABET to accredit collegiate programs in ceramics. Materials science and engineering programs that offer an option to specialize in ceramics are accredited by NICE in conjunction with The Minerals, Metals & Materials Society (TMS). NICE is also responsible for writing and administering the Principles and Practice of Engineering Exam in ceramics engineering.
Ceramic Educational Council
The Ceramic Educational Council was founded in 1938 with the goal of improving ceramics education.
See also
Journal of the American Ceramic Society
References
Ceramic engineering
Ceramic materials
Glass engineering and science | American Ceramic Society | [
"Materials_science",
"Engineering"
] | 1,315 | [
"Glass engineering and science",
"Ceramic engineering",
"Ceramic materials",
"Materials science"
] |
12,851,541 | https://en.wikipedia.org/wiki/Artificial%20seawater | Artificial seawater (abbreviated ASW) is a mixture of dissolved mineral salts (and sometimes vitamins) that simulates seawater. Artificial seawater is primarily used in marine biology and in marine and reef aquaria, and allows the easy preparation of media appropriate for marine organisms (including algae, bacteria, plants and animals). From a scientific perspective, artificial seawater has the advantage of reproducibility over natural seawater since it is a standardized formula. Artificial seawater is also known as synthetic seawater and substitute ocean water.
Example
The tables below present an example of an artificial seawater (35.00‰ of salinity) preparation devised by Kester, Duedall, Connors and Pytkowicz (1967). The recipe consists of two lists of mineral salts, the first of anhydrous salts that can be weighed out, the second of hydrous salts that should be added to the artificial seawater as a solution.
While all of the compounds listed in the recipe above are inorganic, mineral salts, some artificial seawater recipes, such as Goldman and McCarthy (1978), make use of trace solutions of vitamins or organic compounds.
Standard
The International Standard for making artificial seawater can be found at ASTM International. The current standard is named ASTM D1141-98 (The original standard was ASTM D1141-52) and describes the standard practice for the preparation of substitute ocean water.
The ASTM D1141-98 standard comes in a ready-made artificial seawater form or a "Sea Salt" mix that can be prepared by engineers and hobbyists. Generally, the ready-made artificial seawater comes in 1 gallon and 5 gallon containers, whereas the "Sea Salt" mix comes in 20lb pails (makes approximately 57 gallons) and 50lb pails (makes approximately 143 gallons).
Uses and applications
There are various applications for ASTM D1141-98 synthetic seawater including corrosion studies, ocean instrument calibration and chemical processing. Typically, laboratory-grade water is used when making synthetic salts.
See also
Algaculture
Aquarium
References
External links
Artificial seawater media, Goldman & McCarthy (1978)
Modified Artificial Seawater Media (MASM), Culture Collection of Algae and Protozoa
Synthetic Seawaters for Aquaria and Laboratories, Calypso Publications (1979)
Seaweeds
Aquariums
Aquatic ecology
Biological oceanography
Chemical oceanography
Liquid water
Marine biology
Planktology | Artificial seawater | [
"Chemistry",
"Biology"
] | 497 | [
"Algae",
"Marine biology",
"Chemical oceanography",
"Ecosystems",
"Seaweeds",
"Aquatic ecology"
] |
12,856,226 | https://en.wikipedia.org/wiki/Scalar%20expectancy | The scalar timing or scalar expectancy theory (SET) is a model of the processes that govern behavior controlled by time. The model posits an internal clock, and particular memory and decision processes. SET is one of the most important models of animal timing behavior.
History
John Gibbon originally proposed SET to explain the temporally controlled behavior of non-human subjects. He initially used the model to account for a pattern of behavior seen in animals that are being reinforced at fixed-intervals, for example every 2 minutes. An animal that is well trained on such a fixed-interval schedule pauses after each reinforcement and then suddenly starts responding about two-thirds of the way through the new interval. (See operant conditioning) The model explains how the animal's behavior is controlled by time in this manner. Gibbon and others later elaborated the model and applied it to a variety of other timing phenomena.
Summary of the model
SET assumes that the animal has a clock, a working memory, a reference memory, and a decision process. The clock contains a discrete pacemaker that generates pulses like the ticks of a mechanical clock. A stimulus that signals the start of a timed interval closes a switch, allowing pulses to enter an accumulator. The resulting accumulation of pulses represents elapsed time, and this time value is continuously sent to a working memory. When reinforcement happens at the end of the timed interval, the time value is stored in a long-term reference memory. This time-to-reinforcement in reference memory represents the expected time to reinforcement.
Key to the SET model is the decision process that controls timing behavior. While the animal is timing some interval it continually compares the current time (stored in working memory) to the expected time (stored in reference memory). Specifically, the animal continually samples from its memory of past times at which reinforcement occurred and compares this memory sample with the current time on its clock. When the two values are close to one another the animal responds; when they are far enough apart, the animal stops responding. To make this comparison, it computes the ratio of the two values; when the ratio is less than a certain value it responds, when the ratio is larger it does not respond.
By using a ratio of current time to expected time, rather than, for example, simply subtracting one from the other, SET accounts for a key observation about animal and human timing. That is, timing precision is relative to the size of the interval being timed (See Accuracy and precision). This is the "scalar" property that gives the model its name. For example, when timing a 10 sec interval an animal might be precise to within 1 sec, whereas when timing a 100 sec interval the animal would be precise to only about 10 sec. Thus time perception is like the perception of lights, sounds, and other sensory events, where precision is also relative to the size (brightness, loudness, etc.) of the percept being judged. (See Weber-Fechner law.)
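A minimal simulation sketch of this ratio decision rule (in Python; the pacemaker rate, memory noise, and response threshold are illustrative assumptions, not fitted parameters) reproduces the scalar property:

```python
import numpy as np

rng = np.random.default_rng(0)

PACEMAKER_HZ = 5.0     # mean pulse rate of the Poisson pacemaker
MEMORY_NOISE = 0.15    # multiplicative noise when storing to reference memory
THRESHOLD = 0.25       # respond when |current - expected| / expected < threshold

def response_start(interval, dt=0.01):
    # draw a remembered time-to-reinforcement (in pulses) from reference memory
    expected = interval * PACEMAKER_HZ * rng.normal(1.0, MEMORY_NOISE)
    accumulated, t = 0, 0.0
    while True:
        t += dt
        accumulated += rng.poisson(PACEMAKER_HZ * dt)   # pulses enter the accumulator
        if abs(accumulated - expected) / expected < THRESHOLD:
            return t                                     # responding begins here

for interval in (10.0, 100.0):
    starts = np.array([response_start(interval) for _ in range(200)])
    print(f"{interval:5.0f} s interval: mean start {starts.mean():6.1f} s, "
          f"sd {starts.std():5.1f} s")
# the spread of response-start times grows roughly in proportion to the
# interval being timed, which is the scalar property described above
```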
A number of alternative models of timing have appeared over the years. These include Killeen’s Behavioral Theory of timing (BeT) model and Machado’s learning-to-time (LeT) model.
Moreover, there is some evidence that this property might not be valid in all ranges of durations. Additionally, John Staddon argues that SET is inconsistent in explaining the location of the temporal indifference point in the temporal bisection procedure.
Human mechanism
In 1993, John Wearden claimed that human behavior exhibits appropriate scalar properties, as was indicated by experiments on internal production with concurrent chronometric counting. However, human timing behavior is undoubtedly more varied than animal timing behavior. A major factor responsible for this variability is attentional allocation.
References
Theories
Time | Scalar expectancy | [
"Physics",
"Mathematics"
] | 759 | [
"Physical quantities",
"Time",
"Quantity",
"Spacetime",
"Wikipedia categories named after physical quantities"
] |
782,795 | https://en.wikipedia.org/wiki/Iso-elastic | In engineering, iso-elastic refers to a system of elastic and tensile parts (springs and pulleys) which are arranged in a configuration which isolates physical motion at one end in order to minimize or prevent similar motion from occurring at the other end.
This type of device must be able to maintain angular direction and load-bearing over a large range of motion.
The most prominent use of an iso-elastic system is in the supporting armature of a Steadicam, used to isolate a film or video camera from the operator's movements.
Steadicam arms all work in a fashion similar to a spring lamp since each arm has two sections (similar to and labelled like a human arm); both the upper and fore-arm sections consist of a parallelogram with a diagonal iso-elastic cable-pulley-spring system. The iso-elastic system is tensioned to counteract the weight of the camera and steadicam sled. This tensioning allows the camera and operator to move vertically and independently of each other. For example, as the operator runs, the bouncing of his body is absorbed by the springs, keeping the camera steady. The arm also has unsprung hinges at both ends of each arm allowing it to bend in the horizontal plane (just like your elbow, not like a spring lamp).
To understand how an iso-elastic system works, we must first understand how springs work. The tension (elastic force) in a spring is proportional to its extension according to Hooke's law. This means that if a weight is hung on a spring it will oscillate with simple harmonic motion about its balance point; when the weight is above the balance point the spring's tension is reduced so the weight falls due to gravity, and when the weight is below the balance point the spring's tension will pull it back upwards.
If a simple spring system were used in a steadicam, then as the operator moved vertically, the camera would be subject to simple harmonic motion, and bounce up and down. To counteract this tendency, an iso-elastic system is employed.
The springs used are large, stiff springs with a high modulus of elasticity, and they are highly tensioned. A compound pulley system is then used so that the large force exerted by the spring can be divided by a factor of five, for example, so the cable exiting the pulley system will have only moderate tension. Most importantly, however, when the cable is drawn in or out the extension of the spring changes by only a fifth of that distance, so that the tension force of the spring will not change much. The result is that the spring-pulley system can produce a fairly constant tension in the cable over a large range of movement.
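A toy calculation, with a spring constant, preload, and reduction factor assumed purely for illustration, shows why the divided-down cable tension stays nearly constant:

```python
# Toy calculation of why the pulley reduction keeps cable tension nearly constant.
k = 5000.0        # spring stiffness, N/m
preload = 0.40    # initial spring extension, m
reduction = 5     # force / travel reduction factor of the compound pulley

def cable_tension(cable_travel):
    # moving the cable by d changes the spring extension by only d / reduction,
    # and the spring force is divided by the same factor on its way to the cable
    spring_extension = preload + cable_travel / reduction
    return k * spring_extension / reduction

for d in (0.0, 0.1, 0.2, 0.3):            # metres of cable travel
    print(f"travel {d:.1f} m -> cable tension {cable_tension(d):.0f} N")
# the tension changes by only k / reduction**2 = 200 N per metre of travel,
# far less than the 5000 N/m stiffness of the bare spring
```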
The almost constant force exerted by an iso-elastic system is employed in the armature of a steadicam, to counteract the constant force of gravity on the camera's and mount's mass. The result is that the weight of the camera is almost exactly balanced by the tension force throughout the entire range of vertical movement, so even when the operator jumps vertically, the camera will retain its vertical position due to inertia, but remain balanced, just with the armature at a different angle.
As a result, the camera doesn't bounce up to the 'balanced' position after a move, for example when the operator steps up onto a curb from the road. This allows the camera to be more isolated and independent of the operator's moves. The operator can of course deliberately move the camera up or down, if desired. In reality, however, camera operators find it preferable for the arm not to be perfectly iso-elastic, so that the camera will naturally rise to a comfortable operating height; the springs will be tensioned so this only happens very slowly and without bouncing, so as to maintain the smoothness of the camera's motion.
See also
Precision engineering
References
Engineering concepts
Cinematography | Iso-elastic | [
"Engineering"
] | 809 | [
"nan"
] |
783,563 | https://en.wikipedia.org/wiki/Total%20synthesis | Total synthesis, a specialized area within organic chemistry, focuses on constructing complex organic compounds, especially those found in nature, using laboratory methods. It often involves synthesizing natural products from basic, commercially available starting materials. Total synthesis targets can also be organometallic or inorganic. While total synthesis aims for complete construction from simple starting materials, modifying or partially synthesizing these compounds is known as semisynthesis.
Natural product synthesis serves as a critical tool across various scientific fields. In organic chemistry, it tests new synthetic methods, validating and advancing innovative approaches. In medicinal chemistry, natural product synthesis is essential for creating bioactive compounds, driving progress in drug discovery and therapeutic development. Similarly, in chemical biology, it provides research tools for studying biological systems and processes. Additionally, synthesis aids natural product research by helping confirm and elucidate the structures of newly isolated compounds.
The field of natural product synthesis has progressed remarkably since the early 19th century, with improvements in synthetic techniques, analytical methods, and an evolving understanding of chemical reactivity. Today, modern synthetic approaches often combine traditional organic methods, biocatalysis, and chemoenzymatic strategies to achieve efficient and complex syntheses, broadening the scope and applicability of synthetic processes.
Key components of natural product synthesis include retrosynthetic analysis, which involves planning synthetic routes by working backward from the target molecule to design the most effective construction pathway. Stereochemical control is crucial to ensure the correct three-dimensional arrangement of atoms, critical for the molecule's functionality. Reaction optimization enhances yield, selectivity, and efficiency, making synthetic steps more practical. Finally, scale-up considerations allow researchers to adapt lab-scale syntheses for larger production, expanding the accessibility of synthesized products. This evolving field continues to fuel advancements in drug development, materials science, and our understanding of the diversity in natural compounds.
Scope and definitions
There are numerous classes of natural products to which total synthesis is applied. These include (but are not limited to): terpenes, alkaloids, polyketides, and polyethers. Total synthesis targets are sometimes referred to by their organismal origin such as plant, marine, and fungal. The term total synthesis is less frequently but still accurately applied to the synthesis of natural polypeptides and polynucleotides. The peptide hormones oxytocin and vasopressin were isolated and their total syntheses first reported in 1954. It is not uncommon for natural product targets to feature multiple structural components of several natural product classes.
Aims
Although untrue from an historical perspective (see the history of the steroid cortisone), total synthesis in the modern age has largely been an academic endeavor (in terms of manpower applied to problems). Industrial chemical needs often differ from academic focuses. Typically, commercial entities may pick up particular avenues of total synthesis efforts and expend considerable resources on particular natural product targets, especially if semi-synthesis can be applied to complex, natural product-derived drugs. Even so, for decades there has been a continuing discussion regarding the value of total synthesis as an academic enterprise. While there are some outliers, the general opinion is that total synthesis has changed in recent decades, will continue to change, and will remain an integral part of chemical research. Within these changes, there has been increasing focus on improving the practicality and marketability of total synthesis methods. The Phil S. Baran group at Scripps, a notable pioneer of practical synthesis, has endeavored to create scalable and high-efficiency syntheses that would have more immediate uses outside of academia.
History
Friedrich Wöhler discovered that an organic substance, urea, could be produced from inorganic starting materials in 1828. That was an important conceptual milestone in chemistry by being the first example of a synthesis of a substance that had been known only as a byproduct of living processes. Wöhler obtained urea by treating silver cyanate with ammonium chloride, a simple, one-step synthesis:
AgNCO + NH4Cl → (NH2)2CO + AgCl
Camphor was a scarce and expensive natural product with a worldwide demand. Haller and Blanc synthesized it from camphoric acid; however, the precursor, camphoric acid, had an unknown structure. When Finnish chemist Gustav Komppa synthesized camphoric acid from diethyl oxalate and 3,3-dimethylpentanoic acid in 1904, the structure of the precursors allowed contemporary chemists to infer the complicated ring structure of camphor. Shortly thereafter, William Perkin published another synthesis of camphor. The work on the total chemical synthesis of camphor allowed Komppa to begin industrial production of the compound, in Tainionkoski, Finland, in 1907.
The American chemist Robert Burns Woodward was a pre-eminent figure in developing total syntheses of complex organic molecules, some of his targets being cholesterol, cortisone, strychnine, lysergic acid, reserpine, chlorophyll, colchicine, vitamin B12, and prostaglandin F-2a.
Vincent du Vigneaud was awarded the 1955 Nobel Prize in Chemistry for the total synthesis of the natural polypeptide oxytocin and vasopressin, which reported in 1954 with the citation "for his work on biochemically important sulphur compounds, especially for the first synthesis of a polypeptide hormone."
Another gifted chemist is Elias James Corey, who won the Nobel Prize in Chemistry in 1990 for lifetime achievement in total synthesis and for the development of retrosynthetic analysis.
List of notable total syntheses
Quinine total synthesis First synthesized by Robert Burns Woodward and William von Eggers Doering in 1944, this achievement was significant due to quinine's importance as an antimalarial drug.
Strychnine total synthesis First synthesized by Robert Burns Woodward in 1954, this synthesis was a landmark achievement due to the molecule's structural complexity.
Morphine: First synthesized by Marshall D. Gates in 1952, with subsequent more efficient syntheses developed by other chemists, including Toshiaki Fukuyama in 2017.
Cholesterol total synthesis Synthesized by Robert Burns Woodward in 1951, this was a significant achievement in steroid synthesis.
Cortisone: Another notable steroid synthesis by Robert Burns Woodward in 1951.
Lysergic acid: Synthesized by Robert Burns Woodward in 1954, this was an important precursor to LSD.
Reserpine: Completed by Robert Burns Woodward in 1956, this synthesis was notable for its complexity and the molecule's importance as an antihypertensive drug.
Chlorophyll: Synthesized by Robert Burns Woodward in 1960, this achievement was significant due to chlorophyll's crucial role in photosynthesis.
Colchicine: Another notable synthesis by Robert Burns Woodward, completed in 1963.
Prostaglandin F-2a: Synthesized by E.J. Corey in 1969, this was an important achievement in the synthesis of prostaglandins.
Vitamin B12 total synthesis Completed by Robert Burns Woodward and his team in 1972, this synthesis is considered one of the most complex ever achieved, involving over 100 steps.
Paclitaxel (Taxol) total synthesis: First synthesized by Robert A. Holton in 1994, and later by K. C. Nicolaou in 1995, this anticancer drug's synthesis was a major breakthrough in medicinal chemistry.
Brefeldin A: Synthesized by S. Raghavan in 2017, this complex macrolide has potential as an anticancer agent.
Ryanodine: Synthesized by Sarah E. Reisman in 2017, this complex diterpenoid has important biological activity.
References
External links
The Organic Synthesis Archive
Total Synthesis Highlights
Total Synthesis News
Total syntheses schemes with reaction and reagent indices
Group Meeting Problems in Organic Chemistry
Organic synthesis | Total synthesis | [
"Chemistry"
] | 1,609 | [
"Total synthesis",
"Organic synthesis",
"Chemical synthesis"
] |
787,776 | https://en.wikipedia.org/wiki/Curse%20of%20dimensionality | The curse of dimensionality refers to various phenomena that arise when analyzing and organizing data in high-dimensional spaces that do not occur in low-dimensional settings such as the three-dimensional physical space of everyday experience. The expression was coined by Richard E. Bellman when considering problems in dynamic programming. The curse generally refers to issues that arise when the number of datapoints is small (in a suitably defined sense) relative to the intrinsic dimension of the data.
Dimensionally cursed phenomena occur in domains such as numerical analysis, sampling, combinatorics, machine learning, data mining and databases. The common theme of these problems is that when the dimensionality increases, the volume of the space increases so fast that the available data become sparse. In order to obtain a reliable result, the amount of data needed often grows exponentially with the dimensionality. Also, organizing and searching data often relies on detecting areas where objects form groups with similar properties; in high dimensional data, however, all objects appear to be sparse and dissimilar in many ways, which prevents common data organization strategies from being efficient.
Domains
Combinatorics
In some problems, each variable can take one of several discrete values, or the range of possible values is divided to give a finite number of possibilities. Taking the variables together, a huge number of combinations of values must be considered. This effect is also known as the combinatorial explosion. Even in the simplest case of binary variables, the number of possible combinations already is exponential in the dimensionality. Naively, each additional dimension doubles the effort needed to try all combinations.
Sampling
There is an exponential increase in volume associated with adding extra dimensions to a mathematical space. For example, 10^2 = 100 evenly spaced sample points suffice to sample a unit interval (try to visualize a "1-dimensional" cube) with no more than 10^−2 = 0.01 distance between points; an equivalent sampling of a 10-dimensional unit hypercube with a lattice that has a spacing of 10^−2 = 0.01 between adjacent points would require 10^20 = [(10^2)^10] sample points. In general, with a spacing distance of 10^−n the 10-dimensional hypercube appears to be a factor of 10^(n(10−1)) = [(10^n)^10/(10^n)] "larger" than the 1-dimensional hypercube, which is the unit interval. In the above example n = 2: when using a sampling distance of 0.01 the 10-dimensional hypercube appears to be 10^18 "larger" than the unit interval. This effect is a combination of the combinatorics problems above and the distance function problems explained below.
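The growth of the required grid can be tabulated directly; a small Python sketch using the same spacing of 0.01 per axis as the example above:

```python
# Lattice points needed to sample the unit hypercube with spacing 0.01 per axis.
points_per_axis = 100
for dim in (1, 2, 3, 5, 10):
    print(f"{dim:2d} dimensions: {float(points_per_axis) ** dim:.3e} grid points")
# one dimension needs 100 points; ten dimensions already need 10^20
```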
Optimization
When solving dynamic optimization problems by numerical backward induction, the objective function must be computed for each combination of values. This is a significant obstacle when the dimension of the "state variable" is large.
Machine learning
In machine learning problems that involve learning a "state-of-nature" from a finite number of data samples in a high-dimensional feature space with each feature having a range of possible values, typically an enormous amount of training data is required to ensure that there are several samples with each combination of values. In an abstract sense, as the number of features or dimensions grows, the amount of data we need to generalize accurately grows exponentially.
A typical rule of thumb is that there should be at least 5 training examples for each dimension in the representation. In machine learning and insofar as predictive performance is concerned, the curse of dimensionality is used interchangeably with the peaking phenomenon, which is also known as Hughes phenomenon. This phenomenon states that with a fixed number of training samples, the average (expected) predictive power of a classifier or regressor first increases as the number of dimensions or features used is increased but beyond a certain dimensionality it starts deteriorating instead of improving steadily.
Nevertheless, in the context of a simple classifier (e.g., linear discriminant analysis in the multivariate Gaussian model under the assumption of a common known covariance matrix), Zollanvari, et al., showed both analytically and empirically that as long as the relative cumulative efficacy of an additional feature set (with respect to features that are already part of the classifier) is greater (or less) than the size of this additional feature set, the expected error of the classifier constructed using these additional features will be less (or greater) than the expected error of the classifier constructed without them. In other words, both the size of additional features and their (relative) cumulative discriminatory effect are important in observing a decrease or increase in the average predictive power.
In metric learning, higher dimensions can sometimes allow a model to achieve better performance. After normalizing embeddings to the surface of a hypersphere, FaceNet achieves the best performance using 128 dimensions as opposed to 64, 256, or 512 dimensions in one ablation study. A loss function for unitary-invariant dissimilarity between word embeddings was found to be minimized in high dimensions.
Data mining
In data mining, the curse of dimensionality refers to a data set with too many features.
Consider a data set of 200 individuals and 2000 genes (features), with a 1 or 0 denoting whether or not each individual has a genetic mutation in that gene. A data mining application to this data set may be finding correlations between specific genetic mutations and building a classification algorithm, such as a decision tree, to determine whether an individual has cancer.
A common practice of data mining in this domain would be to create association rules between genetic mutations that lead to the development of cancers. To do this, one would have to loop through each genetic mutation of each individual and find other genetic mutations that co-occur above a desired threshold, creating groups of mutations, starting with pairs of two, then groups of three, then four, until the resulting set of groups is empty. The complexity of this algorithm can lead to calculating all permutations of gene groups for each individual or row. Given that the formula for the number of permutations of n items taken r at a time is n!/(n − r)!, the number of ordered three-gene groups for any given individual is 2000!/(2000 − 3)! = 7,988,004,000 combinations of genes to evaluate per individual. The number of groups created grows factorially as the group size increases.
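The figure quoted above can be reproduced with the Python standard library (an added check, not part of the original text):
from math import perm

n_genes = 2000
for group_size in (2, 3, 4):
    # perm(n, r) = n! / (n - r)!, the number of ordered selections of r genes out of n
    print(group_size, perm(n_genes, group_size))
# group size 3 already gives 7,988,004,000 ordered triples per individual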
As these counts show, one of the major problems data miners face regarding the curse of dimensionality is that the space of possible parameter values grows exponentially or factorially as the number of features in the data set grows. This problem critically affects both computational time and space when searching for associations or optimal features to consider.
Another problem data miners may face when dealing with too many features is that the number of false predictions or classifications tends to increase as the number of features in the data set grows. In terms of the classification problem discussed above, keeping every feature could lead to a higher number of false positives and false negatives in the model.
This may seem counter-intuitive, but consider the genetic mutation data set from above, which records every genetic mutation for each individual. Each genetic mutation, whether or not it correlates with cancer, will have some input or weight in the model that guides the decision-making process of the algorithm. There may be mutations that are outliers or that dominate the overall distribution of genetic mutations even though they do not in fact correlate with cancer. These features may work against one's model, making it more difficult to obtain optimal results.
This problem is up to the data miner to solve, and there is no universal solution. The first step any data miner should take is to explore the data in an attempt to gain an understanding of how it can be used to solve the problem. One must first understand what the data means and what one is trying to discover before deciding whether anything must be removed from the data set. One can then create or use a feature selection or dimensionality reduction algorithm to remove samples or features from the data set where necessary. One example of such methods is the interquartile range (IQR) method, used to remove outliers in a data set by computing the interquartile range of a feature and discarding values that fall far outside it.
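An added sketch of such an interquartile-range filter (illustrative only; the 1.5 fence factor is the conventional choice):
import numpy as np

def iqr_filter(values, k=1.5):
    q1, q3 = np.percentile(values, [25, 75])
    iqr = q3 - q1
    lo, hi = q1 - k * iqr, q3 + k * iqr   # the usual "fences" around the middle 50%
    return [v for v in values if lo <= v <= hi]

print(iqr_filter([1, 2, 2, 3, 3, 3, 4, 4, 100]))  # drops the outlying value 100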
Distance function
When a measure such as a Euclidean distance is defined using many coordinates, there is little difference in the distances between different pairs of points.
One way to illustrate the "vastness" of high-dimensional Euclidean space is to compare the proportion of an inscribed hypersphere with radius r and dimension d, to that of a hypercube with edges of length 2r.
The volume of such a sphere is 2r^d π^(d/2) / (d Γ(d/2)), where Γ is the gamma function, while the volume of the cube is (2r)^d.
As the dimension of the space increases, the hypersphere becomes an insignificant volume relative to that of the hypercube. This can clearly be seen by comparing the proportions as the dimension goes to infinity:
π^(d/2) / (d 2^(d−1) Γ(d/2)) → 0 as d → ∞.
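These volumes can be evaluated directly; the short Python sketch below (added for illustration) prints the ratio for a few dimensions:
from math import pi, gamma

def sphere_to_cube_ratio(d: int, r: float = 1.0) -> float:
    v_sphere = 2 * r**d * pi**(d / 2) / (d * gamma(d / 2))   # inscribed hypersphere
    v_cube = (2 * r) ** d                                     # enclosing hypercube
    return v_sphere / v_cube

for d in (2, 5, 10, 20):
    print(d, sphere_to_cube_ratio(d))   # falls rapidly toward zero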
Furthermore, the distance between the center and the corners is r√d, which increases without bound for fixed r.
In this sense, when points are uniformly generated in a high-dimensional hypercube, almost all points are much farther than r units away from the centre. In high dimensions, the volume of the d-dimensional unit hypercube (with coordinates of the vertices ±1) is concentrated near a sphere with the radius √(d/3) for large dimension d. Indeed, for each coordinate x_i the average value of x_i^2 in the cube is
⟨x_i^2⟩ = (1/2) ∫_{−1}^{1} x^2 dx = 1/3.
The variance of x_i^2 for the uniform distribution in the cube is
(1/2) ∫_{−1}^{1} x^4 dx − (1/3)^2 = 1/5 − 1/9 = 4/45.
Therefore, the squared distance from the origin, r^2 = Σ_i x_i^2, has the average value d/3 and variance 4d/45. For large d, the distribution of r^2/d is close to the normal distribution with mean 1/3 and standard deviation 2/(3√(5d)) according to the central limit theorem. Thus, when uniformly generating points in high dimensions, both the "middle" of the hypercube and the corners are empty, and all the volume is concentrated near the surface of a sphere of "intermediate" radius √(d/3).
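A brief Monte Carlo check of this concentration (an added sketch; the sample sizes and dimensions are arbitrary):
import numpy as np

rng = np.random.default_rng(0)
for d in (3, 30, 300, 1000):
    x = rng.uniform(-1.0, 1.0, size=(2000, d))
    r2 = (x ** 2).sum(axis=1)                # squared distance from the origin
    print(d, r2.mean() / d, r2.std() / d)    # mean/d approaches 1/3, std/d approaches 2/(3*sqrt(5d))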
This also helps to understand the chi-squared distribution. Indeed, the (non-central) chi-squared distribution associated to a random point in the interval [−1, 1] is the same as the distribution of the length-squared of a random point in the d-cube. By the law of large numbers, this distribution concentrates itself in a narrow band around d times the standard deviation squared (σ^2) of the original distribution. This illuminates the chi-squared distribution and also illustrates that most of the volume of the d-cube concentrates near the boundary of a sphere of radius σ√d.
A further development of this phenomenon is as follows. Any fixed distribution on the real numbers induces a product distribution on points in R^d. For any fixed n, it turns out that the difference between the minimum and the maximum distance between a random reference point Q and a list of n random data points P1,...,Pn becomes indiscernible compared to the minimum distance:
lim_{d→∞} E[(dist_max(d) − dist_min(d)) / dist_min(d)] = 0.
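A short simulation of this contrast loss (an added sketch assuming i.i.d. uniform coordinates and an arbitrary sample size):
import numpy as np

rng = np.random.default_rng(1)
n = 100
for d in (2, 20, 200, 2000):
    q = rng.random(d)                            # random reference point
    pts = rng.random((n, d))                     # n random data points
    dist = np.linalg.norm(pts - q, axis=1)
    print(d, (dist.max() - dist.min()) / dist.min())   # relative contrast shrinks with d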
This is often cited as distance functions losing their usefulness (for the nearest-neighbor criterion in feature-comparison algorithms, for example) in high dimensions. However, recent research has shown this to only hold in the artificial scenario when the one-dimensional distributions are independent and identically distributed. When attributes are correlated, data can become easier and provide higher distance contrast and the signal-to-noise ratio was found to play an important role, thus feature selection should be used.
More recently, it has been suggested that there may be a conceptual flaw in the argument that contrast-loss creates a curse in high dimensions. Machine learning can be understood as the problem of assigning instances to their respective generative process of origin, with class labels acting as symbolic representations of individual generative processes. The curse's derivation assumes all instances are independent, identical outcomes of a single high dimensional generative process. If there is only one generative process, there would exist only one (naturally occurring) class and machine learning would be conceptually ill-defined in both high and low dimensions. Thus, the traditional argument that contrast-loss creates a curse, may be fundamentally inappropriate. In addition, it has been shown that when the generative model is modified to accommodate multiple generative processes, contrast-loss can morph from a curse to a blessing, as it ensures that the nearest-neighbor of an instance is almost-surely its most closely related instance. From this perspective, contrast-loss makes high dimensional distances especially meaningful and not especially non-meaningful as is often argued.
Nearest neighbor search
The effect complicates nearest neighbor search in high dimensional space. It is not possible to quickly reject candidates by using the difference in one coordinate as a lower bound for a distance based on all the dimensions.
However, it has recently been observed that the mere number of dimensions does not necessarily result in difficulties, since relevant additional dimensions can also increase the contrast. In addition, for the resulting ranking it remains useful to discern close and far neighbors. Irrelevant ("noise") dimensions, however, reduce the contrast in the manner described above. In time series analysis, where the data are inherently high-dimensional, distance functions also work reliably as long as the signal-to-noise ratio is high enough.
k-nearest neighbor classification
Another effect of high dimensionality on distance functions concerns k-nearest neighbor (k-NN) graphs constructed from a data set using a distance function. As the dimension increases, the indegree distribution of the k-NN digraph becomes skewed with a peak on the right because of the emergence of a disproportionate number of hubs, that is, data-points that appear in many more k-NN lists of other data-points than the average. This phenomenon can have a considerable impact on various techniques for classification (including the k-NN classifier), semi-supervised learning, and clustering, and it also affects information retrieval.
Anomaly detection
In a 2012 survey, Zimek et al. identified the following problems when searching for anomalies in high-dimensional data:
Concentration of scores and distances: derived values such as distances become numerically similar
Irrelevant attributes: in high dimensional data, a significant number of attributes may be irrelevant
Definition of reference sets: for local methods, reference sets are often nearest-neighbor based
Incomparable scores for different dimensionalities: different subspaces produce incomparable scores
Interpretability of scores: the scores often no longer convey a semantic meaning
Exponential search space: the search space can no longer be systematically scanned
Data snooping bias: given the large search space, for every desired significance a hypothesis can be found
Hubness: certain objects occur more frequently in neighbor lists than others.
Many of the analyzed specialized methods tackle one or another of these problems, but there remain many open research questions.
Blessing of dimensionality
Surprisingly and despite the expected "curse of dimensionality" difficulties, common-sense heuristics based on the most straightforward methods "can yield results which are almost surely optimal" for high-dimensional problems. The term "blessing of dimensionality" was introduced in the late 1990s. Donoho in his "Millennium manifesto" clearly explained why the "blessing of dimensionality" will form a basis of future data mining. The effects of the blessing of dimensionality were discovered in many applications and found their foundation in the concentration of measure phenomena. One example of the blessing of dimensionality phenomenon is linear separability of a random point from a large finite random set with high probability even if this set is exponentially large: the number of elements in this random set can grow exponentially with dimension. Moreover, this linear functional can be selected in the form of the simplest linear Fisher discriminant. This separability theorem was proven for a wide class of probability distributions: general uniformly log-concave distributions, product distributions in a cube, and many other families.
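A minimal Python sketch of this separability effect (added for illustration; uniform data in a cube, with the discriminant direction taken simply as the difference between the point and the set's mean):
import numpy as np

rng = np.random.default_rng(2)
d, n = 200, 10_000
data = rng.uniform(-1, 1, size=(n, d))    # a large random set
x = rng.uniform(-1, 1, size=d)            # the random point to separate
w = x - data.mean(axis=0)                 # simplest Fisher-like discriminant direction
margin = w @ x                            # projection of x onto w
print((data @ w < margin).all())          # expected True with high probability in high dimension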
"The blessing of dimensionality and the curse of dimensionality are two sides of the same coin." For example, the typical property of essentially high-dimensional probability distributions in a high-dimensional space is: the squared distance of random points to a selected point is, with high probability, close to the average (or median) squared distance. This property significantly simplifies the expected geometry of data and indexing of high-dimensional data (blessing), but, at the same time, it makes the similarity search in high dimensions difficult and even useless (curse).
Zimek et al. noted that while the typical formalizations of the curse of dimensionality affect i.i.d. data, having data that is separated in each attribute becomes easier even in high dimensions, and argued that the signal-to-noise ratio matters: data becomes easier with each attribute that adds signal, and harder with attributes that only add noise (irrelevant error) to the data. In particular for unsupervised data analysis this effect is known as swamping.
See also
Bellman equation
Clustering high-dimensional data
Concentration of measure
Dimensionality reduction
Dynamic programming
Fourier-related transforms
Grand Tour
Linear least squares
Model order reduction
Multilinear PCA
Multilinear subspace learning
Principal component analysis
Singular value decomposition
References
Numerical analysis
Dynamic programming
Machine learning
Dimension | Curse of dimensionality | [
"Physics",
"Mathematics",
"Engineering"
] | 3,539 | [
"Geometric measurement",
"Numerical analysis",
"Approximations",
"Physical quantities",
"Machine learning",
"Computational mathematics",
"Mathematical relations",
"Theory of relativity",
"Artificial intelligence engineering",
"Dimension"
] |
788,567 | https://en.wikipedia.org/wiki/Thermopile | A thermopile is an electronic device that converts thermal energy into electrical energy. It is composed of several thermocouples connected usually in series or, less commonly, in parallel. Such a device works on the principle of the thermoelectric effect, i.e., generating a voltage when its dissimilar metals (thermocouples) are exposed to a temperature difference.
Operation
Thermocouples operate by measuring the temperature differential from their junction point to the point at which the thermocouple output voltage is measured. When a closed circuit is made up of more than one metal and there is a difference in temperature between the junctions where one metal meets another, a current is produced, as if generated by a difference of potential between the hot and cold junctions.
Thermocouples can be connected in series as thermocouple pairs with a junction located on either side of a thermal resistance layer. The output from the thermocouple pair will be a voltage directly proportional to the temperature difference across the thermal resistance layer and also to the heat flux through the thermal resistance layer. Adding more thermocouple pairs in series increases the magnitude of the voltage output.
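As an added illustration (assuming the usual Seebeck relation with constant, representative coefficients), the open-circuit output of an ideal thermopile can be estimated as follows:
def thermopile_voltage(n_pairs: int, seebeck_a: float, seebeck_b: float, delta_t: float) -> float:
    # V ~ N * (S_a - S_b) * dT: the output scales with both the number of pairs
    # and the temperature difference across the thermal resistance layer.
    return n_pairs * (seebeck_a - seebeck_b) * delta_t   # volts

# e.g. 50 chromel/alumel-like pairs (about 41 uV/K net) across a 2 K difference
print(thermopile_voltage(50, 28e-6, -13e-6, 2.0))   # roughly 4.1 mV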
Thermopiles can be constructed with a single thermocouple pair, composed of two thermocouple junctions, or multiple thermocouple pairs.
Thermopiles do not respond to absolute temperature, but generate an output voltage proportional to a local temperature difference or temperature gradient. The output voltage and power are very small, measured in millivolts and milliwatts using instruments specifically designed for the purpose.
Applications
Thermopiles are used to provide an output in response to temperature as part of a temperature measuring device, such as the infrared thermometers widely used by medical professionals to measure body temperature, or in thermal accelerometers to measure the temperature profile inside the sealed cavity of the sensor. They are also used widely in heat flux sensors, pyrheliometers, and gas burner safety controls. The output of a thermopile is usually in the range of tens or hundreds of millivolts. As well as increasing the signal level, the device may be used to provide spatial temperature averaging. Thermopiles are also used to generate electrical energy from, for instance, heat from electrical components, solar wind, radioactive materials, laser radiation or combustion. The process is also an example of the Peltier effect (electric current transferring heat energy) as the process transfers heat from the hot to the cold junctions.
There are also the so-called thermopile sensors, which are power meters based on the principle that the optical or laser power is converted to heat and the resulting increase in temperature is measured by a thermopile.
See also
Seebeck effect, the physical effect responsible for the generation of voltage in a thermopile
Thermoelectric materials, high-performance materials that can be used to construct a compact thermopile that delivers high power
References
External links
TPA81 Thermopile detector Array Technical Specification
Electrical components
Thermoelectricity | Thermopile | [
"Technology",
"Engineering"
] | 661 | [
"Electrical engineering",
"Electrical components",
"Components"
] |
23,807,785 | https://en.wikipedia.org/wiki/Surface%20Evolver | Surface Evolver is an interactive program for the study of surfaces shaped by surface tension and other energies, and subject to various constraints. A surface is implemented as a simplicial complex. The user defines an initial surface in a datafile. The Evolver evolves the surface toward minimal energy by a gradient descent method. The aim can be to find a minimal energy surface, or to model the process of evolution by mean curvature. The energy in the Evolver can be a combination of surface tension, gravitational energy, squared mean curvature, user-defined surface integrals, or knot energies. The Evolver can handle arbitrary topology, volume constraints, boundary constraints, boundary contact angles, prescribed mean curvature, crystalline integrands, gravity, and constraints expressed as surface integrals. The surface can be in an ambient space of arbitrary dimension, which can have a Riemannian metric, and the ambient space can be a quotient space under a group action.
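The following added Python sketch illustrates the general idea of evolving a triangulated surface toward minimal area by gradient descent; it is not the Evolver's own datafile format or algorithm, and the toy mesh, step size and finite-difference gradient are arbitrary choices:
import numpy as np

# A pyramid-like patch: four fixed corners of a unit square plus one movable apex.
verts = np.array([[0., 0., 0.], [1., 0., 0.], [1., 1., 0.], [0., 1., 0.], [0.5, 0.5, 1.0]])
tris = [(0, 1, 4), (1, 2, 4), (2, 3, 4), (3, 0, 4)]   # faces of the simplicial complex
movable = [4]                                         # interior vertex; boundary stays fixed

def total_area(v):
    return sum(0.5 * np.linalg.norm(np.cross(v[b] - v[a], v[c] - v[a])) for a, b, c in tris)

step, eps = 0.1, 1e-6
for _ in range(200):                                  # plain gradient descent on area
    for i in movable:
        base = total_area(verts)
        grad = np.zeros(3)
        for k in range(3):                            # finite-difference gradient
            bumped = verts.copy()
            bumped[i, k] += eps
            grad[k] = (total_area(bumped) - base) / eps
        verts[i] -= step * grad

print(round(total_area(verts), 4))   # approaches 1.0, the area of the flat square
print(verts[4])                      # the apex height tends toward 0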
Evolver was written at The Geometry Center, sponsored by the National Science Foundation, the Department of Energy, Enterprise Minnesota, and the University of Minnesota.
References
Mathematical software
Physics software
Science software | Surface Evolver | [
"Physics",
"Mathematics"
] | 234 | [
"Mathematical software",
"Physics software",
"Computational physics"
] |
23,808,291 | https://en.wikipedia.org/wiki/Nekhoroshev%20estimates | The Nekhoroshev estimates are an important result in the theory of Hamiltonian systems concerning the long-time stability of solutions of integrable systems under a small perturbation of the Hamiltonian. The first paper on the subject was written by Nikolay Nekhoroshev in 1971.
The theorem complements both the Kolmogorov-Arnold-Moser theorem and the phenomenon of instability for nearly integrable Hamiltonian systems, sometimes called Arnold diffusion, in the following way: the KAM theorem tells us that many solutions to nearly integrable Hamiltonian systems persist under a perturbation for all time, while, as Vladimir Arnold first demonstrated in 1964, some solutions do not stay close to their integrable counterparts for all time. The Nekhoroshev estimates tell us that, nonetheless, all solutions stay close to their integrable counterparts for an exponentially long time. Thus, they restrict how quickly solutions can become unstable.
Statement
Let H(I, θ) = h(I) + εf(I, θ) be a nearly integrable n degree-of-freedom Hamiltonian, where (I, θ) are the action-angle variables. Ignoring the technical assumptions and details in the statement, Nekhoroshev estimates assert that:
|I(t) − I(0)| ≤ ε^b
for
|t| ≤ T exp(c ε^(−a)),
where the stability exponents a and b are positive (a = b = 1/(2n) in commonly cited versions of the theorem) and c is a complicated constant.
See also
Arnold diffusion
References
Dynamical systems | Nekhoroshev estimates | [
"Physics",
"Mathematics"
] | 253 | [
"Mechanics",
"Dynamical systems"
] |
23,809,352 | https://en.wikipedia.org/wiki/Carbon-fiber%20reinforced%20polymer | Carbon fiber-reinforced polymers (American English), carbon-fibre-reinforced polymers (Commonwealth English), carbon-fiber-reinforced plastics, carbon-fiber reinforced-thermoplastic (CFRP, CRP, CFRTP), also known as carbon fiber, carbon composite, or just carbon, are extremely strong and light fiber-reinforced plastics that contain carbon fibers. CFRPs can be expensive to produce, but are commonly used wherever high strength-to-weight ratio and stiffness (rigidity) are required, such as aerospace, superstructures of ships, automotive, civil engineering, sports equipment, and an increasing number of consumer and technical applications.
The binding polymer is often a thermoset resin such as epoxy, but other thermoset or thermoplastic polymers, such as polyester, vinyl ester, or nylon, are sometimes used. The properties of the final CFRP product can be affected by the type of additives introduced to the binding matrix (resin). The most common additive is silica, but other additives such as rubber and carbon nanotubes can be used.
Carbon fiber is sometimes referred to as graphite-reinforced polymer or graphite fiber-reinforced polymer (GFRP is less common, as it clashes with glass-(fiber)-reinforced polymer).
Properties
CFRP are composite materials. In this case the composite consists of two parts: a matrix and a reinforcement. In CFRP the reinforcement is carbon fiber, which provides its strength. The matrix is usually a thermosetting plastic, such as polyester resin, to bind the reinforcements together. Because CFRPs consist of two distinct elements, the material properties depend on these two elements.
Reinforcement gives CFRPs their strength and rigidity, measured by stress and elastic modulus respectively. Unlike isotropic materials like steel and aluminum, CFRPs have directional strength properties. The properties of a CFRP depend on the layouts of the carbon fiber and the proportion of the carbon fibers relative to the polymer. The two different equations governing the net elastic modulus of composite materials using the properties of the carbon fibers and the polymer matrix can also be applied to carbon fiber reinforced plastics. The equation:
E_c = V_m E_m + V_f E_f
is valid for composite materials with the fibers oriented in the direction of the applied load. E_c is the total composite modulus, V_m and V_f are the volume fractions of the matrix and fiber respectively in the composite, and E_m and E_f are the elastic moduli of the matrix and fibers respectively. The other extreme case of the elastic modulus of the composite with the fibers oriented transverse to the applied load can be found using the equation:
E_c = (V_m/E_m + V_f/E_f)^(−1)
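An added Python sketch of these two bounds (the fiber and matrix moduli below are assumed, order-of-magnitude values only):
def longitudinal_modulus(vf: float, e_fiber: float, e_matrix: float) -> float:
    # rule of mixtures: fibers aligned with the applied load
    return vf * e_fiber + (1.0 - vf) * e_matrix

def transverse_modulus(vf: float, e_fiber: float, e_matrix: float) -> float:
    # inverse rule of mixtures: fibers transverse to the applied load
    return 1.0 / (vf / e_fiber + (1.0 - vf) / e_matrix)

e_fiber, e_matrix = 230.0, 3.5        # GPa, representative values only
print(longitudinal_modulus(0.6, e_fiber, e_matrix))   # about 139 GPa along the fibers
print(transverse_modulus(0.6, e_fiber, e_matrix))     # about 8.5 GPa across the fibers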
The fracture toughness of carbon fiber reinforced plastics is governed by the mechanisms: 1) debonding between the carbon fiber and polymer matrix, 2) fiber pull-out, and 3) delamination between the CFRP sheets. Typical epoxy-based CFRPs exhibit virtually no plasticity, with less than 0.5% strain to failure. Although CFRPs with epoxy have high strength and elastic modulus, the brittle fracture mechanics presents unique challenges to engineers in failure detection since failure occurs catastrophically. As such, recent efforts to toughen CFRPs include modifying the existing epoxy material and finding alternative polymer matrix. One such material with high promise is PEEK, which exhibits an order of magnitude greater toughness with similar elastic modulus and tensile strength. However, PEEK is much more difficult to process and more expensive.
Despite their high initial strength-to-weight ratios, a design limitation of CFRPs are their lack of a definable fatigue limit. This means, theoretically, that stress cycle failure cannot be ruled out. While steel and many other structural metals and alloys do have estimable fatigue or endurance limits, the complex failure modes of composites mean that the fatigue failure properties of CFRPs are difficult to predict and design against; however emerging research has shed light on the effects of low velocity impacts on composites. Low velocity impacts can make carbon fibre polymers susceptible to damage. As a result, when using CFRPs for critical cyclic-loading applications, engineers may need to design in considerable strength safety margins to provide suitable component reliability over its service life.
Environmental effects such as temperature and humidity can have profound effects on the polymer-based composites, including most CFRPs. While CFRPs demonstrate excellent corrosion resistance, the effect of moisture at wide ranges of temperatures can lead to degradation of the mechanical properties of CFRPs, particularly at the matrix-fiber interface. While the carbon fibers themselves are not affected by the moisture diffusing into the material, the moisture plasticizes the polymer matrix. This leads to significant changes in properties that are dominantly influenced by the matrix in CFRPs such as compressive, interlaminar shear, and impact properties. The epoxy matrix used for engine fan blades is designed to be impervious against jet fuel, lubrication, and rain water, and external paint on the composites parts is applied to minimize damage from ultraviolet light.
Carbon fibers can cause galvanic corrosion when CRP parts are attached to aluminum or mild steel but not to stainless steel or titanium.
Carbon fiber reinforced plastics are very hard to machine and cause significant tool wear. Tool wear in CFRP machining depends on the fiber orientation and the machining conditions of the cutting process. To reduce tool wear, various types of coated tools are used in machining CFRP and CFRP-metal stacks.
Manufacturing
The primary element of CFRPs is a carbon filament; this is produced from a precursor polymer such as polyacrylonitrile (PAN), rayon, or petroleum pitch. For synthetic polymers such as PAN or rayon, the precursor is first spun into filament yarns, using chemical and mechanical processes to initially align the polymer chains in a way to enhance the final physical properties of the completed carbon fiber. Precursor compositions and mechanical processes used during spinning filament yarns may vary among manufacturers. After drawing or spinning, the polymer filament yarns are then heated to drive off non-carbon atoms (carbonization), producing the final carbon fiber. The carbon fibers filament yarns may be further treated to improve handling qualities, then wound onto bobbins. From these fibers, a unidirectional sheet is created. These sheets are layered onto each other in a quasi-isotropic layup, e.g. 0°, +60°, or −60° relative to each other.
From the elementary fiber, a bidirectional woven sheet can be created, i.e. a twill with a 2/2 weave. The process by which most CFRPs are made varies, depending on the piece being created, the finish (outside gloss) required, and how many of the piece will be produced. In addition, the choice of matrix can have a profound effect on the properties of the finished composite.
Many CFRP parts are created with a single layer of carbon fabric that is backed with fiberglass. A tool called a chopper gun is used to quickly create these composite parts. Once a thin shell is created out of carbon fiber, the chopper gun cuts rolls of fiberglass into short lengths and sprays resin at the same time, so that the fiberglass and resin are mixed on the spot. The resin is either external mix, wherein the hardener and resin are sprayed separately, or internal mixed, which requires cleaning after every use.
Manufacturing methods may include the following:
Molding
One method of producing CFRP parts is by layering sheets of carbon fiber cloth into a mold in the shape of the final product. The alignment and weave of the cloth fibers is chosen to optimize the strength and stiffness properties of the resulting material. The mold is then filled with epoxy and is heated or air-cured. The resulting part is very corrosion-resistant, stiff, and strong for its weight. Parts used in less critical areas are manufactured by draping cloth over a mold, with epoxy either pre-impregnated into the fibers (also known as pre-preg) or "painted" over it. High-performance parts using single molds are often vacuum-bagged and/or autoclave-cured, because even small air bubbles in the material will reduce strength. An alternative to the autoclave method is to use internal pressure via inflatable air bladders or EPS foam inside the non-cured laid-up carbon fiber.
Vacuum bagging
For simple pieces of which relatively few copies are needed (one or two per day), a vacuum bag can be used. A fiberglass, carbon fiber, or aluminum mold is polished and waxed, and has a release agent applied before the fabric and resin are applied, and the vacuum is pulled and set aside to allow the piece to cure (harden). There are three ways to apply the resin to the fabric in a vacuum mold.
The first method is manual and called a wet layup, where the two-part resin is mixed and applied before being laid in the mold and placed in the bag. The other one is done by infusion, where the dry fabric and mold are placed inside the bag while the vacuum pulls the resin through a small tube into the bag, then through a tube with holes or something similar to evenly spread the resin throughout the fabric. Wire loom works perfectly for a tube that requires holes inside the bag. Both of these methods of applying resin require hand work to spread the resin evenly for a glossy finish with very small pin-holes.
A third method of constructing composite materials is known as a dry layup. Here, the carbon fiber material is already impregnated with resin (pre-preg) and is applied to the mold in a similar fashion to adhesive film. The assembly is then placed in a vacuum to cure. The dry layup method has the least amount of resin waste and can achieve lighter constructions than wet layup. Also, because larger amounts of resin are more difficult to bleed out with wet layup methods, pre-preg parts generally have fewer pinholes. Pinhole elimination with minimal resin amounts generally require the use of autoclave pressures to purge the residual gases out.
Compression molding
A quicker method uses a compression mold, also commonly known as carbon fiber forging. This is a two (male and female), or multi-piece mold, usually made out of aluminum or steel and more recently 3D printed plastic. The mold components are pressed together with the fabric and resin loaded into the inner cavity that ultimately becomes the desired component. The benefit is the speed of the entire process. Some car manufacturers, such as BMW, claimed to be able to cycle a new part every 80 seconds. However, this technique has a very high initial cost since the molds require CNC machining of very high precision.
Filament winding
For difficult or convoluted shapes, a filament winder can be used to make CFRP parts by winding filaments around a mandrel or a core.
Applications
Applications for CFRPs include the following:
Aerospace engineering
The Airbus A350 XWB is built of 53% CFRP including wing spars and fuselage components, overtaking the Boeing 787 Dreamliner, for the aircraft with the highest weight ratio for CFRP, which is 50%. This was one of the first commercial aircraft to have wing spars made from composites. The Airbus A380 was one of the first commercial airliners to have a central wing-box made of CFRP; it is the first to have a smoothly contoured wing cross-section instead of the wings being partitioned span-wise into sections. This flowing, continuous cross section optimises aerodynamic efficiency. Moreover, the trailing edge, along with the rear bulkhead, empennage, and un-pressurised fuselage are made of CFRP. However, many delays have pushed order delivery dates back because of problems with the manufacture of these parts. Many aircraft that use CFRPs have experienced delays with delivery dates due to the relatively new processes used to make CFRP components, whereas metallic structures have been studied and used on airframes for decades, and the processes are relatively well understood. A recurrent problem is the monitoring of structural ageing, for which new methods are constantly investigated, due to the unusual multi-material and anisotropic nature of CFRPs.
In 1968 a Hyfil carbon-fiber fan assembly was in service on the Rolls-Royce Conways of the Vickers VC10s operated by BOAC.
Specialist aircraft designers and manufacturers Scaled Composites have made extensive use of CFRPs throughout their design range, including the first private crewed spacecraft Spaceship One. CFRPs are widely used in micro air vehicles (MAVs) because of their high strength-to-weight ratio.
Automotive engineering
CFRPs are extensively used in high-end automobile racing. The high cost of carbon fiber is mitigated by the material's unsurpassed strength-to-weight ratio, and low weight is essential for high-performance automobile racing. Race-car manufacturers have also developed methods to give carbon fiber pieces strength in a certain direction, making it strong in a load-bearing direction, but weak in directions where little or no load would be placed on the member. Conversely, manufacturers developed omnidirectional carbon fiber weaves that apply strength in all directions. This type of carbon fiber assembly is most widely used in the "safety cell" monocoque chassis assembly of high-performance race-cars. The first carbon fiber monocoque chassis was introduced in Formula One by McLaren in the 1981 season. It was designed by John Barnard and was widely copied in the following seasons by other F1 teams due to the extra rigidity provided to the chassis of the cars.
Many supercars over the past few decades have incorporated CFRPs extensively in their manufacture, using it for their monocoque chassis as well as other components. As far back as 1971, the Citroën SM offered optional lightweight carbon fiber wheels.
Use of the material has been more readily adopted by low-volume manufacturers who used it primarily for creating body-panels for some of their high-end cars due to its increased strength and decreased weight compared with the glass-reinforced polymer they used for the majority of their products.
Civil engineering
CFRPs have become a notable material in structural engineering applications. Studied in an academic context as to their potential benefits in construction, CFRPs have also proved themselves cost-effective in a number of field applications strengthening concrete, masonry, steel, cast iron, and timber structures. Their use in industry can be either for retrofitting to strengthen an existing structure or as an alternative reinforcing (or prestressing) material instead of steel from the outset of a project.
Retrofitting has become the increasingly dominant use of the material in civil engineering, and applications include increasing the load capacity of old structures (such as bridges, beams, ceilings, columns and walls) that were designed to tolerate far lower service loads than they are experiencing today, seismic retrofitting, and repair of damaged structures. Retrofitting is popular in many instances as the cost of replacing the deficient structure can greatly exceed the cost of strengthening using CFRP.
Applied to reinforced concrete structures for flexure, the use of CFRPs typically has a large impact on strength (doubling or more the strength of the section is not uncommon), but only moderately increases stiffness (as little as 10%). This is because the material used in such applications is typically very strong (e.g., 3 GPa ultimate tensile strength, more than 10 times mild steel) but not particularly stiff (150 to 250 GPa elastic modulus, a little less than steel, is typical). As a consequence, only small cross-sectional areas of the material are used. Small areas of very high strength but moderate stiffness material will significantly increase strength, but not stiffness.
CFRPs can also be used to enhance shear strength of reinforced concrete by wrapping fabrics or fibers around the section to be strengthened. Wrapping around sections (such as bridge or building columns) can also enhance the ductility of the section, greatly increasing the resistance to collapse under dynamic loading. Such 'seismic retrofit' is the major application in earthquake-prone areas, since it is much more economic than alternative methods.
If a column is circular (or nearly so) an increase in axial capacity is also achieved by wrapping. In this application, the confinement of the CFRP wrap enhances the compressive strength of the concrete. However, although large increases are achieved in the ultimate collapse load, the concrete will crack at only slightly enhanced load, meaning that this application is only occasionally used. Specialist ultra-high modulus CFRP (with tensile modulus of 420 GPa or more) is one of the few practical methods of strengthening cast iron beams. In typical use, it is bonded to the tensile flange of the section, both increasing the stiffness of the section and lowering the neutral axis, thus greatly reducing the maximum tensile stress in the cast iron.
In the United States, prestressed concrete cylinder pipes (PCCP) account for a vast majority of water transmission mains. Due to their large diameters, failures of PCCP are usually catastrophic and affect large populations. Approximately of PCCP were installed between 1940 and 2006. Corrosion in the form of hydrogen embrittlement has been blamed for the gradual deterioration of the prestressing wires in many PCCP lines. Over the past decade, CFRPs have been used to internally line PCCP, resulting in a fully structural strengthening system. Inside a PCCP line, the CFRP liner acts as a barrier that controls the level of strain experienced by the steel cylinder in the host pipe. The composite liner enables the steel cylinder to perform within its elastic range, to ensure the pipeline's long-term performance is maintained. CFRP liner designs are based on strain compatibility between the liner and host pipe.
CFRPs are more costly materials than their commonly used counterparts in the construction industry, glass fiber-reinforced polymers (GFRPs) and aramid fiber-reinforced polymers (AFRPs), though CFRPs are, in general, regarded as having superior properties. Much research continues to be done on using CFRPs both for retrofitting and as an alternative to steel as a reinforcing or prestressing material. Cost remains an issue and long-term durability questions remain. Some are concerned about the brittle nature of CFRPs, in contrast to the ductility of steel. Though design codes have been drawn up by institutions such as the American Concrete Institute, there remains some hesitation among the engineering community about implementing these alternative materials. In part, this is due to a lack of standardization and to the proprietary nature of the fiber and resin combinations on the market.
Carbon-fiber microelectrodes
Carbon fibers are used for fabrication of carbon-fiber microelectrodes. In this application typically a single carbon fiber with diameter of 5–7 μm is sealed in a glass capillary. At the tip the capillary is either sealed with epoxy and polished to make carbon-fiber disk microelectrode or the fiber is cut to a length of 75–150 μm to make carbon-fiber cylinder electrode. Carbon-fiber microelectrodes are used either in amperometry or fast-scan cyclic voltammetry for detection of biochemical signalling.
Sports goods
CFRPs are now widely used in sports equipment such as in squash, tennis, and badminton racquets, sport kite spars, high-quality arrow shafts, hockey sticks, fishing rods, surfboards, high end swim fins, and rowing shells. Amputee athletes such as Jonnie Peacock use carbon fiber blades for running. It is used as a shank plate in some basketball sneakers to keep the foot stable, usually running the length of the shoe just above the sole and left exposed in some areas, usually in the arch.
Controversially, in 2006, cricket bats with a thin carbon-fiber layer on the back were introduced and used in competitive matches by high-profile players including Ricky Ponting and Michael Hussey. The carbon fiber was claimed to merely increase the durability of the bats, but it was banned from all first-class matches by the ICC in 2007.
A CFRP bicycle frame weighs less than one of steel, aluminum, or titanium having the same strength. The type and orientation of the carbon-fiber weave can be designed to maximize stiffness in required directions. Frames can be tuned to address different riding styles: sprint events require stiffer frames while endurance events may require more flexible frames for rider comfort over longer periods. The variety of shapes it can be built into has further increased stiffness and also allowed aerodynamic tube sections. CFRP forks including suspension fork crowns and steerers, handlebars, seatposts, and crank arms are becoming more common on medium as well as higher-priced bicycles. CFRP rims remain expensive but their stability compared to aluminium reduces the need to re-true a wheel and the reduced mass reduces the moment of inertia of the wheel. CFRP spokes are rare and most carbon wheelsets retain traditional stainless steel spokes. CFRPs also appear increasingly in other components such as derailleur parts, brake and shifter levers and bodies, cassette sprocket carriers, suspension linkages, disc brake rotors, pedals, shoe soles, and saddle rails. Although strong and light, impact, over-torquing, or improper installation of CFRP components has resulted in cracking and failures, which may be difficult or impossible to repair.
Other applications
The fire resistance of polymers and thermo-set composites is significantly improved if a thin layer of carbon fibers is moulded near the surface because a dense, compact layer of carbon fibers efficiently reflects heat.
CFRPs are being used in an increasing number of high-end products that require stiffness and low weight, these include:
Musical instruments, including violin bows; guitar picks, necks (carbon fiber rods), and pick-guards; drum shells; bagpipe chanters; piano actions; and entire musical instruments such as carbon fiber cellos, violas, and violins, acoustic guitars and ukuleles; also audio components such as turntables and loudspeakers.
Firearms use it to replace certain metal, wood, and fiberglass components but many of the internal parts are still limited to metal alloys as current reinforced plastics are unsuitable.
High-performance drone bodies and other radio-controlled vehicle and aircraft components such as helicopter rotor blades.
Lightweight poles such as: tripod legs, tent poles, fishing rods, billiards cues, walking sticks, and high-reach poles such as for window cleaning.
Dentistry, carbon fiber posts are used in restoring root canal treated teeth.
Railed train bogies for passenger service. This reduces the weight by up to 50% compared to metal bogies, which contributes to energy savings.
Laptop shells and other high performance cases.
Carbon woven fabrics.
Archery: carbon fiber arrows and bolts, stock (for crossbows) and riser (for vertical bows), and rail.
As a filament for the 3D fused deposition modeling printing process, carbon fiber-reinforced plastic (polyamide-carbon filament) is used for the production of sturdy but lightweight tools and parts due to its high strength and tear length.
District heating pipe rehabilitation, using CIPP method.
Disposal and recycling
CFRPs have a long service lifetime when protected from the sun. When it is time to decommission CFRPs, they cannot be melted down in air like many metals. When free of vinyl (PVC or polyvinyl chloride) and other halogenated polymers, CFRPs can be thermally decomposed via thermal depolymerization in an oxygen-free environment. This can be accomplished in a refinery in a one-step process. Capture and reuse of the carbon and monomers is then possible. CFRPs can also be milled or shredded at low temperature to reclaim the carbon fiber; however, this process shortens the fibers dramatically. Just as with downcycled paper, the shortened fibers cause the recycled material to be weaker than the original material. There are still many industrial applications that do not need the strength of full-length carbon fiber reinforcement. For example, chopped reclaimed carbon fiber can be used in consumer electronics, such as laptops. It provides excellent reinforcement of the polymers used even if it lacks the strength-to-weight ratio of an aerospace component.
Carbon nanotube reinforced polymer (CNRP)
In 2009, Zyvex Technologies introduced carbon nanotube-reinforced epoxy and carbon pre-pregs. Carbon nanotube reinforced polymer (CNRP) is several times stronger and tougher than typical CFRPs and is used in the Lockheed Martin F-35 Lightning II as a structural material for aircraft. CNRP still uses carbon fiber as the primary reinforcement, but the binding matrix is a carbon nanotube-filled epoxy.
See also
Forged carbon fiber
Carbon-ceramic
Carbotanium
References
External links
Japan Carbon Fiber Manufacturers Association (English)
Engineers design composite bracing system for injured Hokie running back Cedric Humes
The New Steel a 1968 Flight article on the announcement of carbon fiber
Carbon Fibres – the First Five Years A 1971 Flight article on carbon fiber in the aviation field
Aerospace materials
Allotropes of carbon
Composite materials
Fibre-reinforced polymers
Synthetic fibers | Carbon-fiber reinforced polymer | [
"Physics",
"Chemistry",
"Engineering"
] | 5,241 | [
"Allotropes of carbon",
"Synthetic fibers",
"Allotropes",
"Aerospace materials",
"Synthetic materials",
"Composite materials",
"Materials",
"Aerospace engineering",
"Matter"
] |
23,812,495 | https://en.wikipedia.org/wiki/X-ray%20transient | X-ray emission occurs from many celestial objects. These emissions can have a pattern, occur intermittently, or as a transient astronomical event. In X-ray astronomy many sources have been discovered by placing an X-ray detector above the Earth's atmosphere. Often, the first X-ray source discovered in many constellations is an X-ray transient. These objects show changing levels of X-ray emission. NRL astronomer Dr. Joseph Lazio stated: " ... the sky is known to be full of transient objects emitting at X- and gamma-ray wavelengths, ...". There are a growing number of recurrent X-ray transients. In the sense of traveling as a transient, the only stellar X-ray source that does not belong to a constellation is the Sun. As seen from Earth, the Sun moves from west to east along the ecliptic, passing over the course of one year through the twelve constellations of the Zodiac, and Ophiuchus.
Exotic X-ray transients
SCP 06F6 is (or was) an astronomical object of unknown type, discovered on February 21, 2006, in the constellation Boötes during a survey of galaxy cluster CL 1432.5+3332.8 with the Hubble Space Telescope's Advanced Camera for Surveys Wide Field Channel.
The European X-ray satellite XMM Newton made an observation in early August 2006 which appears to show an X-ray glow around SCP 06F6, two orders of magnitude more luminous than that of supernovae.
Nova or supernova
Most astronomical X-ray transient sources have simple and consistent time structures; typically a rapid brightening followed by gradual fading, as in a nova or supernova.
GRO J0422+32 is an X-ray nova and black hole candidate that was discovered by the BATSE instrument on the Compton Gamma Ray Observatory satellite on Aug 5 1992. During the outburst, it was observed to be stronger than the Crab Nebula gamma-ray source out to photon energies of about 500 keV.
Transient binary X-ray source
XTE J1650-500 is a transient binary X-ray source located in the constellation Ara. The binary period is 0.32 d.
Soft X-ray transient
"Soft X-ray transients" are composed of some type of compact object (probably a neutron star) and some type of "normal", low-mass star (i.e. a star with a mass of some fraction of the Sun's mass). These objects show changing levels of low-energy, or "soft", X-ray emission, probably produced somehow by variable transfer of mass from the normal star to the compact object. In effect the compact object "gobbles up" the normal star, and the X-ray emission can provide the best view of how this process occurs.
Soft X-ray transients Cen X-4 and Aql X-1 were discovered by Hakucho, Japan's first X-ray astronomy satellite.
X-ray burster
X-ray bursters are one class of X-ray binary stars exhibiting periodic and rapid increases in luminosity (typically a factor of 10 or greater) peaked in the X-ray regime of the electromagnetic spectrum. These astrophysical systems are composed of an accreting compact object, typically a neutron star or occasionally a black hole, and a companion 'donor' star; the mass of the donor star is used to categorize the system as either a high mass (above 10 solar masses) or low mass (less than 1 solar mass) X-ray binary, abbreviated as HMXB and LMXB, respectively. X-ray bursters differ observationally from other X-ray transient sources (such as X-ray pulsars and soft X-ray transients), showing a sharp rise time (1 – 10 seconds) followed by spectral softening (a property of cooling black bodies). Individual bursts are characterized by an integrated flux of 10^39–10^40 ergs.
Gamma-ray burster
A gamma-ray burst (GRB) is a highly luminous flash of gamma rays — the most energetic form of electromagnetic radiation. GRB 970228 was a GRB detected on Feb 28 1997 at 02:58 UTC. Prior to this event, GRBs had only been observed at gamma wavelengths. For several years physicists had expected these bursts to be followed by a longer-lived afterglow at longer wavelengths, such as radio waves, x-rays, and even visible light. This was the first burst for which such an afterglow was observed.
A transient x-ray source was detected which faded with a power law slope in the days following the burst. This x-ray afterglow was the first GRB afterglow ever detected.
Transient X-ray pulsars
For some types of X-ray pulsars, the companion star is a Be star that rotates very rapidly and apparently sheds a disk of gas around its equator. The orbits of the neutron star with these companions are usually large and very elliptical in shape. When the neutron star passes nearby or through the Be circumstellar disk, it will capture material and temporarily become an X-ray pulsar. The circumstellar disk around the Be star expands and contracts for unknown reasons, so these are transient X-ray pulsars that are observed only intermittently, often with months to years between episodes of observable X-ray pulsation.
SAX J1808.4-3658 is a transient, accreting millisecond X-ray pulsar that is intermittent. In addition, X-ray burst oscillations and quasi-periodic oscillations in addition to coherent X-ray pulsations have been seen from SAX J1808.4-3658, making it a Rosetta stone for interpretation of the timing behavior of low-mass X-ray binaries.
Supergiant Fast X-ray Transients (SFXTs)
There are a growing number of recurrent X-ray transients, characterized by short outbursts with very fast rise times (~ tens of minutes) and typical durations of a few hours that are associated with OB supergiants and hence define a new class of massive X-ray binaries: Supergiant Fast X-ray Transients (SFXTs). XTE J1739–302 is one of these. Discovered in 1997, remaining active only one day, with an X-ray spectrum well fitted with a thermal bremsstrahlung (temperature of ~20 keV), resembling the spectral properties of accreting pulsars, it was at first classified as a peculiar Be/X-ray transient with an unusually short outburst. A new burst was observed on Apr 8 2008 with Swift.
The Sun as an X-ray transient
The quiet Sun, although less active than active regions, is awash with dynamic processes and transient events (bright points, nanoflares and jets).
A coronal mass ejection (CME) is an ejected plasma consisting primarily of electrons and protons (in addition to small quantities of heavier elements such as helium, oxygen, and iron), plus the entraining coronal closed magnetic field regions. Small-scale energetic signatures such as plasma heating (observed as compact soft X-ray brightening) may be indicative of impending CMEs. The soft X-ray sigmoid (an S-shaped intensity of soft X-rays) is an observational manifestation of the connection between coronal structure and CME production.
The first detection of a Coronal mass ejection (CME) as such was made on Dec 1 1971 by R. Tousey of the US Naval Research Laboratory using the 7th Orbiting Solar Observatory (OSO 7). Earlier observations of coronal transients or even phenomena observed visually during solar eclipses are now understood as essentially the same thing.
The largest geomagnetic perturbation, resulting presumably from a "prehistoric" CME, coincided with the first-observed solar flare, in 1859. The flare was observed visually by Richard Christopher Carrington and the geomagnetic storm was observed with the recording magnetograph at Kew Gardens. The same instrument recorded a crotchet, an instantaneous perturbation of the Earth's ionosphere by ionizing soft X-rays. This could not easily be understood at the time because it predated the discovery of X-rays (by Roentgen) and the recognition of the ionosphere (by Kennelly and Heaviside).
Transient X-rays from Jupiter
Unlike Earth's aurorae, which are transient and only occur at times of heightened solar activity, Jupiter's aurorae are permanent, though their intensity varies from day to day. They consist of three main components: the main ovals, which are bright, narrow (< 1000 km in width) circular features located at approximately 16° from the magnetic poles; the satellite auroral spots, which correspond to the footprints of the magnetic field lines connecting their ionospheres with the ionosphere of Jupiter, and transient polar emissions situated within the main ovals. The auroral emissions were detected in almost all parts of the electromagnetic spectrum from radio waves to X-rays (up to 3 keV).
Detecting X-ray transients
The X-ray monitor of Solwind, designated NRL-608 or XMON, was a collaboration between the Naval Research Laboratory and Los Alamos National Laboratory. The monitor consisted of 2 collimated argon proportional counters. The instrument bandwidth of 3-10 keV was defined by the detector window absorption (the window was 0.254 mm beryllium) and the upper level discriminator. The active gas volume (P-10 mixture) was 2.54 cm deep, providing good efficiency up to 10 keV. Counts were recorded in 2 energy channels. Slat collimators defined a FOV of 3° x 30° (FWHM) for each detector; the long axes of the FOVs were perpendicular to each other. The long axes were inclined 45 degrees to the scan direction, allowing localization of transient events to about 1 degree.
The PHEBUS experiment recorded high energy transient events in the range 100 keV to 100 MeV. It consisted of two independent detectors and their associated electronics. Each detector consisted of a bismuth germanate (BGO) crystal 78 mm in diameter by 120 mm thick, surrounded by a plastic anti-coincidence jacket. The two detectors were arranged on the spacecraft so as to observe 4π steradians. The burst mode was triggered when the count rate in the 0.1 to 1.5 MeV energy range exceeded the background level by 8 σ (standard deviations) in either 0.25 or 1.0 seconds. There were 116 channels over the energy range.
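An added sketch of such a significance-threshold trigger (illustrative only; Poisson counting noise and an arbitrary background level are assumed):
import math

def burst_trigger(counts, background, n_sigma=8.0):
    # Flag time bins whose counts exceed the background estimate by n_sigma
    # standard deviations, taking sigma = sqrt(background) for Poisson noise.
    sigma = math.sqrt(background)
    return [i for i, c in enumerate(counts) if c > background + n_sigma * sigma]

print(burst_trigger([105, 98, 110, 400, 102], background=100.0))   # flags bin 3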
Also on board the Granat International Astrophysical Observatory were four WATCH instruments that could localize bright sources in the 6 to 180 keV range to within 0.5° using a Rotation Modulation Collimator. Taken together, the instruments' three fields of view covered approximately 75% of the sky. The energy resolution was 30% FWHM at 60 keV. During quiet periods, count rates in two energy bands (6 to 15 and 15 to 180 keV) were accumulated for 4, 8, or 16 seconds, depending on onboard computer memory availability. During a burst or transient event, count rates were accumulated with a time resolution of 1 s per 36 s.
The Compton Gamma Ray Observatory (CGRO) carries the Burst and Transient Source Experiment (BATSE) which detects in the 20 keV to 8 MeV range.
WIND was launched on November 1, 1994. At first, the satellite had a lunar swingby orbit around the Earth; with the assistance of the Moon's gravitational field, Wind's apogee was kept over the day hemisphere of the Earth and magnetospheric observations were made. Later in the mission, the Wind spacecraft was inserted into a special "halo" orbit in the solar wind upstream from the Earth, about the sunward Sun–Earth equilibrium point (L1). The satellite has a spin period of ~20 seconds, with the spin axis normal to the ecliptic. WIND carries the Transient Gamma-Ray Spectrometer (TGRS), which covers the energy range 15 keV – 10 MeV with an energy resolution of 2.0 keV at 1.0 MeV (E/ΔE = 500).
The third US Small Astronomy Satellite (SAS-3) was launched on May 7, 1975, with 3 major scientific objectives: 1) determine bright X-ray source locations to an accuracy of 15 arcseconds; 2) study selected sources over the energy range 0.1–55 keV; and 3) continuously search the sky for X-ray novae, flares, and other transient phenomena. It was a spinning satellite with pointing capability. SAS-3 was the first to discover X-rays from a highly magnetic white dwarf binary system, AM Her; it also discovered X-rays from Algol and HZ 43 and surveyed the soft X-ray background (0.1–0.28 keV).
Tenma, the second Japanese X-ray astronomy satellite, was launched on February 20, 1983. Tenma carried gas scintillation proportional counter (GSPC) detectors, which had an improved energy resolution (by a factor of 2) compared to ordinary proportional counters, and performed the first sensitive measurements of the iron spectral region for many astronomical objects. Energy range: 0.1–60 keV. Gas scintillation proportional counters: 10 units of 80 cm² each, FOV ~3° (FWHM), 2–60 keV. Transient source monitor: 2–10 keV.
Astrosat, India's first dedicated astronomy satellite, scheduled for launch on board the PSLV in mid-2010, will monitor the X-ray sky for new transients, among other scientific objectives.
See also
X-ray astronomy
X-ray astrophysical sources
References
External links
HETE-2: High Energy Transient Explorer
BATSE: Burst and Transient Source Explorer
Stellar phenomena
Astronomical events
X-ray transient | X-ray transient | [
"Physics",
"Astronomy"
] | 2,918 | [
"Physical phenomena",
"Astronomical events",
"Astronomical X-ray sources",
"Stellar phenomena",
"Astronomical objects"
] |
23,813,449 | https://en.wikipedia.org/wiki/Virtual%20resource%20partitioning | Virtual resource partitioning (VRP) is an operating-system-level virtualization technology that allocates computing resources (such as CPU and I/O) to transactions. Conventional virtualization technologies allocate resources on an operating-system-wide basis (Windows, Linux, etc.); VRP works two levels deeper by allowing regulation and control of the resources used by specific transactions within an application.
In many computerized environments, a single user, application, or transaction can appropriate all server resources and thereby degrade the quality of service and user experience of other active users, applications, or transactions. For example, a single report in a data warehouse environment can monopolize data access by demanding large amounts of data. Similarly, a CPU-bound application may consume all server processing power and starve other activities.
VRP allows the resource consumption of individual transactions to be balanced, regulated, and manipulated, thereby improving overall quality of service, compliance with service-level agreements, and the end-user experience.
Technology overview
VRP is usually implemented in the OS in a way that is completely transparent to the application or transaction. The technology creates virtual resource "lanes", each of which has access to a controllable amount of resources, and redirects specific transactions into those lanes, allowing them to consume more or fewer resources; a rough sketch of this idea is shown below.
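Commercial VRP implementations are proprietary and kernel-internal, so the following is only a hypothetical sketch of the "lane" idea. It uses Linux control groups (cgroups v2), a real but unrelated kernel mechanism, to cap the CPU share of the processes handling particular transactions; the lane name and quota values are invented for illustration, and the script assumes root privileges and a cgroup v2 mount with the cpu controller enabled.

```python
# Hypothetical illustration of a CPU-limited "lane" using Linux cgroups v2.
# Not the implementation used by any VRP vendor.
import os
from pathlib import Path

CGROUP_ROOT = Path("/sys/fs/cgroup")

def create_lane(name: str, quota_us: int, period_us: int = 100_000) -> Path:
    """Create a lane allowed at most quota_us of CPU time per period_us."""
    lane = CGROUP_ROOT / name
    lane.mkdir(exist_ok=True)
    # e.g. "50000 100000" means at most 50 ms of CPU every 100 ms (half a core).
    (lane / "cpu.max").write_text(f"{quota_us} {period_us}")
    return lane

def move_transaction(lane: Path, pid: int) -> None:
    """Redirect the process serving a given transaction into the lane."""
    (lane / "cgroup.procs").write_text(str(pid))

if __name__ == "__main__":
    slow_lane = create_lane("reporting_lane", quota_us=50_000)  # ~0.5 CPU
    move_transaction(slow_lane, os.getpid())  # throttle this process as a demo
```

A real VRP system would make such decisions per transaction and adjust the limits continuously, rather than statically as in this sketch.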
VRP can be implemented in any OS and is available on Windows, Red Hat and SUSE Linux, HP-UX, Solaris, Tru64, AIX, and others.
In each OS, applications communicate with the kernel through different interfaces, so each OS requires a different VRP implementation. A safe implementation of VRP usually combines several resource-allocation techniques. Because the transaction type, the resources consumed, and the kernel state vary rapidly, the VRP implementation must adapt to such changes in real time.
References
VRP as a new trend in the IT industry
VRP technical overview as implemented by one of the VRP vendors
Virtualization | Virtual resource partitioning | [
"Engineering"
] | 397 | [
"Computer networks engineering",
"Virtualization"
] |
23,814,141 | https://en.wikipedia.org/wiki/Journal%20of%20Cheminformatics | The Journal of Cheminformatics is a peer-reviewed open access scientific journal that covers cheminformatics and molecular modelling. It was established in 2009 with David Wild (Indiana University) and Christoph Steinbeck (then at EMBL-EBI) as founding editors-in-chief, and was originally published by Chemistry Central. At the end of 2015, the Chemistry Central brand was retired and its titles, including Journal of Cheminformatics, were merged with the SpringerOpen portfolio of open access journals.
The editors-in-chief subsequently became Rajarshi Guha (National Center for Advancing Translational Sciences) and Egon Willighagen (Maastricht University). The journal has issued a few special issues ("article collections") in 2011 and 2012, covering topics such as PubChem3D, the Resource Description Framework, and the International Chemical Identifier.
In June 2021 Willighagen announced his intention to step down at the end of the year, explaining in an open letter that the publisher Springer Nature was not sufficiently FAIR and open. Barbara Zdrazil started as editor in chief in 2022.
Abstracting and indexing
The journal is abstracted and indexed in:
Chemical Abstracts Service
Current Contents/Physical, Chemical & Earth Sciences
Europe PubMed Central
Science Citation Index Expanded
Scopus
According to the Journal Citation Reports, the journal has a 2021 impact factor of 8.489. The most cited paper is on a cross-platform molecule editor and visualizer called Avogadro, which has been cited more than 6800 times as of June 2024 according to the Web of Science.
References
External links
Computer science journals
Cheminformatics
Creative Commons Attribution-licensed journals
Chemistry journals
Academic journals established in 2009
English-language journals | Journal of Cheminformatics | [
"Chemistry"
] | 359 | [
"Computational chemistry",
"nan",
"Cheminformatics"
] |
23,814,905 | https://en.wikipedia.org/wiki/Normally%20hyperbolic%20invariant%20manifold | A normally hyperbolic invariant manifold (NHIM) is a natural generalization of a hyperbolic fixed point and a hyperbolic set. The difference can be described heuristically as follows: for a manifold to be normally hyperbolic, the dynamics on the manifold itself is allowed to be neutral compared with the dynamics nearby, which is not allowed for a hyperbolic set. NHIMs were introduced by Neil Fenichel in 1972. In that and subsequent papers, Fenichel proved that NHIMs possess stable and unstable manifolds and, more importantly, that NHIMs and their stable and unstable manifolds persist under small perturbations. Thus, in problems involving perturbation theory, invariant manifolds exist with certain hyperbolicity properties, which can in turn be used to obtain qualitative information about a dynamical system.
Definition
Let M be a compact smooth manifold, f: M → M a diffeomorphism, and Df: TM → TM the differential of f. An f-invariant submanifold Λ of M is said to be a normally hyperbolic invariant manifold if the restriction to Λ of the tangent bundle of M admits a splitting into a sum of three Df-invariant subbundles, one being the tangent bundle of Λ, the others being the stable bundle and the unstable bundle, denoted Es and Eu, respectively. With respect to some Riemannian metric on M, the restriction of Df to Es must be a contraction, the restriction of Df to Eu must be an expansion, and Df must be relatively neutral on TΛ. Thus, there exist constants $0 < \lambda < \mu^{-1} < 1$ and $c > 0$ such that

$$T_\Lambda M = T\Lambda \oplus E^s \oplus E^u,$$
$$\|Df^n v\| \le c\,\lambda^n \|v\| \quad \text{for all } v \in E^s \text{ and } n > 0,$$
$$\|Df^{-n} v\| \le c\,\lambda^n \|v\| \quad \text{for all } v \in E^u \text{ and } n > 0,$$

and

$$\|Df^n v\| \le c\,\mu^{|n|} \|v\| \quad \text{for all } v \in T\Lambda \text{ and } n \in \mathbb{Z}.$$
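A simple, commonly cited example, given here as an illustration rather than drawn from the references above: consider the flow on $\mathbb{R}^2 \times S^1$

$$\dot x = -x, \qquad \dot y = y, \qquad \dot\theta = 1.$$

The circle $\Lambda = \{(0,0)\} \times S^1$ is invariant (its time-one map provides the corresponding diffeomorphism). The dynamics tangent to $\Lambda$ is a neutral rotation in $\theta$, while the normal directions contract like $e^{-t}$ along $x$ and expand like $e^{t}$ along $y$, so $\Lambda$ is a normally hyperbolic invariant manifold with $E^s = \operatorname{span}(\partial_x)$ and $E^u = \operatorname{span}(\partial_y)$.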
See also
Stable manifold
Center manifold
Hyperbolic fixed point
Hyperbolic set
Hyperbolic Lagrangian coherent structures
References
M.W. Hirsch, C.C Pugh, and M. Shub Invariant Manifolds, Springer-Verlag (1977),
Dynamical systems | Normally hyperbolic invariant manifold | [
"Physics",
"Mathematics"
] | 386 | [
"Mechanics",
"Dynamical systems"
] |
22,298,230 | https://en.wikipedia.org/wiki/Solvothermal%20synthesis | Solvothermal synthesis is a method of producing chemical compounds in which a solvent containing reagents is put under high pressure and temperature in an autoclave. Many substances dissolve better in the same solvent under such conditions than at standard conditions, enabling reactions that would not otherwise occur and leading to new compounds or polymorphs. Solvothermal synthesis is very similar to the hydrothermal route; both are typically conducted in a stainless steel autoclave. The only difference is that the precursor solution is usually non-aqueous.
Solvothermal synthesis has been used to prepare metal–organic frameworks (MOFs), titanium dioxide, graphene, carbon spheres, chalcogenides, and other materials.
Solvents
Besides water (hydrothermal synthesis), solvothermal syntheses make use of a large range of solvents, including ammonia, carbon dioxide, dimethylformamide, and various alcohols such as methanol, or glycols such as hexane-1,6-diol.
Formic acid as reaction medium
Formic acid decomposes at high temperatures either into carbon dioxide and hydrogen or into carbon monoxide and water. This property allows formic acid to be used as a reducing and carbon dioxide-rich reaction medium in which it is possible to form various oxides and carbonates.
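For clarity, the two decomposition channels just mentioned can be written as the standard pair of reactions (added here as a summary, not quoted from a specific source):

$$\mathrm{HCOOH \longrightarrow CO_2 + H_2} \qquad \text{(dehydrogenation/decarboxylation)}$$
$$\mathrm{HCOOH \longrightarrow CO + H_2O} \qquad \text{(dehydration)}$$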
Ammonia as reaction medium
The critical temperature and pressure of ammonia are 132.2 °C and 111 bar. Under these conditions, it is possible to obtain a range of amides, imides, and nitrides. Although its dielectric constant is lower than that of water, ammonia behaves as a polar solvent, especially at high pressures.
References
Chemical synthesis
Titanium compounds | Solvothermal synthesis | [
"Chemistry"
] | 341 | [
"nan",
"Chemical synthesis"
] |
22,303,694 | https://en.wikipedia.org/wiki/Isotopic%20shift | The isotopic shift (also called isotope shift) is the shift in various forms of spectroscopy that occurs when one nuclear isotope is replaced by another.
NMR spectroscopy
In NMR spectroscopy, isotopic effects on chemical shifts are typically small, far less than 1 ppm, the typical unit for measuring shifts. The NMR signals for H2 and HD are readily distinguished in terms of their chemical shifts. The asymmetry of the signal for the "protio" impurity in D2 arises from the differing chemical shifts of HD and H2.
Vibrational spectra
Isotopic shifts are best known and most widely used in vibrational spectroscopy, where the shifts are large, the vibrational frequencies being approximately proportional to the inverse square root of the reduced (isotopic) masses. In the case of hydrogen, the "H–D shift" is (1/2)1/2 ≈ 1/1.41. Thus, the (totally symmetric) C−H and C−D stretching vibrations of CH4 and CD4 occur at 2917 cm−1 and 2109 cm−1, respectively. This shift reflects the differing reduced masses of the affected bonds.
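As a rough numerical check (an illustrative back-of-the-envelope estimate treating the C–H/C–D oscillator as a pseudo-diatomic, which is only an approximation for methane's symmetric stretch):

$$\mu_{\mathrm{CH}} = \frac{12 \times 1}{12 + 1} \approx 0.92,\qquad \mu_{\mathrm{CD}} = \frac{12 \times 2}{12 + 2} \approx 1.71 \quad (\text{in u}),$$
$$\frac{\nu_{\mathrm{CH}}}{\nu_{\mathrm{CD}}} \approx \sqrt{\frac{\mu_{\mathrm{CD}}}{\mu_{\mathrm{CH}}}} \approx 1.36,\qquad \frac{2917\ \mathrm{cm^{-1}}}{1.36} \approx 2140\ \mathrm{cm^{-1}},$$

close to the observed 2109 cm−1; the residual difference reflects the coupled motion of all four hydrogens and the neglected anharmonicity.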
Atomic spectra
Isotope shifts in atomic spectra are minute differences between the electronic energy levels of isotopes of the same element. They are the focus of a multitude of theoretical and experimental efforts due to their importance for atomic and nuclear physics. If atomic spectra also have hyperfine structure, the shift refers to the center of gravity of the spectra.
From a nuclear physics perspective, isotope shifts provide complementary, precise atomic-physics probes for studying nuclear structure, and their main use is the nuclear-model-independent determination of differences in charge radii.
Two effects contribute to this shift:
Mass effects
The mass difference (mass shift), which dominates the isotope shift of light elements. It is traditionally divided into a normal mass shift (NMS), resulting from the change in the reduced electronic mass, and a specific mass shift (SMS), which is present in multi-electron atoms and ions.
The NMS is a purely kinematical effect, studied theoretically by Hughes and Eckart. It can be formulated as follows:
In a theoretical model of an atom with an infinitely massive nucleus, the energy (in wavenumbers) of a transition can be calculated from the Rydberg formula:

$$\tilde{\nu}_\infty = R_\infty \left( \frac{1}{n_1^2} - \frac{1}{n_2^2} \right),$$

where $n_1$ and $n_2$ are the principal quantum numbers and $R_\infty$ is the Rydberg constant.
However, for a nucleus with finite mass $M$, the reduced mass $\mu = \frac{m_e M}{m_e + M}$ is used in the expression for the Rydberg constant instead of the electron mass $m_e$:

$$R_M = R_\infty \frac{M}{M + m_e} = \frac{R_\infty}{1 + m_e/M}.$$
For two isotopes with atomic masses of approximately $A_1$ and $A_2$ (in units of the proton mass $m_p$, with $A_2 > A_1$), the difference in the energies of the same transition is

$$\Delta\tilde{\nu} = \tilde{\nu}_{A_2} - \tilde{\nu}_{A_1} \approx \tilde{\nu}_\infty\,\frac{m_e}{m_p}\left(\frac{1}{A_1} - \frac{1}{A_2}\right) = \tilde{\nu}_\infty\,\frac{m_e}{m_p}\,\frac{A_2 - A_1}{A_1 A_2}.$$

The above equations imply that such a mass shift is greatest for hydrogen and deuterium, since their fractional mass difference is the largest ($m_{\mathrm{D}}/m_{\mathrm{H}} \approx 2$).
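As a numerical illustration (a standard textbook estimate, not taken from the text above): for the Balmer-α line of hydrogen at about 6563 Å, the hydrogen–deuterium shift is

$$\frac{\Delta\lambda}{\lambda} \approx \frac{m_e}{m_p}\left(1 - \frac{1}{2}\right) \approx 2.7\times 10^{-4},\qquad \Delta\lambda \approx 6563\ \text{Å} \times 2.7\times 10^{-4} \approx 1.8\ \text{Å},$$

roughly the splitting by which deuterium was first identified spectroscopically.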
The effect of the specific mass shift was first observed in the spectrum of neon isotopes by Nagaoka and Mishima.
Consider the kinetic energy operator in the Schrödinger equation of a multi-electron atom:

$$T = \frac{\mathbf{P}^2}{2M} + \sum_i \frac{\mathbf{p}_i^2}{2m_e},$$

where $\mathbf{P}$ is the momentum of the nucleus of mass $M$ and $\mathbf{p}_i$ are the momenta of the electrons.
For a stationary atom, conservation of momentum gives

$$\mathbf{P} = -\sum_i \mathbf{p}_i.$$

Therefore, the kinetic energy operator becomes

$$T = \sum_i \frac{\mathbf{p}_i^2}{2m_e} + \frac{1}{M}\sum_{i<j} \mathbf{p}_i\cdot\mathbf{p}_j + \sum_i \frac{\mathbf{p}_i^2}{2M}.$$

Ignoring the second (cross) term, the remaining two terms can be combined, and the electron mass in the original kinetic term is replaced by the reduced mass $\mu = \frac{m_e M}{m_e + M}$, which gives the normal mass shift formulated above.
The second (cross) term in the kinetic energy gives an additional isotope shift in spectral lines, known as the specific mass shift:

$$H_{\mathrm{SMS}} = \frac{1}{M}\sum_{i<j} \mathbf{p}_i\cdot\mathbf{p}_j.$$
Using perturbation theory, the first-order energy shift can be calculated as

$$\Delta E_{\mathrm{SMS}} = \frac{1}{M}\left\langle \psi \left| \sum_{i<j} \mathbf{p}_i\cdot\mathbf{p}_j \right| \psi \right\rangle,$$

which requires knowledge of an accurate many-electron wave function. Because of the $1/M$ factor in the expression, the specific mass shift also decreases roughly as $1/M$ as the mass of the nucleus increases, just like the normal mass shift.
Volume effects
The volume difference (field shift) dominates the isotope shift of heavy elements. This difference induces a change in the electric charge distribution of the nucleus. The phenomenon was described theoretically by Pauli and Peierls. Adopting a simplified picture, the change in an energy level resulting from the volume difference is proportional to the change in total electron probability density at the origin times the mean-square charge radius difference.
For a simple nuclear model of an atom, the nuclear charge is distributed uniformly in a sphere of radius $R = r_0 A^{1/3}$, where $A$ is the atomic mass number and $r_0$ is a constant.
Similarly, calculating the electrostatic potential of such an ideal charge density uniformly distributed in a sphere, the nuclear electrostatic potential energy of an electron inside the nucleus ($r \le R$) is

$$V(r) = -\frac{Ze^2}{4\pi\varepsilon_0}\,\frac{1}{2R}\left(3 - \frac{r^2}{R^2}\right),$$

while outside the nucleus it is the ordinary Coulomb potential. When the unperturbed Hamiltonian is subtracted, the perturbation is the difference between the potential in the above equation and the point-charge Coulomb potential $-\frac{Ze^2}{4\pi\varepsilon_0 r}$:

$$H' = -\frac{Ze^2}{4\pi\varepsilon_0}\left[\frac{1}{2R}\left(3 - \frac{r^2}{R^2}\right) - \frac{1}{r}\right] \quad (r \le R), \qquad H' = 0 \quad (r > R).$$
Such a perturbation of the atomic system neglects all other potential effects, such as relativistic corrections. Using perturbation theory (quantum mechanics), the first-order energy shift due to this perturbation is

$$\Delta E = \langle \psi | H' | \psi \rangle = \int |\psi(\mathbf{r})|^2\, H'(r)\, d^3 r.$$

The wave function has radial and angular parts, but the perturbation has no angular dependence, so the normalized spherical harmonics integrate out over the unit sphere:

$$\Delta E = \int_0^{R} |R_{n\ell}(r)|^2\, H'(r)\, r^2\, dr.$$
Since the radius of the nucleus is small, within such a small region ($r \le R$) the approximation $R_{n\ell}(r) \approx R_{n\ell}(0)$ is valid. At $r = 0$ only the s sublevels ($\ell = 0$) remain non-zero, so $|R_{n0}(0)|^2 = 4\pi\,|\psi_{n00}(0)|^2$. Integration gives

$$\Delta E = \frac{Ze^2}{4\pi\varepsilon_0}\,\frac{4\pi}{10}\,|\psi(0)|^2 R^2 = \frac{Z e^2}{10\,\varepsilon_0}\,|\psi(0)|^2 R^2.$$
The explicit form of the hydrogenic wave function at the origin, $|\psi_{n00}(0)|^2 = \frac{Z^3}{\pi a_0^3 n^3}$, gives

$$\Delta E = \frac{Z^4 e^2 R^2}{10\pi\,\varepsilon_0\, a_0^3\, n^3}.$$
In a real experiment, the difference of this energy shift between different isotopes, $\delta(\Delta E)$, is measured. These isotopes have a nuclear radius difference $\delta R$. Differentiating the above equation gives, to first order in $\delta R$,

$$\delta(\Delta E) = \frac{2\,Z^4 e^2 R\,\delta R}{10\pi\,\varepsilon_0\, a_0^3\, n^3} = \frac{Z^4 e^2 R\,\delta R}{5\pi\,\varepsilon_0\, a_0^3\, n^3}.$$
This equation confirms that the volume effect is more significant for hydrogenic atoms with larger Z, which explains why volume effects dominate the isotope shift of heavy elements.
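In practice, the two contributions discussed above are often combined in a standard parametrization (given here as a summary, not quoted from the text above): for a transition observed in two isotopes with nuclear masses $M_1$ and $M_2$,

$$\delta\nu^{(1,2)} = K\left(\frac{1}{M_1} - \frac{1}{M_2}\right) + F\,\delta\langle r^2\rangle^{(1,2)},$$

where the mass-shift constant $K$ collects the normal and specific mass shifts, $F$ is the field-shift factor proportional to the change of electron probability density at the nucleus in the transition, and $\delta\langle r^2\rangle$ is the change in the mean-square nuclear charge radius between the isotopes.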
See also
Kinetic isotope effect
Magnetic isotope effect
References
Emission spectroscopy | Isotopic shift | [
"Physics",
"Chemistry"
] | 1,092 | [
"Emission spectroscopy",
"Spectroscopy",
"Spectrum (physical sciences)"
] |
22,306,477 | https://en.wikipedia.org/wiki/Explosion%20welding | Explosion welding (EXW) is a solid state (solid-phase) process where welding is accomplished by accelerating one of the components at extremely high velocity through the use of chemical explosives. This process is often used to clad carbon steel or aluminium plate with a thin layer of a harder or more corrosion-resistant material (e.g., stainless steel, nickel alloy, titanium, or zirconium). Due to the nature of this process, producible geometries are very limited. Typical geometries produced include plates, tubing and tube sheets.
Development
Unlike other forms of welding such as arc welding (which was developed in the late 19th century), explosion welding was developed relatively recently, in the decades after World War II. Its origins, however, go back to World War I, when it was observed that pieces of shrapnel sticking to armor plating were not only embedding themselves, but were actually being welded to the metal. Since the extreme heat involved in other forms of welding did not play a role, it was concluded that the phenomenon was caused by the explosive forces acting on the shrapnel. These results were later duplicated in laboratory tests and, not long afterwards, the process was patented and put to use.
In 1962, DuPont applied for a patent on the explosion welding process, which was granted on June 23, 1964, under US Patent 3,137,937 and resulted in the use of the Detaclad trademark to describe the process. On July 22, 1996, Dynamic Materials Corporation completed the acquisition of DuPont's Detaclad operations for a purchase price of $5,321,850.
The response of inhomogeneous plates undergoing explosive welding was analytically modeled in 2011.
Advantages and disadvantages
Explosion welding can produce a bond between two metals that cannot necessarily be welded by conventional means. The process does not melt either metal; instead it plasticizes the surfaces of both metals, causing them to come into intimate contact sufficient to create a weld. This is a similar principle to other non-fusion welding techniques, such as friction welding. Large areas can be bonded extremely quickly, and the weld itself is very clean, because the surface material of both metals is violently expelled during the reaction.
Explosion welding can join a wide array of compatible and non-compatible metals, with more than 260 metal combinations possible. With traditional welding, the joined components usually must be metals with similar properties. With explosion welding, however, the extremely high velocity at which one component strikes the other allows dissimilar metals to be joined despite differences in their properties. As a result, the bonded composite combines properties of the two original metals, which can provide improved conductivity, strength, and durability. For example, explosion welding is commonly used to join materials such as stainless steel to copper (Blazynski, 1983). The product is a component that combines thermal conductivity with structural stability. Explosion welding thus offers a solution to the difficulty of joining metals with different properties or melting points.
A disadvantage of this method is that extensive knowledge of explosives is needed before the procedure may be attempted safely. Regulations for the use of high explosives may require special licensing.
See also
Magnetic pulse welding
References
Blazynski, T. Z. Explosion Welding of Metals and Its Applications. London: Applied Science Publishers, 1983.
Further reading
L.R. Carl. (1944). "Brass welds made by detonation impulse". Metal Progress 102-103 46 - brief publication on the explosion welding of metallic plates.
US patent 3,137,937 G. R. Cowan, J. Douglas, and A. Holtzman, (1960). "Explosive bonding" - published a patent on the explosive welding process
Welding
Explosions | Explosion welding | [
"Chemistry",
"Engineering"
] | 773 | [
"Welding",
"Mechanical engineering",
"Explosions"
] |